I recently participated in FRC (the FIRST Robotics Competition) by joining RFactor, a team based in Mumbai. Since I lived in Goa, I could only travel to Mumbai during my holidays: Diwali, Christmas and the summer break. Despite this constraint, I spent 16-18 hours a day at the lab, and with that commitment I was able to contribute significantly to the team in programming, CAD and prototyping.
Viraj and I managed the entire programming side together and coded the robot's swerve drive. We used PathPlanner and YAGSL for this and were able to build a highly accurate swerve. Using PathPlanner meant our autonomous routines followed their paths precisely and could correct the robot's position at any moment using accurate odometry.
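To give a feel for what the odometry underneath is doing, here is a minimal sketch of how robot-relative chassis speeds get integrated into a field-relative pose. This is a simplification for illustration: on the real robot, YAGSL and WPILib handle this from the individual swerve module states, and the numbers here are not our robot's.

```java
// Minimal 2D odometry sketch: integrate robot-relative chassis speeds
// into a field-relative (x, y, heading) pose. WPILib/YAGSL do this from
// real swerve module states; this just shows the idea.
public class SimpleOdometry {
    double x, y, headingRad; // field-relative pose

    /** vx, vy in m/s (robot-relative), omega in rad/s, dt in seconds. */
    public void update(double vx, double vy, double omega, double dt) {
        // Rotate the robot-relative velocity into the field frame.
        double fieldVx = vx * Math.cos(headingRad) - vy * Math.sin(headingRad);
        double fieldVy = vx * Math.sin(headingRad) + vy * Math.cos(headingRad);
        x += fieldVx * dt;
        y += fieldVy * dt;
        headingRad += omega * dt;
    }
}
```

Because the pose is always up to date, the path follower can compare where the robot is against where the path says it should be and correct on every loop.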
This meant we could change the autonomous path right before a match and adapt it to our alliance partners. Thanks to the accurate odometry and PathPlanner, our autonomous routines were extremely reliable and consistently delivered points. We ran 1+1 or 2+1 note paths depending on our alliance and how many notes our partners were planning to score.
We also used computer vision, in the form of a Limelight and PhotonVision running on a Raspberry Pi. We ran note-detection algorithms on the Limelight using a Google Coral accelerator, and AprilTag detection on the Raspberry Pi with PhotonVision. We then wrote alignment code to line the robot up with a note for intaking, and with the speaker before shooting.
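The core of that alignment code can be sketched as a simple proportional controller on the horizontal angle the camera reports (tx on a Limelight). The gain, deadband and output cap below are illustrative values for the sketch, not our tuned constants:

```java
// Minimal proportional-alignment sketch: turn the drivetrain toward a
// target using the horizontal angle error (tx, in degrees) reported by
// the camera. kP, the deadband and the cap are illustrative values.
public class AlignToTarget {
    static final double KP = 0.03;         // turn power per degree of error
    static final double DEADBAND_DEG = 1.0; // close enough: stop turning
    static final double MAX_OUTPUT = 0.5;   // cap on the rotation command

    /** Returns a rotation command in [-MAX_OUTPUT, MAX_OUTPUT]. */
    public static double rotationCommand(double txDegrees) {
        if (Math.abs(txDegrees) < DEADBAND_DEG) return 0.0; // aligned
        double out = KP * txDegrees;
        return Math.max(-MAX_OUTPUT, Math.min(MAX_OUTPUT, out));
    }
}
```

The same structure works for both cases: for note intaking the error comes from the note-detection pipeline, and for shooting it comes from the AprilTag on the speaker.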
Since the cameras gave us only the angles to the target, we had to use trigonometry to convert those angles into the distance the robot should travel.
Here is the trigonometry we used to convert the camera's angle to the AprilTag into the angle the robot needed. However, we later changed this approach: we tilted the camera upwards and instead calculated the distance between the camera and the AprilTag from the vertical angle.
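The tilted-camera approach can be sketched with the standard height-and-angle formula: the horizontal distance follows from the height difference between the tag and the lens, divided by the tangent of the total vertical angle (camera mounting pitch plus the angle the camera reports). The heights and mounting angle below are illustrative, not our robot's actual measurements:

```java
// Sketch of the camera-angle-to-distance trigonometry for a camera
// tilted upward at an AprilTag:
//   d = (h_tag - h_camera) / tan(cameraPitch + ty)
// All constants here are illustrative placeholders.
public class TagDistance {
    static final double CAMERA_HEIGHT_M = 0.30; // lens height above floor
    static final double TAG_HEIGHT_M = 1.45;    // AprilTag centre height
    static final double CAMERA_PITCH_DEG = 25.0; // upward tilt of camera

    /** tyDegrees: vertical angle to the tag reported by the camera. */
    public static double horizontalDistanceM(double tyDegrees) {
        double totalAngle = Math.toRadians(CAMERA_PITCH_DEG + tyDegrees);
        return (TAG_HEIGHT_M - CAMERA_HEIGHT_M) / Math.tan(totalAngle);
    }
}
```

A quick sanity check: with these placeholder numbers, a reported angle of 20 degrees makes the total angle 45 degrees, so the distance equals the height difference, 1.15 m.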
In the end our code would do this: 1. look at the speaker, 2. drive right up to the speaker (to the perfect distance for shooting a note), 3. stop and wait for the driver to give the shoot command.
Apart from these two major aspects of our code, we also coded all the subsystems and mechanisms on the robot: the intake, loader, shooter and arm. We interlinked these mechanisms so they worked together. For example, when activated, the intake would run at the same time as the loader; once the note was intaken and reached the shooter's limit switch, the loader and intake would turn off and the note would be held inside the shooter. When the driver wanted to shoot, the shooter would rev up to maximum speed, and once it got there the loader would automatically feed the note out.
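That intake/loader/shooter handoff can be sketched as a small state machine. The state names and the target RPM here are illustrative; on the real robot the limit switch and motors are WPILib hardware objects and this logic runs in the subsystems' periodic loop:

```java
// Sketch of the intake/loader/shooter handoff as a state machine.
// States and TARGET_RPM are illustrative, not the real constants.
public class NoteHandler {
    enum State { INTAKING, HOLDING, REVVING, SHOOTING }
    State state = State.INTAKING;
    static final double TARGET_RPM = 5000; // illustrative rev-up target

    /** Call periodically with the latest sensor and driver inputs. */
    public void update(boolean noteAtLimitSwitch, boolean shootRequested,
                       double measuredRpm) {
        switch (state) {
            case INTAKING: // intake and loader both running
                if (noteAtLimitSwitch) state = State.HOLDING;
                break;
            case HOLDING:  // intake and loader off, note held in shooter
                if (shootRequested) state = State.REVVING;
                break;
            case REVVING:  // shooter spins up toward full speed
                if (measuredRpm >= TARGET_RPM) state = State.SHOOTING;
                break;
            case SHOOTING: // loader feeds the note into the shooter
                break;
        }
    }

    public boolean intakeOn() { return state == State.INTAKING; }
    public boolean loaderOn() {
        return state == State.INTAKING || state == State.SHOOTING;
    }
}
```

Structuring the interlock this way means each mechanism only needs to ask which state the robot is in, rather than checking every other mechanism directly.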
We also cleaned up and polished the code to be efficient and easy to use, so that future members of the team can build on it.
Computer Aided Design:
As for the CAD, Viraj and I redesigned the robot's entire CAD model, made it accurate and produced renderings of it.
Initially the CAD was completely unorganised, with many duplicate files, broken inter-part links and other inaccuracies. We rebuilt the entire model from scratch in an efficient, orderly manner, and used the opportunity to finalise all the measurements, lengths and specific angles so that the robot could intake notes and feed them into both the speaker and the amp.
We also simplified and modularised the mechanisms, making them easier to manufacture while keeping them strong and efficient. For example, I designed the shooter to be built from seven metal L-brackets and metal plates, making it sturdy enough to withstand the high RPM of the shooter's motor.
We also produced many renders of the CAD for the engineering handbook.
Apart from this, we were also part of the prototyping and building stage. We tested early versions of the shooter and made sure all the mechanisms fit and worked together. We also did the electricals several times on the test versions of the robot and contributed significantly to the wiring of the final robot.