We won Outstanding Senior Design Project!
During the senior year of my electrical engineering degree, my team and I were assigned a project for the BSU robotics lab. The lab had an old air-hockey robot that had been built as a previous capstone project. Mechanically it worked fine, but the students who had worked on it before had not put much effort into its perception system. My team worked to revamp the robot's vision system entirely.
This picture shows me (on the left, craning my neck) and my team hastily calibrating and debugging our project on showcase day.
Our team worked hard to address the requirements of the robotics professor, Dr. Satici. We had to make the system fast, reliable, and, most difficult of all, modular.
Early on in the project, I volunteered to tackle camera calibration while my teammates took care of image processing and puck detection. I learned that two kinds of calibration were necessary to make the system work: intrinsic and extrinsic.
Intrinsic calibration accounts for how a particular camera sees: how distorted its view is and how wide its field of vision is. It can be accomplished by taking images of a chessboard from a variety of angles and distances and processing them with OpenCV.
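For readers curious what that looks like in code, here is a minimal sketch of the standard OpenCV chessboard routine. The 9×6 corner pattern and the image folder are assumptions for illustration, not our exact setup:

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)  # inner-corner count of the chessboard (assumed here)

# 3D corner positions in the board's own frame; the board is flat, so z = 0.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):  # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Recovers the camera matrix (focal length, principal point) and the
# lens-distortion coefficients from all the detected boards.
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None
)
```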
Extrinsic calibration accounts for the position and rotation of the camera in 3D space. It was required so that multiple cameras could observe the same air-hockey table simultaneously. This sounds difficult, but it only requires a single image of a chessboard.
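A sketch of how that single-image step can look with OpenCV's solvePnP, reusing objp, PATTERN, camera_matrix, and dist_coeffs from the intrinsic sketch above; the image filename is hypothetical:

```python
import cv2

# One view of a chessboard lying flat on the table (hypothetical file).
gray = cv2.imread("extrinsic_view.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(gray, PATTERN)

# rvec and tvec rotate and translate board coordinates into camera
# coordinates -- i.e. they encode the camera's pose relative to the board.
ok, rvec, tvec = cv2.solvePnP(objp, corners, camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)  # same rotation as a 3x3 matrix
```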
Once the cameras were calibrated, the problem I faced was this: now that we had found the puck in the image, how could I relate its pixel coordinates to real-world coordinates on the hockey table? I tried a number of OpenCV functions, but nothing met our specific needs, so I would have to code my own system.
I ended up using a concept called homography. The gist of my solution is this:
The hockey table is my z = 0 plane, and the hockey puck stays on that plane while in play. The camera's pixel grid is another flat plane (this is only true because the camera was intrinsically calibrated). Any ray that originates at the camera's center of projection and passes through the pixel grid also passes through the hockey table at a unique position. The illustration on the left shows that a ray originating from camera 1 and passing through point x also passes through point X.
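Under those assumptions, the pixel plane and the table plane are related by a single 3×3 homography matrix. Here is a minimal sketch with OpenCV, using four made-up landmark correspondences (the coordinates below are illustrative, not measurements from our rig):

```python
import cv2
import numpy as np

# Pixel positions of four table landmarks (e.g. the corners), paired with
# their real-world table coordinates in meters. Illustrative values only.
px = np.array([[102, 80], [538, 84], [545, 410], [98, 405]], np.float32)
table = np.array([[0.0, 0.0], [1.98, 0.0], [1.98, 0.99], [0.0, 0.99]], np.float32)

H, _ = cv2.findHomography(px, table)

def pixel_to_table(u, v):
    """Map an (already undistorted) pixel onto the table plane."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]  # divide out the projective scale
```

Once a homography is known for each camera, every camera reports puck positions in the same table frame, which is what makes a multi-camera setup like ours modular.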
This image is pretty blurry, but it shows that the position seen by the robot and the position I measured with a tape measure match up perfectly.
I was really proud that this worked so well. With our design, the robotics lab could take their robot to different locations and move the camera to best avoid glare. Or they could use a hundred cameras to cover a football field. The possibilities of our modular system were really endless.
Last summer I was able to work a full-time schedule for Multiquip. During this time, I began a new project whose purpose was to provide more visual and numerical insight when collecting vibration data from the HVM 200.
The app is pre-calibrated for the HVM 200, but it can be used for any files of the same format. Updates to support .tdms files are underway.
Vibration information is displayed in a number of different forms, which can be selected by ticking the boxes on the menu. The most succinct option is the Spectral Content graph, which shows the power spectral density for each axis as well as the sum across all axes at every frequency.
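As a rough sketch of what that computation can look like (Welch's method via SciPy; the sample rate, file name, and three-column layout are assumptions, not the HVM 200's actual format):

```python
import numpy as np
from scipy.signal import welch

FS = 1000  # sample rate in Hz -- an assumed value, not the HVM 200 spec

# acc: N x 3 array of acceleration samples, one column per axis (assumed).
acc = np.loadtxt("recording.csv", delimiter=",")  # hypothetical file

f, psd_x = welch(acc[:, 0], fs=FS)
_, psd_y = welch(acc[:, 1], fs=FS)
_, psd_z = welch(acc[:, 2], fs=FS)
psd_total = psd_x + psd_y + psd_z  # summed spectral content across axes
```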
Displacement and Velocity PSDs are also derived from the acceleration data.
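The standard way to derive them is in the frequency domain: each time integration divides the spectrum by j2πf, so a PSD picks up a factor of (2πf)² per integration. Continuing from the snippet above (this shows the general relation, not necessarily the app's exact code):

```python
# Skip f = 0 to avoid dividing by zero at the DC bin.
omega_sq = (2 * np.pi * f[1:]) ** 2
psd_vel = psd_total[1:] / omega_sq        # velocity PSD from acceleration PSD
psd_disp = psd_total[1:] / omega_sq ** 2  # displacement PSD
```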
The section I am most proud of is the displacement path view. It was recently added at the request of the R&D engineers on the team, who saw a potential use case for the app in reducing vibration fatigue.
Once key frequencies have been identified using the spectral content graph, the engineers can use this app to see the actual displacement path of the probe at each of those frequencies. This additional information can help them make intelligent decisions when reinforcing machine parts or choosing an orientation for an oscillating piston.
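One plausible way to compute such a path, sketched under stated assumptions: band-pass the acceleration around the chosen frequency, then double-integrate via the FFT. The bandwidth and filter order below are illustrative choices, not necessarily what the app does:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def displacement_path(acc, fs, f0, half_band=2.0):
    """Isolate the 3-axis acceleration near f0 (Hz), then double-integrate
    each axis in the frequency domain to get a displacement trajectory."""
    sos = butter(4, [f0 - half_band, f0 + half_band],
                 btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, acc, axis=0)  # zero-phase band-pass

    # Double integration via FFT: X(f) = A(f) / (j*2*pi*f)^2
    n = band.shape[0]
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    A = np.fft.rfft(band, axis=0)
    denom = (1j * 2 * np.pi * freqs[:, None]) ** 2
    denom[0] = 1.0  # placeholder; the DC bin is zeroed out next
    A[0] = 0.0      # discard the mean offset
    return np.fft.irfft(A / denom, n=n, axis=0)  # N x 3 displacement samples
```

Plotting the three returned columns against each other traces out the probe's motion at that frequency.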
The app is available for you to try here: https://drive.google.com/drive/folders/1kzl8uUHrbuiPnTkfJjEDUV-N8vsK7dgW?usp=sharing
This app has been published with the permission of Multiquip Inc., but it is intended only for exhibition. Do not use this app for any purpose other than to observe the work that I have completed.
I became interested in acoustics during a project I did in my freshman year. We were assigned a service-learning project intended as our introduction to prototyping, experimentation, and project management.
My group was assigned to a student in middle school who had an issue with motor control in his fingers. Because he was having difficulty typing and writing, he was using speech-to-text software to take notes in class and write essays. Dictating aloud was embarrassing for him and disruptive to other students. Our solution was to create a microphone encasement that would block the sound of dictation from others in the room.
We began with a stub of PVC pipe filled with felt. Felt, it turns out, absorbs sound very well in the frequency range of human speech. This helped to dampen the sound, but too much was still escaping out the back of the tube.
Concerned with the shape of our tube, we cut it in quarters and taped it back together. This did not help at all: there were too many gaps, and it was difficult to fit one's mouth into the smaller tube. But it did lead us to consider not only the absorption of the materials but also the shape of the surfaces the sound waves would bounce off.
We arrived at a solution: a mock anechoic chamber at the end of a horn shape. A horn may seem unintuitive, since we think of horns as loud instruments, but the shape helped diffuse the sound waves throughout the entire padded chamber. The microphone was suspended at the center of the opening where the horn met the chamber. Despite our own doubts, dictation with this device was still just as accurate as it was with a normal microphone.
Our final product did not perform quite as well as our prototype. The prototype had been printed at 25% infill, which added an element of vibration decoupling and reduced the amount of sound that escaped, but it left the part very fragile. The final version used 50% infill for durability.