Week Six - Hard-Aware Hack

What did you do this past week?

Half of this week was spent recovering from HackGT. Overcoming the exhaustion from lack of sleep, a one-hour time-zone shift (I don't deal well with time zones), and the 10 lbs of food I gained was hard, but I was eventually able to get enough rest (I'm still struggling with the 10 lbs). Unfortunately, I haven't received my reimbursements for HackGT yet, so I'm down $275, which means I have to rely on free food and another hackathon (TAMUHack) for "living expenses" for the next week or so.

The other half of the week was spent catching up on schoolwork, since the Operating Systems test and project were both wrapping up or due this week. Figuring out synchronization with locks, semaphores, and monitors was not difficult, but memorizing several PowerPoint decks, each 50-60 slides long, was arduous.

Eventually though, after finishing the first Operating Systems test, I was able to take a mental break by working on my PintOS project (or waiting on the PintOS fairy). Unfortunately, the PintOS fairy did not come in time, so my team was unable to finish the project (how stressful), but we did finish the design documents for the code implementation, so fingers crossed for a lot of partial credit.

TAMUHack

Additionally, this weekend I decided to venture out to another hackathon, TAMUHack, located at A&M. For this event, I had formed a team several weeks ahead of time with people I knew, so I didn't have to scramble to find a team at the last minute (though meeting new people is cool too). In doing so, we were able to spend more time formulating ideas and researching APIs (Twilio, AWS, Microsoft Azure, Google Translate, etc.) and CS fields we could go into (machine learning, virtual reality, hardware hacks…). This was a good break from the usual mobile application route many programmers take at hackathons (at least for a beginner like myself).

Traveling by bus, we arrived at A&M, innocent and superficially ready for the next 24 hours of hacking. Though we did not have a concrete idea yet, our plan was to scrounge through the company prize specifications and create a cool hack that incorporated as many company APIs as possible. While we were wandering the venue thinking of ideas after the opening ceremony, I had a sudden urge to work with hardware. We decided to check out an Amazon Echo, an Arduino 101 set, and some hardware sensors and lights.

Working with Hardware

After acquiring the hardware and going through a mentor's crash course on Arduino setup and working with a tracking sensor, we noticed that the NSA had a cool hardware gadget called an Intel Smart Glove. It was connected to an Arduino Yun, had 16+ sensors on the glove (accelerometer g-force, finger pressure and flex, gyroscope voltage, magnetometer voltage, etc.), and came with some sample Arduino and Python code for reading its output. The NSA graciously allowed us to check out the equipment, and made me go through the formalities of handing over my contact information so they could reach me if the glove got damaged or stolen (even though they already have access to all my personal information…).

Data Visualization

Before we had a concrete idea, we wanted to visualize the data the Smart Glove output so we could understand which values we would actually need, based on their variance and on how they changed with the movements and hand gestures the glove captured (a data visualizer for others, a live debugger for ourselves). The sample Arduino code produces raw, unreadable output, so Alfred pipelined it through the sample Python code, which emits JSON-formatted strings, and then into a Java program that parsed the JSON and displayed it on a Java UI monitor.
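A minimal sketch of that pipeline idea is below. The script name, JSON field names, and class name are placeholders I'm assuming for illustration, not the glove's actual sample code, and it just prints values where our real Java UI plotted them.

```java
import org.json.JSONObject; // assumes the org.json library is on the classpath

import java.io.BufferedReader;
import java.io.InputStreamReader;

// Hypothetical sketch: launch the Python reader script, stream its JSON lines
// into Java, and print each field (a real UI would plot these instead).
public class GloveMonitor {
    public static void main(String[] args) throws Exception {
        // "read_glove.py" is a placeholder name for the glove's sample Python script.
        Process py = new ProcessBuilder("python", "read_glove.py").start();
        BufferedReader in = new BufferedReader(new InputStreamReader(py.getInputStream()));

        String line;
        while ((line = in.readLine()) != null) {
            JSONObject frame = new JSONObject(line);
            // Field names are assumptions about the JSON keys, not the real schema.
            double gForceY   = frame.optDouble("gForceY", 0.0);
            double indexFlex = frame.optDouble("indexFlex", 0.0);
            System.out.printf("gForceY=%.3f  indexFlex=%.1f%n", gForceY, indexFlex);
        }
    }
}
```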

Initial Idea - Voiceless Communication with Voice-driven Alexa

As CS students, we had never developed with hardware before, yet being adventurous, we decided to pursue an idea way beyond our scope. Our idea was to use the Smart Glove to send gestures to the Amazon Echo, such as raising or lowering the volume or playing music, allowing remote control of the Echo in a loud environment. In other words, our goal was to find a way to communicate with a voice-driven device without using any sound.

We researched Alexa skills and found that invocations are formatted as a string of words, starting with "Alexa" and followed by a series of word commands or queries that Alexa processes. Each skill runs under different intents derived from that string, and the intent decides what the skill does, such as restarting, continuing, and so on. These skills can be customized through AWS Lambda, an Amazon Web Service that provides serverless function calls and lets Alexa skills be created and executed there. Unfortunately, after grasping the Alexa life cycle, we realized the project was not feasible, because the current development environment for Alexa only allows sound-based input to reach an intent, not string-based commands (e.g. sending the text "Alexa, play an audiobook" programmatically). A workaround would be to use Google Translate to speak commands aloud to the Alexa device (it does work occasionally), but unfortunately that would defeat the purpose of voiceless communication.
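For reference, here's roughly what the intent dispatching we learned about looks like in a skill's Lambda backend. This is a hedged sketch, not our actual skill: the handler class, the "PlayBeatIntent" name, and the responses are made up for illustration.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.HashMap;
import java.util.Map;

// Rough sketch of a custom-skill Lambda: Alexa sends a JSON request whose
// "intent" name decides what the skill does; we answer with plain-text speech.
public class DrumSkillHandler implements RequestHandler<Map<String, Object>, Map<String, Object>> {

    @Override
    @SuppressWarnings("unchecked")
    public Map<String, Object> handleRequest(Map<String, Object> event, Context context) {
        Map<String, Object> request = (Map<String, Object>) event.get("request");
        Map<String, Object> intent =
                (Map<String, Object>) request.getOrDefault("intent", new HashMap<String, Object>());
        String intentName = (String) intent.getOrDefault("name", "");

        // "PlayBeatIntent" is a made-up example intent for this sketch.
        String speech = intentName.equals("PlayBeatIntent")
                ? "Starting the drum loop."
                : "Sorry, I did not catch that.";

        Map<String, Object> outputSpeech = new HashMap<>();
        outputSpeech.put("type", "PlainText");
        outputSpeech.put("text", speech);

        Map<String, Object> response = new HashMap<>();
        response.put("outputSpeech", outputSpeech);
        response.put("shouldEndSession", true);

        Map<String, Object> body = new HashMap<>();
        body.put("version", "1.0");
        body.put("response", response);
        return body;
    }
}
```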

Final Idea - Virtual Air Drum set

It was near the middle of the night, and the crisis of our previous idea making no notable progress after 5-6 hours really hit us, since we had thought we'd found a golden project to work on. Fortunately, one of the mentors from Capital One, Shane, had been helping us a lot with the Alexa work, and had been building an Alexa skill in AWS Lambda himself. The skill involved having a beat repeat itself in order to provide drum set functionality. It was cool, but we couldn't see how that could possibly fit with our wearable hand sensor.

Until everything suddenly clicked.

We could do an air drum set!

Using the Smart Glove, and taking voltage data from the finger pressure and flex sensors, we could assign different gestures to unique drum set sounds, such as a snare drum, crash cymbal, and bass drum. Then we could use the change in g-force between the current and previous accelerometer readings to detect whether a hand movement was signaling a sound to play. By combining both of these checks within a threaded program, we were able to create a working demo of a virtual air drum set.

Calculating Gestures

Initial solution: We had three gestures: snare drum, crash cymbal, and bass drum. Initially, I tried to use the finger flex voltage data to detect when the hand was in an open or closed position. The flex reading starts at around 1015 when the finger is unflexed, and as the finger flexes more, the voltage decreases. As cool as this approach was, the flex voltage unfortunately had too narrow a range, providing a very constrained set of values to build gestures on.

Final solution: Gestures were eventually calculated from the finger pressure applied. The pressure voltage starts at 0, and as the finger applies more pressure, the voltage rises abruptly and noticeably. This gave a wide range, and values fell distinctly on either the low or high end of the spectrum.
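A rough sketch of that gesture check is below. The threshold value, field names, and the mapping of fingers to drum sounds are assumptions for illustration; the real cutoffs were tuned against the live Java UI.

```java
// Hedged sketch of gesture classification from finger-pressure voltages.
public class GestureClassifier {

    public enum Gesture { SNARE, CYMBAL, BASS, NONE }

    // Assumed cutoff between "low" and "high" pressure voltage (placeholder value).
    private static final double PRESSED = 500.0;

    public static Gesture classify(double indexPressure, double middlePressure, double thumbPressure) {
        boolean index  = indexPressure  > PRESSED;
        boolean middle = middlePressure > PRESSED;
        boolean thumb  = thumbPressure  > PRESSED;

        // The finger-to-drum mapping here is made up, not our tuned mapping.
        if (index && middle) return Gesture.CYMBAL;
        if (index)           return Gesture.SNARE;
        if (thumb)           return Gesture.BASS;
        return Gesture.NONE;
    }

    public static void main(String[] args) {
        System.out.println(classify(900, 120, 30)); // SNARE
        System.out.println(classify(900, 850, 30)); // CYMBAL
    }
}
```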

Calculating Moments of Sound

Initial solution: After deciding which hand gestures would activate which sounds, I worked on calculating the change in the accelerometer Y value, based on the current and previous readings. Taking the difference, I would check whether the value fell above, below, or within the noise band, and set the change (delta) to 1, -1, or 0 accordingly. The problem with the accelerometer was the amount of noise coming from the sensor: even when the Smart Glove lay still on the table, our Java UI showed the data constantly changing and fluctuating.

Final solution: To solve this, I changed the calculated value from the accelerometer Y value to the g-force Y value. This was a double value that was much more resistant to noise, leaving a smaller noise band to disregard.
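Put together, the hit detection looked roughly like the sketch below: a noise band around zero, a +1/-1/0 delta, and a sound triggered when the delta flips to a downward swing. The noise threshold, the trigger direction, and the sample values are placeholders; the real numbers came from watching the UI.

```java
// Hedged sketch of the "moment of sound" check using the g-force Y delta.
public class HitDetector {

    // Assumed noise band; readings inside it are treated as the jitter we saw
    // even when the glove sat still on the table.
    private static final double NOISE = 0.15;

    private double previousGForceY = 0.0;

    /** Returns +1, -1, or 0 for the direction of change since the last sample. */
    public int delta(double gForceY) {
        double diff = gForceY - previousGForceY;
        previousGForceY = gForceY;
        if (diff >  NOISE) return  1;
        if (diff < -NOISE) return -1;
        return 0;
    }

    public static void main(String[] args) {
        HitDetector detector = new HitDetector();
        double[] samples = { 0.0, 0.05, 0.9, 0.1, 0.0 }; // fake g-force Y readings
        for (double g : samples) {
            if (detector.delta(g) == -1) {
                // In the demo, this is where the current gesture's drum sound would play.
                System.out.println("downward swing -> play the current gesture's sound");
            }
        }
    }
}
```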

Presentations/End Product

In the end, we had a live demo to show everyone at the hackathon demo presentations, and I looked forward to spending quality time with people who wanted to understand our hack.

Before, I used to think about presentations in two categories, quantity or quality. I believed that the more people I got to talk to, the better my chances of winning a hackathon (because a big reason people go to hackathons is to win). That came from last weekend at HackGT, where the project right next to mine (a virtual reality Lego builder game) took up all the attention. I never got to speak to anyone except the required judges, and even then, their attention span seemed too short to care about the demo I had worked on.

I realized this weekend how short-sighted that was. In fact, quality time is probably one of the best things to have when sharing a hack. Not only is there mutual interaction in trying to teach and understand the hack, it also shows there is genuine interest in what you created. And even if crowds of people are not flocking to your hack, you know it has some value, and you realize that all that sleep deprivation and those espresso-shot pills were worth it.

Of course, having both quality and quantity is best, and it seems the hackathon projects that get both usually win.

Overall, I really enjoyed the hack I got to work on. Video Demo - Devpost link - Github repo (you can't really do much with building the repo unless you have the same hardware).

Most of all, considering past experiences where I never really got to be fully engaged with all my teammates in creation, preparation, and presentation, I found that this team met and exceeded all my expectations for a hackathon, and I hope to have similar if not better experiences in the future. Thanks guys! (Shoutout to Alfred Zhong, Harold Jackson, and Jake Heald for an awesome weekend. Get some rest, guys!)

What’s in your way?

Balancing designing ideas against coding them out. For PintOS, too much time was spent either designing a simple code solution or staying stuck on a specific code segment because of bugs.

In addition, looking through Object-Oriented Programming's Test #1 format, I realize there are a lot of functions that need to be covered and studied, and I will need to devote time to learning and writing implementations for the given C++ structures and functions. Furthermore, coding in Canvas will suck.

OOP Class Impressions

It's really interesting that at times during the lesson, Downing tells us to stop what we're doing and implement a function or struct that he has just been going over theoretically. This gives good hands-on exposure to working with C++ and with the lessons he teaches, and it really challenges us to check our understanding of the material and apply it when asked.

Regrettably, in some moments when we're asked to apply our knowledge, it isn't fully sufficient for every case, such as on Quiz #14 this past Friday. Even though the material covered constructors and the different kinds of constructors a class invokes, the answer format for the questions was hard to interpret. Luckily, Downing noticed this, gave us full credit for the questions we couldn't apply our knowledge to, and encouraged us to review the material on the constructors invoked when creating a given class.

What will you do next week?

I'm going to try to get some rest. Having a second hackathon in two consecutive weeks really drains my energy, so I'm going to try to get my sleep schedule settled (in preparation for HackTX).

Also, I ended up splitting from my partner for Operating Systems. I really enjoyed working with my partner, but I found that as a two-person team it would be hard for us to find one more person to join us (since teams are 2-3 people for Project 1). Merging with another two-person team would be even more difficult, since we would not only have to account for the open spots in our own schedules but also connect with two other partners whose schedules might not be fully compatible with ours. By splitting up, it becomes much easier to find a 2-3 person team in need of another member, and at least on my side, it is easier to join a group that already has a set schedule.

In addition, I hope to study for OOP. There seems to be a lot of deep understanding involved in C++ implementation, so I will try to find a study group to work with and throw ideas and code around with, in order to perform well on Thursday's test.

Finally, I will be teaching social dance this coming Saturday, so I will need to find open times to meet with my co-teacher, Ruth, to figure out how the heck the two-step should be taught to people.

Tip of the Week

I mentioned a lot of cool APIs to work with at the beginning of this post. If you have time, go check them out.

HackGT had some pretty cool winners for their hackathon last week. I really enjoyed DeepWatch.

Also, two hacks I mentioned in the blog post were really cool. Youtube SubSearcher and Virtual Therapy. Definitely awesome hacks that I wish I could’ve done.

Also, we found a different version of our project that’s way better executed than our hack. Check out their video demo: AirDrum 1.0. Here’s their Github repo.

Written on October 2, 2016