A couple of weeks ago, my daughter Lucy and I trained up a Harry Potter-inspired 'sorting hat' (see prior blog). If you're not familiar with the details: it's a hat that (in the movies) sits on a new student's head and pronounces them best suited for one of four school 'houses' - brave Gryffindor, brainy Ravenclaw, loyal Hufflepuff, and nasty Slytherin.
- WATSON DEVELOPER CLOUD - code, docs and services: http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/
- CODE (R) - basic IBM Watson interface from R - https://github.com/rustyoldrake/R_Scripts_for_Watson - includes the NLC interface used to train the classifier below
- NLC GROUND TRUTH USED - https://github.com/rustyoldrake/R_Scripts_for_Watson/blob/master/sortinghat.csv - note there are Big Five and Needs/Values traits mapped here from IBM Watson Personality Insights
- SHD CODE - PI/Arduino - https://github.com/rustyoldrake/sorting_hat_shd_2015 and presentation materials (not complete)
Hardware We Used
- HAT CHASSIS: An old (adult) bike helmet + a round piece of plywood with a hole just smaller than the helmet + stiff wire for the pointy bit (I used a tomato cage from the garden)
- FABRIC: An old black t-shirt is good and flexible. You can also augment with "off cuts" from the fabric store.
- RASPBERRY PI 2B - https://www.raspberrypi.org/products/raspberry-pi-2-model-b/
- ARDUINO YUN - with WiFi (you can probably do everything with the RPi alone; we had lots of people and Arduinos, so we went this way: more modular, but it adds complexity) - https://www.arduino.cc/en/Main/ArduinoBoardYun ; MOTOR CONTROL - someone had a motor control board, so the Arduino could drive enough current to the mouth motor. The steppers were fine with direct drive.
- JAWBONE or AMPLIFIED SPEAKER - I used a conference-swag speaker with an audio jack input and USB power - https://jawbone.com/speakers/jambox/overview - you can ALSO use old PC speakers with a 12-volt power supply
- MOUTH MOTOR - hacked from a DVD drive out of an old computer - the little motor that drives the read/laser head turns out to be DC (not a stepper) and is happy being tossed around by low-voltage DC
- EYEBROW MOTORS - steppers - driven by https://www.arduino.cc/en/Reference/Stepper
- POWER CUBES (portable) - we used some 'swag' again: phone rechargers with USB charge ports. They worked OK-ish, though a few times they seemed to reset, possibly due to over-current. We used two to share the load - one for the Arduino steppers and the other for the Pi and audio.
- LED / EYES - we hacked apart some clear spheres, cut some white cardboard, and poked in some LEDs - white/blue, green, red, and yellow. The yellows typically are not bright, so we did not use them.
- And WIFI of course, to hit remote cloud / APIs
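For the LED eyes, the Pi just needs to switch one color on per house. Here is a minimal sketch of that wiring logic; the GPIO pin numbers and the house-to-color mapping are illustrative assumptions, not the exact wiring from the hack day, and the `RPi.GPIO` calls only run on an actual Pi.

```python
# Sketch of the LED-eye logic on the Raspberry Pi.
# Pin numbers and house-to-color mapping are hypothetical.

# Assumed BCM pin for each LED color
COLOR_PINS = {"red": 17, "green": 27, "blue": 22, "white": 23}

# House -> eye color. The yellow LEDs were too dim, so
# Hufflepuff gets white here as a stand-in.
HOUSE_COLORS = {
    "Gryffindor": "red",
    "Slytherin": "green",
    "Ravenclaw": "blue",
    "Hufflepuff": "white",
}

def pins_for_house(house):
    """Return the GPIO pin(s) to switch on for a given house."""
    return [COLOR_PINS[HOUSE_COLORS[house]]]

def light_eyes(house):
    # Only works on a Pi with the RPi.GPIO package installed.
    import RPi.GPIO as GPIO
    GPIO.setmode(GPIO.BCM)
    for pin in COLOR_PINS.values():
        GPIO.setup(pin, GPIO.OUT)
        GPIO.output(pin, GPIO.LOW)   # all eyes off first
    for pin in pins_for_house(house):
        GPIO.output(pin, GPIO.HIGH)  # light the house color
```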
Natural Language Classifier (NLC)
We trained and tested the NLC - posting here - https://dreamtolearn.com/ryan/data_analytics_viz/97 - and had a little fun classifying the presidential candidates and a few other notables from history - below:
- Hillary Clinton - Ravenclaw (92%)
- Joe Biden - Gryffindor (69%)
- Donald Trump - Gryffindor (48%)
- Ben Carson - Ravenclaw (65%)
- Stephen Hawking - Ravenclaw (97%)
- Jimmy Carter - Hufflepuff (54%)
- Pete Seeger - Gryffindor (77%)
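Calling the trained classifier from code is a single HTTP POST. The sketch below follows the shape of the Watson NLC v1 REST API (classify endpoint, `classes` list with `class_name` and `confidence` fields); the credentials and classifier ID are placeholders you'd get from your own Bluemix service instance.

```python
# Sketch of classifying text against a trained Watson NLC instance.
# Endpoint shape follows the NLC v1 REST API; credentials and
# classifier_id are placeholders.

NLC_URL = "https://gateway.watsonplatform.net/natural-language-classifier/api/v1"

def classify(text, classifier_id, username, password):
    import requests  # third-party; pip install requests
    r = requests.post(
        "{}/classifiers/{}/classify".format(NLC_URL, classifier_id),
        json={"text": text},
        auth=(username, password),
    )
    r.raise_for_status()
    return r.json()

def top_house(response):
    """Pick the highest-confidence class from an NLC response."""
    best = max(response["classes"], key=lambda c: c["confidence"])
    return best["class_name"], best["confidence"]
```

So a 92% Ravenclaw result like Hillary Clinton's above is just `top_house()` applied to the returned `classes` list.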
So we went into the hackathon WITH a 'brain' already complete, but with no mouth, ears, or eyebrows.
The Science Hack Day team did a marvelous job of making the code come to life with animatronics - stepper motors for the eyebrows, a moving mouth (which I hacked from an old DVD drive motor), a voice (Jawbone-style speaker), and custom-built LED eyes made by busting open some swag.
User Experience + Technical Sequence
- NEW HOGWARTS STUDENT approaches the hat and marvels at its awesomeness, then puts it on (there is a chin strap, as it weighs about 12 pounds)
- TRIGGER - a microswitch in the helmet and/or a hand switch is keyed to start the sequence
- ANNOUNCE AND PROMPT - the hat sings an excerpt from its song, declares its identity, and then prompts the user to talk about themselves
- HAT LISTENS / USER MIC INPUT - the user talks; the voice is captured by a USB microphone attached to a Raspberry Pi and sent up as a WAV or FLAC file
- IBM WATSON - Speech to Text - Watson STT is called and translates the audio file into a TRANSCRIPT of the student's utterance.
- IBM WATSON - Natural Language Classifier (NLC) - the transcript is sent to the second of the two IBM services: the NLC trained by Harry Potter #1 fan Lucy. The response covers the four "houses" of Hogwarts - brave Gryffindor, smart Ravenclaw, loyal Hufflepuff, and shrewd/evil Slytherin - each ascribed a confidence %, often >95% with typical inputs.
- RASPBERRY PI QUARTERBACKS - three events happen on this sort event: (a) WAV files are played to ANNOUNCE the house, (b) the LED eyes are turned the appropriate color, and (c) the animatronics are engaged - eyebrow and mouth movements that map to the WAV
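The quarterback step above can be sketched as one small dispatch on the Pi. The announcement file names, the `aplay` playback, and the animation cue sent to the Arduino are all illustrative assumptions; only the three-way fan-out (announce, eyes, animatronics) is from the actual build.

```python
# Sketch of the Pi "quarterback": given the winning house from the
# NLC, fan out the three actions. File names and the animation cue
# are hypothetical stand-ins for what we actually wired up.
import subprocess

def sort_actions(house):
    """Map a house to (announcement wav, eye color, animation cue)."""
    colors = {"Gryffindor": "red", "Slytherin": "green",
              "Ravenclaw": "blue", "Hufflepuff": "white"}
    return ("announce_{}.wav".format(house.lower()),  # (a) announce
            colors[house],                            # (b) LED eyes
            "talk")                                   # (c) animatronics

def perform(house):
    wav, color, cue = sort_actions(house)
    subprocess.call(["aplay", wav])  # play announcement on the speaker
    # set_eye_color(color)           # hypothetical GPIO helper
    # send_to_arduino(cue)           # hypothetical serial cue to the Yun
```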
Latency is mostly due to WiFi delays (there were about 200 hackers on the GitHub guest WiFi).
Demonstration at Event:
Science Hack Day Team:
@Science Hack Day San Francisco 2015
Harry Potter characters, ideas, images, and references are the property of J.K. Rowling and Warner Brothers Studios. The opinions and information expressed here are my own, and not my employer's.
Getting Started with R Programming Language and IBM Watson APIs:
About this blog
Data Analytics & Visualization Blog - Generating insights from Data since 2013
Created: July 25, 2014