The camera captures the motion of the signer's hand gestures, translates them into spoken language, and speaks the translated words aloud.
(the camera capturing the image)
You can pre-choose what kind of voice you'd like it to speak in. I'd pick that of George Clooney. 🙂
This work is licensed under a
Creative Commons Attribution 3.0 Unported License.
Hey, I ran across your blog and thought this was a pretty cool idea; I really like design that's geared toward the disabled. It'd be really cool if, when the camera detected movement, it would move along with your hands. I know there are some signs, like "thank you," where you need to use your chin. Who is your audience, exactly? Is it strictly for people with speaking disabilities, or can you wear it because someone around you is speaking disabled and you want to communicate with them? How far back can it detect movement? I also love that it doesn't look like the typical off-white medical device; it's a sophisticated design that doesn't have to let everyone know you have a disability.
Cool post!
I'm a sign language interpreter. I saw this camera with a mic and it's very wonderful. I hope to learn more about it, and the price to buy it.
Great concept. I really like how simple it is to use.
By the way, deaf people don’t “speak disabled”.
A flat shape that lies against the chest would be better for camera stability while signing.
This would reduce errors and decrease the amount of processor power required to run the image-recognition software.
Also, the device could be connected by Bluetooth to a smartphone in the user's pocket or purse. It's possible that some smartphones on the market now have enough power to run the real-time image-recognition software required.
If no phones currently available have enough processing power, any phone with a wi-fi connection could stream the video to a remote server to be decoded.
And finally, if the output voice can be changed, it's only a small modification to change which language it uses…
Nice post.
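The streaming idea in that comment could work with something as simple as a length-prefixed frame protocol: the phone sends each JPEG-compressed frame to the server, which runs recognition and returns text. This is a minimal sketch of such a protocol; the message format and the `pack_frame`/`unpack_frames` names are hypothetical illustrations, not part of any real product.

```python
import struct

# Hypothetical wire format for streaming frames to a recognition server:
# a 4-byte big-endian length header followed by the JPEG bytes of one frame.

def pack_frame(jpeg_bytes: bytes) -> bytes:
    """Prefix a frame with its length so the server can split the stream."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def unpack_frames(stream: bytes):
    """Yield the complete frames contained in a received byte stream."""
    offset = 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        if offset + length > len(stream):
            break  # partial frame; wait for more data before decoding
        yield stream[offset:offset + length]
        offset += length

# Two fake "frames" round-trip through the protocol.
frames = [b"\xff\xd8frame-one\xff\xd9", b"\xff\xd8frame-two\xff\xd9"]
wire = b"".join(pack_frame(f) for f in frames)
assert list(unpack_frames(wire)) == frames
```

Length-prefixing matters here because TCP delivers a byte stream, not discrete messages, so the server needs the header to know where one frame ends and the next begins.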
This sounds like a cool idea, but what is the cost? How has the testing gone? And where can I purchase one if I wanted to try it?
It's a cool idea; this means a lot to me, since my uncle is deaf.
However, I suggest the Sign Language Interpreter (SLI) would be better applied in reverse: the camera shouldn't read the signs of the person opposite, but the signs of the person wearing it.
Special sensors could also be applied to the user's hands to improve the accuracy of sign interpretation.
The idea is great, and there's lots of room to extend the feature into other areas. Good job!
If a device like this could be developed, it would probably be best accompanied by some kind of speech-to-text recognition, since on its own it would only provide one-way communication, and communication is a two-way process.
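That two-way idea could be structured as two independent channels in one session: sign-to-speech going out, speech-to-text coming in. Here is a minimal sketch with stub recognizers standing in for the real models; every name in it (`two_way_session`, the stubs, the channel labels) is a hypothetical illustration.

```python
from typing import Callable, List, Tuple

def two_way_session(
    recognize_sign: Callable[[bytes], str],
    recognize_speech: Callable[[bytes], str],
    camera_frames: List[bytes],
    mic_chunks: List[bytes],
) -> List[Tuple[str, str]]:
    """Interleave the two channels into one conversation transcript.

    Each camera frame is voiced aloud for the hearing party; each
    audio chunk is shown as text for the Deaf party.
    """
    transcript = []
    for frame, audio in zip(camera_frames, mic_chunks):
        transcript.append(("voice-out", recognize_sign(frame)))
        transcript.append(("text-display", recognize_speech(audio)))
    return transcript

# Stub recognizers: fixed lookups standing in for the real models.
sign_stub = lambda frame: {b"f1": "HELLO", b"f2": "THANK-YOU"}[frame]
speech_stub = lambda audio: {b"a1": "hi there", b"a2": "you're welcome"}[audio]

log = two_way_session(sign_stub, speech_stub, [b"f1", b"f2"], [b"a1", b"a2"])
assert log[0] == ("voice-out", "HELLO")
assert log[3] == ("text-display", "you're welcome")
```

Keeping the two recognizers as injected callables means the sign model and the speech model can be developed, swapped, or offloaded independently.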
This has my interest! I have a 5-year-old girl and would love to see this product on the market. I don't care what it costs; if it works with minimal errors, it would be just a great product for our deaf! Now, does it work with just the alphabet, or can it interpret other signs as well? Keep up the research on this product!
Thank you Ted!
Seems interesting, but as a human ASL 'terp, I think we're at least a decade or so off from being able to create software that can detect all of the nuances of ASL and the variation in signing styles out there. By comparison, speech-to-text is still a bit glitchy, and that's going from one form of a language to another form of the same language. The goal of this device is to interpret from a visual, spatial, gestural language to an oral/aural one (and, in the case of ASL and English, from an inflected language to a word-order language). Text-to-text machine translation (written to written, one language to another) is also far from accurate at this point. Programs would have to be written to understand all (or many) of the 130 or so known sign languages and ultimately, I guess, to voice in several of the world's spoken languages. As others have pointed out, as described this process only allows a monologue: no feedback, no questions, etc. Even having speech-to-text is limiting if the Deaf person in the conversation isn't as fluent in English (or the spoken language).
Just my thoughts on the concept.
Hey, I want that! Can you give me one?
There is a problem: what about signs that take place outside the camera's field of view? Many, if not most, signs take place on or around the head, or at waist or belly height. A lot of signs also look alike, so I think it would be hard for the technology to see and define the little differences the computer would need in order to pick the right words. Then there's the problem that in ASL there is context that can change the meaning of a sign. Plus, everyone signs differently, so the technology would have to be able to learn as it was being used and store a memory. But who knows? Technology has come a long way, so maybe it would work, but it would take a lot of programming, it would be expensive to make and perfect, and the market is limited.
As a sign language interpreter myself, it would be very cool to try out this technology.
How much would it cost? Brilliant idea, though!
Hope to see many more fantastic pieces like this in the future. You really helped me build on my limited knowledge of this subject.