From Makezine:
Noah Feehan is in an office workroom that's strewn with screwdrivers, a digital oscilloscope, and fume extractors. On a desk sits a half-finished circuit printer, loaded with cartridges of silver nanoparticle ink and ascorbic acid. It prints circuits on paper. Feehan and his co-workers have been floating in and out of the room to work on it.
“I like to organize by project,” he says, gesturing to boxes filled with wires and scrap materials. Of the half-finished circuit printer: “I’d love to print an eight-and-a-half by eleven RF-harvesting antenna. Might be a good opportunity to experiment with algorithmic design.”
Feehan and his co-workers don’t work at a hardware store, electronics workshop, or tech startup. They’re standing on the 28th floor of 620 Eighth Avenue in Manhattan — the office of The New York Times.
Feehan — whose official, LinkedIn-approved, resume-topping title is "Maker" — isn't the only tinkerer who works at the 163-year-old newspaper. Seven other makers populate the Times' R&D Lab, which launched in 2006.
Their mission: to forecast game-changing technology trends that will unfold over the next three to five years. They then build prototypes to envision how those trends will shape media's future, and how they might upend our notion of the communicated word. How will content be delivered? What sort of devices will bridge information and audience? How will platforms change? The idea is not so much to create products based on these questions as to discover what creative director Alexis Lloyd describes as "tangible artifacts of potential futures that have relevance to the Times."
The lab is full of builders, coders, fixers, and the various things they’ve cooked up. “We all come from very different backgrounds, from video art to statistics,” says Lloyd. “We all have a background that sits at the intersection of art or design, technology and critical theory.”
Their latest invention, which they finished last September, is a four-foot-wide table, dotted with 14 capacitive strips, that sits in the middle of the lab, surrounded by stools. This is the “Listening Table”: part transcriptionist, part smart furniture, and part, well, table.
This puppy transcribes, in real time, what people say in meetings, using Android speech recognition. In the middle of the table is a microphone that captures every idea, pitch, suggestion, disagreement, digression, jabber, and joke. Around the edge are eight single-pixel thermal cameras that figure out who’s talking or gesticulating.
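How might eight single-pixel thermal cameras decide who's talking? The Times hasn't published the lab's code, but here is a minimal sketch of one plausible approach: calibrate each sensor against an empty room, then attribute speech to the seat whose reading deviates most from its baseline. Every name and threshold below is illustrative, not from the actual system.

```python
# Hypothetical speaker attribution from eight single-pixel thermal sensors
# ringing the table. Assumes each sensor returns one temperature reading
# per poll; none of these names come from the Times' actual code.

SENSOR_COUNT = 8

def attribute_speaker(readings, baselines, threshold=0.5):
    """Pick the seat whose thermal reading deviates most from its baseline.

    readings  -- current temperature per sensor, in degrees C
    baselines -- calibrated empty-room temperature per sensor
    Returns the index of the likeliest occupied seat, or None if the
    room looks empty.
    """
    deltas = [r - b for r, b in zip(readings, baselines)]
    best = max(range(SENSOR_COUNT), key=lambda i: deltas[i])
    return best if deltas[best] > threshold else None

# Example: seat 3 runs warm, so speech gets attributed there.
baselines = [21.0] * SENSOR_COUNT
readings = [21.1, 21.0, 21.2, 24.8, 21.1, 21.0, 21.3, 21.2]
print(attribute_speaker(readings, baselines))  # -> 3
```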
But the table's a lot more than just a note-taker. On a flat-screen TV a couple of feet away, the words appear nearly as fast as they're spoken. Each word is rendered in a varying shade: lighter, grayer words are deemed less relevant ("the," "a," and other articles), while key topics are solid black.
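The article doesn't say how the lab scores relevance, but the described effect — articles fade to gray, recurring topics go black — can be approximated with a stopword list and simple frequency weighting. A minimal sketch under those assumptions:

```python
# A sketch of the relevance shading described above, assuming a plain
# stopword list and frequency-based weighting; the real system's scoring
# is not documented in the article.

from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it"}

def shade_words(transcript):
    """Map each word to a gray level: 0.0 = solid black, 0.8 = light gray."""
    words = transcript.lower().split()
    counts = Counter(w for w in words if w not in STOPWORDS)
    top = max(counts.values(), default=1)
    shades = []
    for w in words:
        if w in STOPWORDS:
            shades.append((w, 0.8))  # articles and filler: light gray
        else:
            # The more often a topic word recurs, the darker it prints.
            shades.append((w, 0.8 * (1 - counts[w] / top)))
    return shades

for word, shade in shade_words("the table hears the pitch and the pitch lands"):
    print(f"{word}: {shade:.2f}")  # "pitch" comes out solid black (0.00)
```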
And if you touch one of those capacitive strips on the table, the system marks the 30 seconds before the tap and the 30 seconds after as a key moment, making it easy to tease out the important chunks of the transcript. The Listening Table is not just recording what's being said — it's recording why it's being said, and what's important about it.
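The tap-to-highlight rule is simple enough to sketch directly: any transcribed word whose timestamp falls within 30 seconds of a tap gets flagged. The names below are illustrative, assuming the transcript arrives as timestamped words and the strips report tap times.

```python
# A sketch of the tap-to-highlight behavior, assuming each transcribed
# word carries a timestamp in seconds; WINDOW matches the 30-seconds-
# each-way rule described above. Names are hypothetical.

WINDOW = 30.0  # seconds before and after a tap

def mark_key_moments(words, taps):
    """Return the words whose timestamps fall within 30 s of any tap.

    words -- list of (timestamp, text) tuples from the live transcript
    taps  -- list of tap timestamps from the capacitive strips
    """
    return [
        (t, text)
        for t, text in words
        if any(abs(t - tap) <= WINDOW for tap in taps)
    ]

words = [(0.0, "budget"), (42.0, "deadline"), (95.0, "lunch")]
print(mark_key_moments(words, taps=[50.0]))  # -> [(42.0, 'deadline')]
```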
http://makezine.com/2015/03/11/new-tech-times/
Wednesday, March 11, 2015