7 kinds of Media?

July 6, 2013 by Joe

I recently watched this video from AWE 2013 by Tomi Ahonen on why Augmented Reality is the 8th mass media. The part of his talk that didn’t quite sit right with me was his list of types of mass media. Here’s the list (including his new 8th type):

  1. Print (books, pamphlets, newspapers, magazines, etc.) from the late 15th century
  2. Recordings (gramophone records, magnetic tapes, cassettes, cartridges, CDs, DVDs) from the late 19th century
  3. Cinema from about 1900
  4. Radio from about 1910
  5. Television from about 1950
  6. Internet from about 1990
  7. Mobile phones from about 2000
  8. Augmented Reality from about now

I have two problems with this list. The first is that software is mostly absent. If I download an app from the app store or install a piece of software off a physical disk, isn’t that a kind of media? Those are arguably covered by “Internet” and “Mobile”, but the two are basically the same thing, and both certainly qualify as “mass”.

My second problem is that the list seems to be an odd mix of content types and distribution mechanisms. Print gets one entry despite having a zillion forms. Audio and video each get two entries even though both are distributed in significant ways over the Internet. And Augmented Reality isn’t really either one; it’s sort of a kind of software that might not even be distributed.

I would use a different list:

  1. Print – The written word, including digital words like the ones you are reading right now. This includes still photography and all the flat kinds of art. – from the late 15th century
  2. Audio – Spoken words, music, and other kinds of sound regardless of distribution mechanism. – From about 1900
  3. Video – Moving pictures regardless of the distribution mechanism. – From about 1900
  4. Software – Pretty much anything where you are interacting with an automated system. This became a mass media with the personal computer revolution. – From about 1980

If you need to talk about how this media is transferred you could build a related list of distribution mechanisms:

  1. Physical – Somebody drops a hunk of dead tree on your doorstep or you buy a movie on physical media at a store and carry it home. – Since forever
  2. Radio – An analog or digital signal using radio waves. – Since about 1900
  3. Land Line – An analog or digital signal travelling down a wire or hunk of optical fiber. – Since the late 1800s
  4. Internet – This actually happens on top of land lines and radio, but it abstracts away all of that so well that it probably deserves to be its own distribution mechanism. – Since about 1995 (as a mass thing)
  5. Undistributed – Live performances or one-of-a-kind artifacts. The consumer has to physically go somewhere to experience things that are transferred this way. – Since forever

And if you care about the style of distribution there’s a third list:

  1. Broadcast – One producer, many consumers. The printing press started this, arguably. – Since the late 15th century
  2. Peer to Peer – Many producers, many consumers. For print this would include letter writing. – Since the invention of written language
  3. Many to One – Many producers, one consumer. This is used for things like the census, tax returns, and polls. – Since the invention of governments

Maybe it’s just my engineer-brain talking, but this seems like a much clearer way to express the various types of mass media. The thing is, I’m not sure it’s actually any more useful than that first list. The point of the first list seemed to be to make Internet companies feel good about themselves; later it was expanded to flatter Mobile companies, and now it is being expanded to Augmented Reality companies. Did anybody ever look at that list and gain any kind of insight?

What do you think? Is this sort of breakdown of media types useful?

Posted in Augmented Reality, History, Mobile | 2 Comments »

Secret Innovation

June 1, 2013 by Joe

One of the staples of near-future science fiction is organizations working in absolute secrecy to produce big game-changing innovation. I’ve been trying to come up with examples of this in the real world, but haven’t found any.

A few examples from science fiction: (These are kind of spoilers, but not very good ones.)

  • In Daniel Suarez’s Daemon a character named Matthew Sobol invents a new world order in the form of a not-quite-intelligent internet bot. Even the contractors working with Sobol don’t really know what they’re working on until Sobol dies right before the book begins, unleashing the Daemon.
  • In Ernest Cline’s Ready Player One James Halliday and his company go completely dark for several years and emerge with OASIS, a combination of hardware and software that created a globe-spanning, latency-free shared virtual world for the people of Earth to inhabit.
  • In Kill Decision, also by Daniel Suarez, a shadowy government contractor builds autonomous drones and deploys them for months before anyone realizes what’s going on.
  • The society-ending technology from Directive 51 by John Barnes is a prime example. The book uses nanotech, extreme mental subversion, and idealist/zealot manipulation to hatch a worldwide society-ending event. (via Jake)
  • In Nexus by Ramez Naam somebody engineers nanobots that link people’s emotions. Then other people develop software that runs on Nexus and gives them super-human abilities. All of this happens in secret.

(There are certainly more examples than these. Mention them in the comments and I’ll add them to the list.)

In reality it doesn’t work like this, and there are two reasons for that.  The first is that the secrecy is far from perfect and the world gets a gradually better picture of whatever the thing is as it approaches completion. The second is that in the real world every successful innovation is an improvement on a usually-less-successful innovation that came before it.

The most likely counter-example people will bring up is the iPhone. Didn’t it spring fully-formed from the head of Steve Jobs on January 9, 2007?  Well no. Not even remotely. Apple’s own Newton, the Palm Pilot, the HP 200LX, the BlackBerry, and many more were clear forebears of the iPhone. Apple certainly drove the design of that sort of device further than anyone else had. They definitely improved it to the point that millions of people bought them as quickly as they could be manufactured. But they didn’t spring an entirely new kind of device on the world in a surprise announcement. The iPhone was basically a Palm Treo with the suck squeezed out.

Unfortunately, as the iPhone demonstrates, “big game-changing innovation” is not very easy to define. Let’s go with:

A new product or service that is so advanced that society doesn’t have a cultural niche in which to put it.

The Daemon, OASIS, and the swarms of killer drones from Kill Decision certainly fit this definition.  Are there any examples of products or services from the real world that do? If you can think of one, please leave a comment below and tell us about it.

Posted in CrazyFutureTech, Future of Computing, History | 11 Comments »

3D Printing and The Humble Toaster

January 12, 2013 by Joe

I have been thinking quite a bit about 3D printing lately. Maybe that’s because I’m now surrounded by 3D printed prototypes at work. Maybe it’s because the news in the tech world is full of stories about products and services related to 3D printing. All of this has led me to two conclusions:

  1. In the short to medium term 3D printing will be something that companies use to provide customized goods to consumers, not something that consumers use directly.
  2. 3D printing won’t take off for in-home manufacturing until a printer can build something like a toaster.

A 3D printer is an incredibly powerful tool for prototyping. In the hardware lab at the office we have a couple of Dimension printers, a laser cutter, a PCB mill, a vinyl cutter, and a fairly complete set of power and hand tools. With those tools, a few McMaster-Carr and Digikey orders, and enough assembly time we can build a prototype of just about anything. A dizzying array of goods has come out of that lab, letting us try things in a few days to a week that would have taken a few months to a year if we had attempted the same thing ten years ago.

The problem is that the output of the printer requires additional off-the-shelf parts and significant assembly time to turn an ABS structure into something functional. This is where I think the toaster is a useful device to talk about.

The pop-up toaster is arguably the simplest appliance in your kitchen. And yet it is also filled with components that are beyond the reach of 3D printing today.  (You can learn more than you ever wanted to know about how a toaster works here.) Here are several challenges that toasters present for 3D printers:

  1. The whole device is heat-resistant. Toasters heat to about 300 degrees Fahrenheit, but ABS plastic starts to soften at its glass transition temperature of only about 221 degrees Fahrenheit. Clearly printing the toaster body out of ABS isn’t going to work.
  2. The power cord of a toaster contains both conductive elements and insulating elements. There are also insulating elements scattered around inside the toaster. Extrusion printers can handle the insulating elements, but not the heat resistance. Binding printers (sintering or EBM) can handle the conductive elements, but only print one material at a time so they can’t combine the conductive elements with the insulating elements in one object.
  3. “A toaster’s heating element usually consists of Nichrome ribbon wound on mica strips.” Nichrome is actually used in extrusion printers to heat the plastic. I can’t find any reference to either it or Mica being printable, and that would definitely require multiple materials in a binding printer.

None of these problems are insurmountable. Several 3D printers can already print in multiple materials. They just all happen to be different kinds of extruded plastic. Eventually those printers will figure out how to include metal in their parts. Heat resistance is probably a bigger challenge given how the printers work, but in theory they could use materials with even higher melting points so the resulting products could handle 300 degrees without a problem. And the list of materials that can be printed is growing every year. Eventually these problems will be solved.

That brings me back to the first conclusion, however, because I don’t think those solutions are going to come from the low-end printers you can afford to put in your house. Figuring out how to print conductive and insulating materials in the same batch is a hard problem. Heat resistance is a hard problem (for extrusion printers). Printing new and exotic materials is a hard problem. These are the sorts of things that researchers are going to chew on for a while; then expensive first-generation commercial implementations will need to be built.

As MakerBot eats away at the low end businesses of Objet and Dimension these bigger companies will move further up-market and add some of these higher-end features. Eventually MakerBot and its ilk will get those features too, but that is going to take a long time, probably ten or more years. In the meantime, the only printers capable of printing “advanced” devices like a toaster will be those that cost tens or hundreds of thousands of dollars. Price alone will keep these printers out of the hands (and garages) of individuals for a long time.

The only people who will be able to afford the next generation of toaster-capable printers are companies that use that capital investment as part of their business. That includes hardware labs that use these as tools for prototyping, but also mass-customization companies that build iPhone covers or print MineCraft levels today. These companies can charge a premium for their products because of the customization so they can amortize the cost of the printers (and learning how to use them) over thousands of customers. They will also be the primary market for companies like Dimension and Objet, so those printer providers will have no reason to drop their prices to a level normal people can afford.

One last random thought before I end this meandering post:  The printers that we’re running in twenty years will bear about as much resemblance to today’s 3D printers as my Nexus 4 bears to my first computer (a TI-99/4A). They will be a combination of binding and extrusion, or something we can’t even imagine now. They will include many elements of a pick-and-place machine so they can embed complex semiconductors in the resulting devices. And, once their utility reaches a certain point, their prices will be in free fall. The future of 3D printing is bright. I just don’t think it’s going to arrive overnight.

Posted in CrazyFutureTech | 5 Comments »

How did I do with my 2011 predictions?

January 1, 2012 by Joe

A year ago I posted a list of predictions for 2011. Let’s see how I did:

  1. Wrong. Netflix did pick up more content. They are doing some interesting things with Arrested Development, for instance. However, I don’t think people will look back on all the pricing changes, Qwikster, and the general user annoyance as a year when Netflix “kicked ass”.
  2. Correct. 50 Mbit FiOS! Woot! We moved from Seattle to Kirkland into a house where the previous owners bullied Verizon into running the fiber. It’s completely awesome.
  3. Sort-of Correct. It’s not 60%, but Android is up to 43% market share. “Dizzying array” is the only way to describe the number of Android devices out there, though that’s not entirely a positive thing.
  4. Correct. Google and some of their partners announced a plan to improve the update situation. Hopefully that will work out.
  5. Correct. What new glasses? The Vuzix Star 1200s look cool, but $5k is a bit out of the consumer price range.
  6. Correct. This one was sort of a gimme since there’s no way to tell if this is really the start of anything. I’ve seen a bunch of Nissan Leafs around my neighborhood though.
  7. Wrong. I certainly haven’t heard of any such advance. Did I miss one?

4.5 out of 7 isn’t so bad. Some of my predictions were pretty softball though, so I kind of cheated.

I think I’m going to skip writing my own 2012 predictions post this year. I would love to hear what you think is going to happen this year. Post them in the comments! (I’ll probably pick the best of them and put up a new post collecting them.)

Edit: @JZig points out that math is hard. 7 – 2.5 = 4.5

Posted in Augmented Reality, Future of Computing, Mobile | 2 Comments »

Figuring out a common time base

June 26, 2011 by Joe

Using the locator beacon network from the simulator for an indoor navigation system that follows my own requirements means that a receiver must be able to use the network passively: it must be able to determine its distance from multiple nodes without actually sending anything to those nodes. This is accomplished in the GPS network by synchronizing all the satellites on a very precise clock. Once the transmitter and the receiver have a clock in common, the receiver can easily compute its distance to the transmitter:

  1. Transmitter sends a signal at time X saying “It’s time X!”
  2. Receiver receives that signal at time Y
  3. Distance = ( Y – X ) × speed_of_light
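As a concrete sketch of the shared-clock calculation above, here is a few lines of Python. The function name and sample times are my own illustration; the key point is that distance is the time of flight multiplied by the speed of light:

```python
# One-way ranging with a perfectly synchronized clock, per the steps above.
# Times are in seconds; distance comes out in meters.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def one_way_distance(sent_at: float, received_at: float) -> float:
    """Distance implied by time of flight: (Y - X) * speed_of_light."""
    return (received_at - sent_at) * SPEED_OF_LIGHT

# A signal that arrives 100 nanoseconds after it was sent traveled ~30 m.
print(one_way_distance(0.0, 100e-9))  # ≈ 29.98 meters
```

Note how unforgiving the units are: at the speed of light, a clock error of one microsecond is a position error of about 300 meters, which is why the clock synchronization matters so much.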

So how do you build a common time base on an ad-hoc network of locator nodes run by a random assortment of people? The short answer is that you don’t. Assuming a few things about the hardware involved, you can get by with an ad-hoc time base that still lets you compute distance without requiring any kind of central authority.

This assumes a few things about beacons and receivers in the network:

  • Each node (beacon or receiver) has a fixed (and known) amount of receiver lag. This is the time between when a signal hits the antenna and when it’s pushed to whatever internal system can tag it with the internal clock of the node. This is Recv(N) for node N.
  • Each node has a fixed (and known) amount of transmitter lag. This is the time between when a message is timestamped and sent, and when it actually leaves the transmitter’s antenna. This is Send(N) for node N.
  • None of the nodes are lying.

With these assumptions, the system is relatively straightforward. Any node can compute the translation from its own time base Time(N, t) to some other node’s time base Time(M, t) by pinging that node:

  1. Node N generates a random number ping_key
  2. Node N broadcasts a message containing ping_key and records ping_time = Time(N, transmission).
  3. Node M receives that broadcast message and records receipt_time = Time(M, receipt).
  4. Node M sends out a broadcast of its own with: NodeID(M), ping_key, Send(M), Recv(M), receipt_time, Time(M, reply)
  5. Node N receives the response and notices that ping_key matches its own ping_key.
  6. Node N computes its relative time base with M as follows:
    • Total_time = Time(N, reply_receipt) – Time(N, transmission) – ( Time(M, reply) – receipt_time )
    • distance_time = (Total_time – Send(N) – Recv(N) – Send(M) – Recv(M) )/2
    • distance = distance_time × speed_of_light
    • time_base_difference = Time(M, t) – Time(N, t) = Time(M, receipt) – Time(N, transmission) – (distance_time + Send(N) + Recv(M))
    • time_base_difference2 = (Time(M, reply) + distance_time + Send(M) + Recv(N) ) – Time(N, reply_receipt)
  7. Node N stores NodeID(M), time_base_difference, distance in its table of nearby beacons. (time_base_difference and time_base_difference2 should be equal assuming that all the lag numbers are right and the distance hasn’t changed. If they drift apart something is wrong.)
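The arithmetic in step 6 can be written out in a few lines of Python. This is a hedged sketch, not working radio code: the function name and the sample timestamps are made up for illustration, and it assumes the lag constants are exact.

```python
# Sketch of step 6: recover distance and clock offset from one ping
# exchange. Variable names mirror the write-up; all times are in seconds.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def ping_results(ping_time, reply_receipt,    # timestamps on node N's clock
                 receipt_time, reply_time,    # timestamps on node M's clock
                 send_n, recv_n, send_m, recv_m):  # fixed hardware lags
    # Round-trip time minus the time M spent composing its reply.
    total_time = (reply_receipt - ping_time) - (reply_time - receipt_time)
    # Strip the four fixed lags, then halve for a one-way figure.
    distance_time = (total_time - send_n - recv_n - send_m - recv_m) / 2
    distance = distance_time * SPEED_OF_LIGHT
    # Offset of M's clock relative to N's, from the outbound leg alone.
    time_base_difference = (receipt_time - ping_time
                            - (distance_time + send_n + recv_m))
    return distance, time_base_difference

# Made-up, self-consistent numbers: a 1 microsecond flight time,
# a clock offset of +0.5 s between M and N, and lags of 10-13 ns.
d, offset = ping_results(0.0, 102.046e-6,
                         0.5 + 1.023e-6, 0.5 + 101.023e-6,
                         10e-9, 12e-9, 11e-9, 13e-9)
print(d, offset)  # ≈ 299.79 meters, ≈ 0.5 seconds
```

Computing the same offset from the return leg (time_base_difference2 in the write-up) should agree with this number; drift between the two is exactly the sanity check in step 7.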

This mechanism enables each beacon to keep a constantly updated time base for every node in range via the same packets it is using to determine distance to those beacons. Beacons can then send out everything they know:

  • Their own location (estimated from GPS and refined via the algorithm in the simulator)
  • Time(beacon, broadcast)
  • For each beacon N in range:
    • Node ID of N
    • Time(N, broadcast)

Here comes the unfortunate part: In order to figure out its own time base relative to a beacon each receiver must ping at least one beacon. Once it has that beacon’s time base it can figure out every other beacon in range of that beacon and work out from there. Because clocks on nodes will drift apart over time this ping will need to be repeated every X minutes. Because the last pinged beacon will eventually be out of range, it should also be repeated whenever the receiver moves more than Y distance. This ping need not contain any identifying information about the receiver, so it shouldn’t have privacy implications, but it will reduce the scalability of the system from receiver_count = infinite to receiver_count = bandwidth / (ping_size * ping_frequency). As long as the pings are relatively small and infrequent that should not be an issue. Multiple receivers attached to the same clock (i.e. multiple receivers carried by the same person) could also share time base information and would not need separate pings.
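To get a feel for the receiver_count formula above, here is a quick back-of-the-envelope calculation. The bandwidth and ping numbers are invented for illustration, not measurements of any real radio:

```python
# Capacity estimate: receiver_count = bandwidth / (ping_size * ping_frequency).
# Assumed numbers: a 1 Mbit/s shared channel for pings, 1000-bit pings,
# and each receiver re-pinging once per minute to correct clock drift.

bandwidth = 1_000_000    # bits per second available for pings
ping_size = 1_000        # bits per ping
ping_frequency = 1 / 60  # pings per second, per receiver

receiver_count = bandwidth / (ping_size * ping_frequency)
print(round(receiver_count))  # 60000 receivers before the channel saturates
```

Even with these modest made-up numbers the ceiling is tens of thousands of receivers per channel, which suggests the ping overhead is a real but manageable cost.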

What do you think? Will it work? Are predictable transmit and receive lag even realistic?

Posted in Augmented Reality, Mobile | No Comments »

