The RoBlog
Thursday, December 23, 2004
MISWEB: Message from the future
I just finished reading an article on the MIS Magazine web site (via a posting on Roland Piquepaille's Technology Trends) that offers some thoughts on the future, from 5 to 25 years out. These always seem to get my juices flowing, and this time was no exception. Here are some of my thoughts on the article.
As I believe I've mentioned before, I think it's perhaps not a good idea to depend on the people building a technology (especially the scientists and engineers solving its complex problems) to determine how that technology will be used in the future. This is especially true when we're talking about infrastructure technologies like nanotechnology. These people are often too caught up in solving the challenges of the technology to spend much time thinking about how it will really be used. They are also, by the very nature of working on the problem, too biased towards its importance (and often, a particular kind of importance) to be reliable. This often manifests itself in company PR statements claiming that a technology under development will revolutionize the way we do something, where that something is either too narrow for a revolution to be interesting, or too broad for the technology to apply. So when I see statements like "Scientists portray a future in which...", assuming those scientists are the ones developing the technology, I tend to take it with a grain of salt.
The author of the article, Helene Zampetakis, uses the term "disruptive technologies" a couple of times in a way that I think is just a bit off. Typically we think of disruptive technologies as ones that cause a paradigm shift, making existing companies scramble or go out of business. While nanotech may well do this in some industries, saying that it will "disrupt the entire semiconductor fabrication industry" is an overstatement. Since the existing major manufacturers of semiconductor electronics are likely to be the ones creating nanotech electronics in the future, the disruption will really happen at the tool-vendor level. The big guys will remain big and in charge, and most of us will probably notice nothing but the continued expression of Moore's Law. "Revolutionize" may be a better word, as few existing semiconductor companies are likely to have to scramble much more than they already are (and have been for quite a number of years).
In the section on quantum information processing (QIP), Bob Hayward, vice-president and research fellow at Gartner, is quoted as saying "It will give us an order-of-magnitude jump from the fastest computer of that time - it will not be an evolutionary increase." While he no doubt has access to much better information than I do, I suspect that an order-of-magnitude increase over the fastest computers of the time (whenever QIP comes online) will, in fact, be an evolutionary increase. I don't think anyone is saying that information processing ability follows a merely linear curve; to hear many tell it, we should expect something significantly greater than that. I expect that QIP will be used tentatively in the market at first while the bugs get flushed out, and by the time it has any real impact on the lives of regular people, we'll be able to point to some curve that more-or-less predicted this kind of jump; one showing, perhaps, that a step-wise change was inevitable.
Zampetakis goes on to quote Hayward saying "As for security, whereas it takes a month for a supercomputer to crunch a DES encrypted code today, it would take just a couple seconds [using QIP]. But maybe we'll also get a corresponding advance in security algorithms." Given the recent work done in quantum encryption (the subject of this month's Scientific American, coincidentally), it would seem that security (in the form of encryption, at any rate) will still be strong enough to withstand the computing power of the foreseeable future. More interesting would have been a note about how QIP will allow information on you from disparate sources to be correlated in real time, and how the vast sensor network will be able to track where you go and what you do (though this may not be as bad as it seems).
An interesting distinction that often gets overlooked when making predictions about the future is the one between when it will be possible to do something, and when doing that thing will be widespread enough to have any impact on our lives. For example, in the section on advanced materials, Zampetakis says: "Over the next five years, the development of nanoscaled sensors will allow intelligence to be built into many materials for multiple applications." This may be true, but (and this is highly dependent on the application) it is likely to be twice that long before products incorporating this ability are directly interacting with or impacting regular people.
Dr. Peter Corke, autonomous systems team leader at CSIRO (I don't know what "CSIRO" is, and when I went back through the article looking for it, I couldn't find it, and I'm too lazy today to research it), is quoted as saying "Robots will be fairly prevalent 10 years from now. We'll see them in offices and hospitals and shopping centres." I'm still doubtful about the regular integration of robots into daily life, but I do feel that they are probably coming SOMETIME soon, so 10 years is likely not that bad an estimate. It's probably more accurate than the other timeframes in this article.
Using robots to perform tasks like mail delivery and store re-stocking seems fairly unlikely to me in the near-ish term. Typically a technology like robotics encroaches into our daily life by taking over the excessively dangerous, costly, error-prone, or tedious jobs from humans. Neither shelf re-stocking nor mail delivery seems to fall into any of these categories with enough severity to warrant robots taking those jobs from humans. Further, both are complicated enough in terms of object negotiation and barrier navigation that the technology will have to be pretty sophisticated (read: expensive) to do any good. More likely is the increased use of robots in fire fighting, policing, mining (as mentioned in the article), and the like. No doubt there will be robots capable of doing most or all of the store re-stocking/mail delivery tasks in 10 or so years, but my belief is that they won't be deployed for those uses by that time. By the way, warehouse inventory re-stocking is a whole other matter than shelf re-stocking, and I expect we'll see widespread deployment of automation in that area (I'm guessing this already exists in non-trivial dollar amounts today).
I've mentioned NASA's efforts to help bring about a flying car in the past (although I can't for the life of me find the post...hmm...). Flying cars are one of those things we love to imagine but would, I suspect, hate in practice. Even if we could conquer the noise and the risk, we'd still be faced with an energy issue: it takes a heck of a lot of energy to keep an object airborne. By about the time we could have flying cars, demand for them will probably be low, as virtual presence technologies increasingly mitigate the need for long-range travel (the most likely niche for flying cars), and automated ground travel (the self-driving car) and mobile connectivity mean that the time we spend in cars can be productive and entertaining for all.
The problem with allowing people to be further distanced from their places of work (the non-social problem, that is) is that if people migrate en masse to wherever they like, congestion will become a critical issue wherever all of these commuters come together to land (given the urban airport scenario cited by the article), and then people will have to get from those urban airports to their places of work anyway. I have to admit that I'd LOVE a flying car, but I'm doubtful it will become practical enough to be anything but a toy for the rich. I suppose if it were fully automated, you might rent one for a family vacation, but the economics and convenience would have to work out pretty well even for that.
I've noticed that this article uses "the next generation of" at least twice, implying that things like 3D holography are just around the corner. Most of these technologies are still several generations away from public consumption.
While we wait for 3D holograms to allow a form of telepresence that lets us walk around remote rooms (would we be walking around in specialized recording rooms in this scenario, or would a computer adjust where we appear to be walking so that it looks natural in the display environment, so that when, for example, I walk around the desk in my office I don't appear to walk through a table in the remote location?), technology that allows us to meet in virtual space will likely become more commonplace, allowing us to interact more naturally in a wholly constructed environment than holography will allow for quite some time.
Five areas of business change are identified (rather awkwardly) at the end of the article, and I agree with every one of them. What we'll see, in my opinion, is the transformation of corporate IT from something that looks like the fleet department into something that looks more like the human resources department. In fact, I'd go so far as to say that we'll be seeing combined "Human and Information Resources" departments in the next decade or so, as most of the "working" portion of IT becomes outsourced and the IT director becomes someone who manages contracts and regulations rather than software and computing infrastructures.
Something interesting occurred to me while reading this article. I get hung up thinking about the apparent paradox between things that seem unlikely today but happen anyway, and things that seem likely but don't. What occurred to me is that there is often a single breakthrough required for a technology to become mainstream, and that breakthrough is often not predictable. For example, holography seems like it may yet take a while to arrive even though it's been around for a while. At some point, some genius will re-envision some core component, and suddenly we'll all have 3D TVs. Perhaps a technology needs to mature enough (which, by the way, takes its own geniuses) before a genius can study the system and make the requisite discovery. Food for thought.
PhysOrg: Just in time for New Year's: A proposal for a better calendar
Professor Richard Conn Henry of Johns Hopkins University proposes a new calendar that retains 12 months and seven-day weeks, but that replaces one-day leap days with seven-day "Newton weeks" that occur every six years or so.
Henry's calendar has the benefit of keeping events, like holidays, on the same day of the week every year.
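If I understand the proposal correctly, the year is fixed at 364 days (exactly 52 weeks), which is where both features come from. A quick back-of-envelope check (my own arithmetic, not from the article):

    # Back-of-envelope check on the proposal's numbers (my own arithmetic,
    # not from the PhysOrg article).
    TROPICAL_YEAR = 365.2422   # mean solar year, in days
    CALENDAR_YEAR = 364        # a fixed-length year of exactly 52 weeks

    print(CALENDAR_YEAR % 7)   # 0 -> every date keeps the same weekday forever
    drift = TROPICAL_YEAR - CALENDAR_YEAR
    print(7 / drift)           # ~5.6 -> a seven-day catch-up week every 5-6 years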
At the same time (so to speak), Henry advocates everyone moving to "Universal Time", presumably in place of the kind of relative time we now experience between time zones. That way, if I'm in London and you are in San Francisco, I can ask if you can attend a meeting at 4pm, and that time will mean the same thing to both of us (afternoon in London, early morning in San Francisco) without requiring any translation.
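To see what this eliminates, here's a minimal sketch of the time-zone juggling we do today (the meeting date is made up; this just uses Python's standard library):

    from datetime import datetime, timezone, timedelta

    # A hypothetical meeting scheduled at 16:00 Universal Time.
    meeting = datetime(2004, 12, 30, 16, 0, tzinfo=timezone.utc)

    # Under zoned time, each party converts to their local clock:
    print(meeting.astimezone(timezone(timedelta(hours=0))))   # London: 16:00
    print(meeting.astimezone(timezone(timedelta(hours=-8))))  # San Francisco: 08:00

    # Under Henry's proposal, both parties would simply read "16:00";
    # only their sense of whether that's afternoon or morning would differ.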
While all of this is interesting, it strikes me as one of those things that will encounter huge amounts of social resistance. People's sense of time is much more personal than their sense of measurement, whose metric conversion we are still awaiting here in the US (outside the sciences).
Perhaps equally important is the large amount of software that incorporates time, either for historical or predictive calculations. Changing it would be an effort akin to the Year 2000 bug, but without impending doom as an incentive to handle it correctly.
Henry is trying to get the world's calendar changed over by 2006, when the current Gregorian calendar and Henry's proposed calendar sync up (I noticed that the PhysOrg article doesn't mention any other calendars in current use - Chinese, for example).
I expect this will get a little bit of press but ultimately we'll chug along with our inefficient timekeeping systems.
PhysOrg: Just in time for New Year's: A proposal for a better calendar
Wednesday, December 22, 2004
Mappr! Where It's At.
Just a quick update to my previous mention of Mappr.com. The link below is to the beta version of the tool. Something's lacking in the user experience (I can't quite put my finger on it), but it's cool nonetheless.
Mappr! Where It's At.
Tuesday, December 21, 2004
IFTF's Future Now: Flickr and "folksonomies"
Just a bookmark of sorts: I left a comment on IFTF's Future Now blog relating to Flickr and what a fellow named Thomas Vander Wal calls "folksonomies".
IFTF's Future Now: Flickr and "folksonomies"
Sunday, December 19, 2004
Self-Documenting Life (Transcription)
I have an iPod
Last month I bought an iPod, but I didn't buy it for the usual reasons.
Yes, I am an avid music listener, and the ability to carry around my favorite tunes was definitely a plus. As was the ability to use the iPod as a hard drive so that I could tote around files that I need both at the office and at home.
But I didn't buy it for these reasons. I bought it to record what I say. All the time. And have it transcribed into text.
I've found that in describing the purpose of this project, people are either intuitively in favor of it, or don't understand it at all.
For those immediately interested, we talk animatedly about how interesting it is to do this kind of thing, and when I explain the things I think would be interesting outcomes of such a project, they are often completing my sentences for me.
For those for whom the project holds but perplexity, no amount of explanation convinces them otherwise, and, indeed, I'm often at a loss to explain why it seems so interesting.
If you fall into the former category of people, below is a bit more detail on what I'm actually doing, and then some idea of where it could go from here. For those in the latter, thanks for dropping by, but I'm not sure it'll get any more interesting from here.
The Setup
The basic idea of this project is to record only my part of any conversation, on something with enough storage to hold a full day or more of conversation, so that I wouldn't have to juggle media just when things were getting interesting.
After some digging, I found that the third- and fourth-generation iPods have the ability to record audio at a quality suitable for voice (and not much more, no doubt due to piracy concerns, but perhaps also owing to the processing power internal to the iPod).
To enable this, you need to buy a third-party product that lets you plug a microphone into the iPod. There are two products that I'm aware of that support this: Griffin's iTalk and Belkin's Universal Microphone Adapter.
I was initially intrigued by Griffin's product, as it has a built-in microphone/speaker that lets you record ambient audio without an external microphone and play back through the speaker so that others can hear without needing headphones (Griffin has a similar product, the Voice Recorder, but it lacks the ability to plug in an external microphone).
As I don't have a large gadget budget, I decided it would be best to try to borrow an iPod rather than buy one, as I wasn't really interested in having a portable music device. This turned out to be harder than I had thought, given the amount of press the iPod has gotten. I found a few people with iPods, but mostly older ones that don't support recording audio. The compatible ones I did find were formatted for Macs, and apparently you have to reformat them (thus wiping whatever was on them) to use them on a PC (which seems darned inconvenient). I finally found someone willing to part with his Mac iPod for a week and let me reformat the drive, but at the last minute his broke.
Since, after making the arrangements to borrow an iPod, I had already ordered Griffin's iTalk, and since no other iPod appeared to be forthcoming, I sucked it up and purchased one new (the 20GB version) from Best Buy.
I had some trouble buying it, as Best Buy keeps them behind the counter and I couldn't get a salesperson to help me get at them. I also ended up in an argument with a gal at the register about why I didn't want the extended warranty (I had just said that I wasn't interested, and she pushed to ask why, and then tried to counter everything I said). This seemed like a bad idea because: 1) I don't like to be browbeaten (and I'm guessing most other customers don't either); 2) her arguments were based on anecdotal evidence that I was supposed to trust despite the large amount of research on the subject to the contrary; and 3) part of her argument was that she saw a lot of iPods come back for repair, which doesn't exactly make Apple look good and, if I were a less technically inclined customer, would have made me think twice about spending my hard-earned dollars on something that seems to break a lot.
Anyway, the iTalk arrived in short order and I plugged it in immediately. The audio it captured from its built-in microphone was fine, but in uncontrolled situations I didn't expect it would work well enough to be transcribed. The speaker was also fine (despite much I had read about it being underpowered, but then I had low expectations, and no real need of it for this project).
The iTalk has a single jack on it that can be used for either a microphone, or for headphones, but not both simultaneously (again, this didn't matter much for me for this particular project; though for future projects of this type it would have).
I plugged in one of several PC mics I have lying around and started talking. The result? Nothing. It would say it was recording, but on playback, nothing but silence. I tried a couple of other mics with the same result. Strangely, if I plugged in an earphone and hit record, it would pick up my speech (though very poorly), but no microphone would work.
I went around and around with Griffin's technical support via email (their live support hours being inconveniently short each day, and the fact that Thanksgiving occurred in the middle of this not helping either), where they claimed I was using the wrong kind of mic (which sent me on a two-day wild goose chase), before I finally sent it back and bought Belkin's product at a nearby Mac store (the only place in town I could find that carried it).
The Universal Microphone Adapter worked immediately and well, and I have no complaints about it. It was nice to be able to plug in both the microphone and earphone portions of one of the dictation headsets I have around, at the very least because it means I don't have a wire dangling around, and also because my plan was to get a stereo dictation headset (like this one from Koss) that would let me go back and forth between listening to music and recording conversations without having to plug and unplug things.
I have owned both Dragon NaturallySpeaking and IBM's ViaVoice (both now owned or licensed by ScanSoft), but I couldn't find the install CD for one, and my version of the other didn't support file transcription, so I picked up IBM's ViaVoice 10 Advanced Edition, predominantly on its merits of being about $100 cheaper than the equivalent Dragon product.
I recorded one of the training readings you have to do to get speech-to-text software accustomed to how you talk. I tried to read it as much as I could the way I might talk to someone else, rather than the way I might read something aloud, as that is how I expected most of my recordings would sound.
Getting the audio file to the PC with the transcription software involved some minor annoyances, since the iPod can only be synced to one iTunes installation at a time; I had to get the file from the syncing machine (recordings copy down automatically when the iPod syncs) and then copy it to another machine for transcribing.
ViaVoice complained about the low bitrate of my file, but dutifully accepted it anyway. I recorded two more training files to try and get its accuracy up. It did just fine with one, and appeared to do fine with the other until it reached the end and then decided all of the lines it said it had accepted were faulty.
That evening I recorded my first regular conversation and got about three hours worth of audio of just my side of the conversation.
I had expected the transcription would be off more than normal, since I wasn't in ideal conditions, wasn't speaking particularly clearly, and wasn't dictating punctuation or line breaks. I had guessed I'd see accuracy in the 70%+ range.
Nope. The first transcription was about 30-40% accurate. In fairness, when I'm having an animated conversation, the way I talk certainly isn't easy for software to transcribe. Also, ViaVoice steadfastly attempted to transcribe everything that was audible, so if I stammered, coughed, or corrected myself mid-word, it would try to assign a word to the sounds.
The nice thing is that when you're transcribing an audio file, you can watch the words pour out on the screen, which is thrilling in its own way, and it's faster than the original conversation: my three-hour recording took only about 20 minutes to transcribe.
Here's a snippet of the conversation transcription for your amusement:
for the most hard those things don't necessarily add a whole lot of burden to those folks read before was the French ban the French ban I read Dryden freight your Ios ago from a year ago i.e. as the latter but that it is difficult to get a never-ending but it's not just eating
Since the software pays no heed to conversational breaks, and since I wasn't dictating punctuation, it quickly becomes difficult to follow the conversation, as disparate ideas (based, for example, on non sequiturs from other participants in the conversation) get jammed together into an apparent train of thought. Further complicating matters is ViaVoice's attempt to bring in context to help figure out what the words are. It turns out (not surprisingly) that the types of context you have while doing direct dictation are rather different from the types of context you have while conversing. This led it to choose the wrong word based on context even when it had apparently matched the individual word correctly based on the speech itself.
So I've been doing some training by making corrections to the text and introducing new words to the software's vocabulary (like "y'know"). This does appear to be improving the transcription accuracy, but at a maddeningly slow pace, made more frustrating by the fact that the application crashes or loses its place from time to time, including the only time I've seen it say it was ready to update my voice model.
In any case, tests have only been conducted indoors in fairly well-controlled environments, and will probably continue that way until I can get an accuracy rate high enough to actually follow the conversation.
One final note on the setup: I was jonesing for a Jawbone headset, as I think the technology they are using for filtering out background noise is pretty interesting (they sense the vibration in your jaw to determine when you are talking). Alas, they don't yet have a version that plugs into any old audio jack (only special phone jacks). I've sent them an email asking when they might have a more general product, but haven't heard back (and, frankly, I don't expect to). This kind of technology will be absolutely critical for my project to work in the majority of live situations.
Where this takes us
So, why bother doing this at all?
I think we're on the cusp of some very interesting capabilities that will be brought about by having portable computing with relatively fast processing, large storage repositories, access to fast broad-area networking, and intuitive near-area networking. Here's where the iPod experiment fits in.
Probably the most obvious use is indicated by the title of this entry. If you can record everything you say, you have, in no trivial sense, provided some part of your story for others to see, either now or in the future. I understand that this sentiment is probably shared by only a minority of people, but I would like my descendants to have some view of who I was and how I went about being me. It is a stab at a certain kind of immortality, I suppose, allowing a portion of my being to exist beyond my lifetime. Some people do it with written or photo journals. I'm far too lazy for that, so technology can lend a hand.
Of perhaps broader interest is the ability to Tivo your life. For example, if the iPod were able to record and play at the same time, and it always knew when you were talking, you could skip backwards some amount of time and review something you had said, potentially putting an end to arguments that go something like this: "Well, you said I could go bar hopping with the boys." "I most certainly did not." "Remember, last week when I mentioned it?" "There's the couch, my friend, dream up another one." If you were going to build this kind of functionality, however, you might want to record more than just yourself, but recording yourself is a good first step.
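The "skip backwards" trick is basically a rolling buffer that discards anything older than some window. A toy sketch (the class name and sizes are my own invention):

    from collections import deque

    class RollingRecorder:
        """Keep only the most recent `window` seconds of audio chunks."""
        def __init__(self, window=600):
            # one chunk per second; the oldest chunk falls off automatically
            self.chunks = deque(maxlen=window)

        def record(self, chunk):
            self.chunks.append(chunk)

        def skip_back(self, seconds):
            # replay the last `seconds` worth of audio
            return list(self.chunks)[-seconds:]

    rec = RollingRecorder(window=5)
    for i in range(8):
        rec.record(f"second-{i}")
    print(rec.skip_back(3))   # ['second-5', 'second-6', 'second-7']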
I have a certain fascination with building knowledge structures that expose the right ideas to the right people, who can take an idea and build upon it (I'm starting to believe that humans' primary purpose is to create and maintain information, and not even on that abstract a level, but that's fodder for another entry). The Internet is an excellent example of how having a large group of people's information on pretty much everything allows us to spread knowledge at a very fast rate, and to build upon that knowledge faster than we have ever built knowledge before (even normalized for the size of the global population). People who are interested in the Semantic Web are looking to make this system even more efficient and potentially bring about another revolution in knowledge sharing (though I have quiet doubts at this point).
The first step in building on knowledge, however, is capturing it. I have a pretty poor memory, as do several of my friends. This means that we are often rediscovering our own theories years later, much to everyone's amusement. This stems in part from the fact that we don't take notes when we are having interesting conversations. Often it is not possible to take notes, as we're driving around together or talking on mobile phones. Having a transcription of everything we say may not prevent us from re-creating ideas, but it certainly can reduce how often that happens, and it allows us to look back at things we've talked about, as they were captured, and build upon those ideas.
Perhaps it is ironic that I am interested in contributing to the very glut of information that I believe will increasingly make the Internet hard to search through for quite a while yet, but I already have this blog, so why not everything I say as well?
Ok, on to the less philosophical reasons this is interesting.
Imagine that I was able to get this process to work very efficiently, so that the transcription knew my voice model well enough to have an accuracy rate of more than 99%.
Suppose I was able to carry this complete system with me, and that it operated in real time (there is no reason the speech-to-text software can't do that as that's what it was originally built for, and it was built to run on much slower computers than I currently have).
Suddenly, you can search everything you've said in real time, playing back the actual audio or displaying the transcription, depending on what you need. Add in some other metadata, like date and time, GPS coordinates, even which direction you were facing, and you can run searches like: "What was I saying last Tuesday when I was sitting at Starbucks?"
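Here's a rough sketch of what that kind of search might look like; the record layout, field names, and helper are entirely my own invention:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Utterance:
        text: str          # the transcribed words
        when: datetime     # when they were said
        lat: float         # GPS latitude at the time
        lon: float         # GPS longitude at the time

    def search(log, phrase=None, after=None, before=None, near=None, radius_km=0.5):
        """Find utterances by phrase, time window, and/or rough location."""
        hits = []
        for u in log:
            if phrase and phrase.lower() not in u.text.lower():
                continue
            if after and u.when < after:
                continue
            if before and u.when > before:
                continue
            if near:
                # crude flat-earth distance, good enough over a few blocks
                dlat_km = (u.lat - near[0]) * 111.0
                dlon_km = (u.lon - near[1]) * 78.0   # at roughly 45 degrees latitude
                if (dlat_km**2 + dlon_km**2) ** 0.5 > radius_km:
                    continue
            hits.append(u)
        return hits

    log = [Utterance("the usability explosion", datetime(2004, 12, 14, 10, 30), 37.79, -122.41)]
    print(search(log, phrase="usability"))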
Now imagine that you attach a timestamp to every single transcribed word (I have to believe this is trivial by now, but that no one has had a good use for it). You can then integrate other information, like pictures, documents, and so on, into a single stream of information, referenced via your transcription stream with the other information sources included right in the interface.
Now, if I had a conversation with you, and we were both recording our side of the conversation, I could send you my transcription, and you could send me your transcription, and we could integrate them to have the entire conversation as it was originally spoken.
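If each side keeps its words timestamped and time-sorted, reassembling the conversation is just a merge of two sorted streams. A minimal sketch (the sample data is invented):

    import heapq

    # Each party's transcription: (seconds_from_start, speaker, words),
    # already in time order as it was transcribed.
    mine  = [(0.0, "me", "So about that trip..."), (4.2, "me", "Right, exactly.")]
    yours = [(2.0, "you", "Wait, which trip?")]

    # heapq.merge interleaves the two sorted streams by timestamp,
    # reconstructing the conversation as it was actually spoken.
    for t, speaker, words in heapq.merge(mine, yours):
        print(f"{t:5.1f}s  {speaker}: {words}")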
If my system knew who you were, as the sender of the other half of the conversation, I now have new data I can search by.
If you and I were connected via some form of network, I could broadcast my transcription to you in real-time.
And if you didn't speak my language, you could automatically route my transcription through a translation service that fed you back a translated document pretty much in real time.
From this, you could have a text-to-speech application read the translation into your headphones in real time as I'm talking.
Perhaps you could even use my own voice model, which I might choose to make available to you, so that the translation you are hearing of my words actually sounds like me. If my software is able to discern that I am yelling or whispering, that data might also get passed along as part of the metadata stream, allowing for nuance.
Given this, there's no reason we have to be in the same location, or even connected via any voice application. I could just send you my transcription in real time and let your computer speak it to you, and vice versa, greatly reducing the amount of bandwidth required for carrying the conversation electronically.
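Strung together, the whole chain is just function composition over the transcription stream. A toy sketch with every service stubbed out (none of these functions exist; they stand in for whatever transcription, translation, and text-to-speech services would eventually fill those roles):

    # Pretend services -- real ones would call out over the network.
    def translate(text, target_lang):
        return f"[{target_lang}] {text}"   # stand-in for a translation service

    def speak(text, voice_model):
        print(f"({voice_model}'s voice) {text}")   # stand-in for text-to-speech

    def relay(transcription_stream, listener_lang, voice_model):
        """Send my words to you: translate each utterance, then speak it aloud."""
        for utterance in transcription_stream:
            speak(translate(utterance, listener_lang), voice_model)

    relay(["Hello there.", "Can you hear me?"], listener_lang="fr", voice_model="Rob")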
What becomes interesting here is that we end up building an infrastructure on which new applications can be created to provide capabilities we never even thought about, just by transforming a type of information we constantly put out (in fact, THE information we constantly put out) into something that can be manipulated, transmitted, and combined with other things, in just the same way that the telephone, the highway, and the Internet have done. It probably wouldn't have quite the same transformative effect as those things, but you have to admit, it's interesting.
Mappr
Weeeee, a fun information-swarming idea. Essentially, Mappr takes geographic information from photos saved on Flickr.com (which I use from time to time) and places the photos on a map of the US, allowing you to virtually explore real space via OPP (other people's photos...has enough time passed for that reference to be cool yet, or is it still lame?).
It looks like it's not up and running quite yet, but if it can do what they say it can, or even some part of that, it should be a very interesting project.
I'd be interested to know, since they state they don't use GPS, what kind of location data they're using. Could be interesting for my Self-Documenting Life project that I'm currently running.
stamen: mappr: geo-location of tagged images on flickr.com
Saturday, December 11, 2004
The Usability Explosion and the Heavy Approach to Technology Review
Here's a comment I left at Future Now on a post having to do with the apparent explosion of usability books, and on how technology should be reviewed before being released:
My own guess as to why usability is getting such focus is basically twofold: 1) the maturation of the Web as a tool front end; and 2) the increasing speed at which different products that perform the same function, and new products that perform new functions, get to market.
In the halcyon days of the VCR, interfaces that came from an engineering group as the end point of function (as seen by them) had a relatively small social cost. New products simply took longer to get to market and were expected to stay on the market longer. Relatedly, there were not many other product types competing for basically the same functionality. Interfaces could be revved over time, when it made economic sense and when someone happened to improve on existing metaphors.
Once the web began to be used seriously by corporations as an interface to communicate with customers, and to have customers participate more directly in processes, it became critical that new users be able to understand an interface in a relatively short period of time. This meant there was economic incentive for a class of workers to take up the study of interface design and usability. More people in this line of work increases the probability that more people will apply it outside their original domain.
At the same time, product developers have significantly less room to let function win out over usability, as products come to market quickly and often face competition from products built for entirely different markets. Consumers need to "get it" faster in order to get at the underlying functionality in a way that is useful to them. Witness the explosion of the iPod, whose success in no small part had to do with the introduction of an intuitive interface metaphor.
Finally, of course, there is the increased complexity of the functionality introduced in modern devices. It is my belief that products will continue to evolve to the point where pressing "Play" is too much of an oversimplification to be useful, as what new products do becomes more sophisticated, inter-related, and subtle.
On a different point, I'd like to disagree with the idea that the time to discuss the "ends [a technology] will serve is before we deploy it." Coming up with a reason for being for a technology before it is ever created is not just inefficient thinking but, I would argue, counter to human nature in ways that would be difficult to put the brakes on.
It has been my observation that the most interesting uses of a technology are often ones no one thought of until it launched. The people building a technology are not always the best people (for good or ill) to determine what it can best be used for. Since the web site didn't necessarily say the creators should do the discussing, I should also say that it would be quite difficult for a technology in some sort of review period to get the attention of the kinds of people who might extend it elegantly. It would be very difficult to review all proposed technologies to determine what interesting things could be done with them, and frequently it takes a period of people getting used to how a new technology works (what it's all about) before the really good ideas spring forth. Equally likely is that transformative technologies need a period of maturation before they arrive at a point (either technically, or through adoption) where new ideas can take root.
This is not to say, however, that it's not worth talking about these things AS they are being deployed, and after; and the good thing is that people actually tend to do this. I think what would be most useful is a good methodology for picking out the valid arguments about a technology from the noise, and allowing those to feed back into the system, either voluntarily or via regulation where appropriate.
I guess what I'm saying is that waiting to get all of your ducks in a row before releasing a technology doesn't feel right. Rather an iterative approach of some sort might fit the bill very well.
Monday, December 06, 2004
Local Economics
On a note related to my previous entry, it is interesting to consider the effect that RFID-enabled smart shelves could have on item pricing, better reflecting the supply and demand relationship of store inventory and/or sell rates. If you could search for products locally, and that inventory were updated in real time, then stores could practice supply-and-demand pricing at a local level, raising the price on products no other store currently has in stock, and lowering prices to be more competitive when other local stores DO have an item in stock. Further, if pricing were handled in this manner, you might want potential customers to print out a page indicating the price they were quoted through the system, so that when they came into the store they got that price (if it was within the expiration period of the offer), which would let you track leads and potentially support ongoing customer information tracking.
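A minimal sketch of the kind of local pricing rule I'm imagining (the multipliers and thresholds are made up purely for illustration):

    def local_price(base_price, in_stock, competitors_with_stock):
        """Adjust price by local scarcity: up when we're the only source, down when we're not."""
        price = base_price
        if in_stock and competitors_with_stock == 0:
            price *= 1.15   # scarcity premium: no nearby store has it
        elif competitors_with_stock > 2:
            price *= 0.95   # heavy local competition: shave the price
        return round(price, 2)

    print(local_price(49.99, in_stock=True, competitors_with_stock=0))  # 57.49
    print(local_price(49.99, in_stock=True, competitors_with_stock=4))  # 47.49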
Local Product Search
Wouldn't it be nice if you could do a search for a product you want to buy and be told all of the stores within a certain radius that carry it? Why do you think there is no such thing (or, if there is, why haven't I been able to find it)? Is it primarily an issue of local inventory systems not being sophisticated enough to support this? Do you think major retailers like Office Depot and Best Buy might be able to do this, or are even they not prepared to make local inventory available?