Kirby argues that diegetic prototypes can transcend the cinematic space and influence the real world, while speculative scenarios, he claims, are dismissed as fantasies by the audience. But who judges what is accepted as “realistic” and what is not? The examples Kirby cites appear to use hindsight as the basis for judgement: all of them feature technologies that later became reality, which raises the question of whether it is even possible to identify a diegetic prototype at the time a film is released in theaters.
Is the measure of realism (in other words, whether something is a diegetic prototype or not) dependent on how much the audience desires the technology? Is it measured by how realistically the characters treat the technology? How does one measure realism in a space (the cinema) that is supposed to present falsity as truth?
Dourish and Bell’s ideas run directly counter to Kirby’s; they seem to state that diegetic prototypes are unnecessary for public acceptance of technology. The examples they cite (Dr. Who, Star Trek, etc.) hardly attempt to be scientifically accurate. Is exposure the key factor in public acceptance? Dourish and Bell purposely cite television shows because they believe the shows’ long runs give them more of a presence and more of an impact on audiences’ lives. Could fantastical technologies that viewers are exposed to over long periods of time have an impact similar to that of diegetic prototypes?
I feel the same way. Speaking of the Dr. Who and Star Trek examples, I don’t think it’s easy to measure which movie or TV series has the more direct effect on our lives in the long run. It’s true that long runs reinforce the technologies/ideas/prototypes, add value, and perhaps help the audience adopt the technologies more easily, as if they were already part of their lives. Is there a way to measure how successfully a show or movie embedded a prototype into our lives? Or can we only do this after a long time has passed and say, “Hey, we have iPhones thanks to communicators”? Minority Report’s gestural interface didn’t take long to enter our lives and become a product of Oblong Industries and Raytheon, and it was later commercialized with Microsoft Kinect. It’s not the same thing, obviously, but how can we predict (or can we predict at all) whether it will become the main way of interacting fifty years from now? Star Trek introduced communicators in the 1960s, and it’s easy to look back and admire or laugh. I find it amazing that Minority Report included an intentional false reading to make the interface look and feel more real. Do you think it is important to keep in mind that technology comes with errors and limitations while designing prototypes/environments? Could this help prototypes get adopted faster?
Really good points! I agree that the line between what’s diegetic and what’s speculative seems much easier to draw in retrospect, unless intentionality is part of the distinction? Can something intended to be fanciful and absurdist, with no grounding in reality, end up being seen as a diegetic prototype if it unintentionally convinces the audience it’s a good idea?
I’m especially interested in the question about the role of exposing viewers to fantastical technologies over time. I wonder if that distinction matters when features from that media take hold in daily conversation anyway, even if no one necessarily expects the technology to be realistic. Every time I reference the TARDIS when I talk about wanting to edit something from my past, does it serve the same role as other diegetic prototypes? I think it does.
First of all, dare I say it, the “Resistance is futile” paper by Dourish and Bell is the best paper of all time (the best I’ve read in a while, at least). However, I am confused by their goals. They begin by explaining that design-oriented research is an act of collective imagining in which we (the royal we? who’s we?) work together to imagine a better future. Then the cultural backdrop for many researchers in the community is presented (extracted painstakingly from popular science fiction shows), and finally themes common to ubiquitous computing research and science fiction are laid out. Is the key takeaway, then, that researchers should also design the cultural environments their creations live in? Is this even possible?
My concern with the way the fiction mentioned influences the future direction of research and development is that the futures portrayed through diegetic prototypes feature technologies that respond to a “cool” expectation rather than to an actual, important need.
There is certainly a lot of coolness to the gestural interfaces of Minority Report, and it is certainly cool to see them present, to a degree, in the world today. But could the resources invested in gestural interfaces be used differently, to create not just a more futuristic way of interacting with our technology but technologies that respond to real problems such as hunger, to give an off-the-top-of-my-head example?
I’m struck by the Dourish and Bell bit about minor failures. In Star Trek, everything’s gleaming and functional (besides the odd catastrophe) because the Enterprise crew are part of the Federation. Yeah, in Doctor Who or Blake’s 7, stuff breaks down, but the message we get is that it’s because they’re renegades or runaways.
This seems like a really optimistic future, and kind of the opposite of how things work today. The larger and more powerful organizations get, the harder it is for them to make something that works smoothly, and the more likely you are to get “devices that operate with creaks and groans, or erratically, or not at all” (e.g., every government website, or even big-company products from MS/Google/Apple).
Anyway, this seems like a problem with analyzing science fiction: there’s not a lot of boring everyday. We see Data the robot on Star Trek, but we don’t see the million crummy robot ATMs or toasters or websites that half work. Is that okay? Or does sci-fi leave us with a few big ideas and a lot of blind spots?
This diverges a bit from the articles, but reading about the diegetic prototypes for rockets in films promoting space travel in the 60s reminded me of the stories my dad has told me about launching model rockets when he was a kid.
(not a picture of my dad, but he did let me have a couple of photos of him at a model rocket launch meet)
Before reading the articles, I hadn’t really thought about movies as a method of exposing current research to stimulate action, inspiration, or funding from the public. The example of model rockets, I think, shows that there are other approaches to engaging the public besides producing a multi-million-dollar film. My dad’s hobbies in model rocketry and astronomy were part of the larger public excitement about space travel. They certainly had an impact on him later in life (he’s a computer scientist and has worked with a team of researchers analyzing data from supernovae).
The contemporary equivalent would be the hacking and maker culture around electronics, robotics, programming, and 3D printing.
Although these projects have much lower fidelity than diegetic prototypes, they still give the public a way to understand and appreciate the technologies involved, making them seem less foreign. Creativity and designing your own projects now play a large part in these activities; places like Assemble (assemblepgh.org) in Pittsburgh and FIRST Robotics (http://www.usfirst.org/roboticsprograms/frc – started by Dean Kamen) come to mind as just two examples of organizations that take this approach.
Can hands-on experience be as impactful to the public as high-fidelity technology visualized by the entertainment industry?
One of my favorite recent science fiction movies is Primer, a notoriously low-budget ($7,000) film that deals with the discovery and implications of time travel. Unlike many of the films and diegetic prototypes mentioned in the readings, this film feels most convincing because of its characters. The guys who accidentally invent a time machine in their garage are, to quote the director, “scientifically accomplished but ethically, morons.” They immediately start using their invention to play the stock market, and things go downhill from there. And did I mention that their discovery was accidental, like so many other great discoveries in science? They are (almost) as ignorant as the audience about what the side effects will be. So I guess my question is: do we as thinkers need a sleek prototype showing how great things could be, or can a convincingly unstable, hacked-together, and dangerous one actually be more effective at highlighting the social and cultural aspects that come with a technology?
In “The Future is Now,” we see how diegetic prototypes affect the current development and social perception of technology in big ways. I wonder whether this is an ethical trend, though. Kirby shows how much theoretical precedence we grant to directors and others in the entertainment industry. Without discrediting the good-natured intentions of these figures, I wonder if we should put more emphasis on the ethics and humanistic relevance of the diegetic prototypes in our media. With such potential to incite change in the world, shouldn’t we focus on the most pressing issues we currently face instead of on more glamorous depictions of gestural computing and virtual realities?