<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:media="http://search.yahoo.com/mrss/"
	>

<channel>
	<title>resources &#8211; the Kurzweil Library</title>
	<atom:link href="https://www.thekurzweillibrary.com/resources/feed" rel="self" type="application/rss+xml" />
	<link>https://www.thekurzweillibrary.com</link>
	<description>Tracking breakthroughs in tech, science, and world progress.</description>
	<lastBuildDate>Wed, 17 Dec 2025 10:54:08 +0000</lastBuildDate>
		<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
		<item>
		<title>Google I/O 2014 &#124; video: Ray Kurzweil presents &#8220;Biologically Inspired Models of Intelligence&#8221;</title>
		<link>https://www.thekurzweillibrary.com/google-io-2014</link>
		<comments>https://www.thekurzweillibrary.com/google-io-2014#respond</comments>
		<pubDate>Fri, 20 Jun 2014 06:30:04 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/IO-2014-140x62.jpg" width="140" height="62" />
		
				<category><![CDATA[events]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=230382</guid>
		<description><![CDATA[Google I/O 2014 &#124; Ray Kurzweil: &#8220;Biologically Inspired Models of Intelligence,&#8221; filmed June 25, 2014 Google &#124; For decades Ray Kurzweil has explored how artificial intelligence can enrich and expand human capabilities. In his latest book How to Create a Mind, he takes this exploration to the next step: reverse-engineering the brain to understand precisely how [&#8230;]]]></description>
			<content:encoded><![CDATA[<p>Google I/O 2014 | Ray Kurzweil: &#8220;Biologically Inspired Models of Intelligence,&#8221; filmed June 25, 2014</p>
<p><object width="640" height="360"><param name="movie" value="//www.youtube.com/v/MG_nOddk01E?hl=en_US&amp;version=3&amp;rel=0"/><param name="allowFullScreen" value="true"/><param name="allowscriptaccess" value="always"/><embed src="//www.youtube.com/v/MG_nOddk01E?hl=en_US&amp;version=3&amp;rel=0" type="application/x-shockwave-flash" width="640" height="360" allowscriptaccess="always" allowfullscreen="true"/></object></p>
<p>Google | For decades Ray Kurzweil has explored how artificial intelligence can enrich and expand human capabilities. In his latest book <em>How to Create a Mind</em>, he takes this exploration to the next step: reverse-engineering the brain to understand precisely how it works, then applying that knowledge to create intelligent machines.</p>
<p>In the near term, Ray&#8217;s project at Google is developing artificial intelligence based on biologically inspired models of the neocortex to enhance functions such as search, answering questions, interacting with the user, and language translation.</p>
<p>The goal is to understand natural language to communicate with the user as well as to understand the meaning of web documents and books. In the long term, Ray believes it is only by extending our minds with our intelligent technology that we can overcome humanity&#8217;s grand challenges.</p>
<hr />
<p style="text-align: center;"><a href="http://googledevelopers.blogspot.com/2014/05/google-io-2014-start-planning-your.html" target="_blank"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-230418" title="IO 2014" src="http://www.thekurzweillibrary.com/images/IO-2014.jpg" alt="" width="347" height="155" srcset="https://www.thekurzweillibrary.com/images/IO-2014.jpg 347w, https://www.thekurzweillibrary.com/images/IO-2014-140x62.jpg 140w, https://www.thekurzweillibrary.com/images/IO-2014-259x115.jpg 259w, https://www.thekurzweillibrary.com/images/IO-2014-280x125.jpg 280w" sizes="auto, (max-width: 347px) 100vw, 347px" /></a></p>
<p style="text-align: center;"><a href="http://www.thekurzweillibrary.com/images/Google-IO-Biologically-inspired-models-of-intelligence-screenshot1.png"><img loading="lazy" decoding="async" class="aligncenter  wp-image-230687" title="Google IO - Biologically inspired models of intelligence screenshot" src="http://www.thekurzweillibrary.com/images/Google-IO-Biologically-inspired-models-of-intelligence-screenshot1.png" alt="" width="522" height="372" srcset="https://www.thekurzweillibrary.com/images/Google-IO-Biologically-inspired-models-of-intelligence-screenshot1.png 745w, https://www.thekurzweillibrary.com/images/Google-IO-Biologically-inspired-models-of-intelligence-screenshot1-140x99.png 140w, https://www.thekurzweillibrary.com/images/Google-IO-Biologically-inspired-models-of-intelligence-screenshot1-259x184.png 259w, https://www.thekurzweillibrary.com/images/Google-IO-Biologically-inspired-models-of-intelligence-screenshot1-680x484.png 680w, https://www.thekurzweillibrary.com/images/Google-IO-Biologically-inspired-models-of-intelligence-screenshot1-280x199.png 280w" sizes="auto, (max-width: 522px) 100vw, 522px" /></a></p>
<p style="text-align: center;"><a href="http://www.thekurzweillibrary.com/images/Google-IO-About-the-speakers-Ray-Kurzweil-screenshot1.png"><img loading="lazy" decoding="async" class="aligncenter  wp-image-230689" title="Google IO - About the speakers Ray Kurzweil screenshot" src="http://www.thekurzweillibrary.com/images/Google-IO-About-the-speakers-Ray-Kurzweil-screenshot1.png" alt="" width="522" height="411" srcset="https://www.thekurzweillibrary.com/images/Google-IO-About-the-speakers-Ray-Kurzweil-screenshot1.png 745w, https://www.thekurzweillibrary.com/images/Google-IO-About-the-speakers-Ray-Kurzweil-screenshot1-140x110.png 140w, https://www.thekurzweillibrary.com/images/Google-IO-About-the-speakers-Ray-Kurzweil-screenshot1-259x204.png 259w, https://www.thekurzweillibrary.com/images/Google-IO-About-the-speakers-Ray-Kurzweil-screenshot1-680x535.png 680w, https://www.thekurzweillibrary.com/images/Google-IO-About-the-speakers-Ray-Kurzweil-screenshot1-280x220.png 280w" sizes="auto, (max-width: 522px) 100vw, 522px" /></a></p>
<p>Google I/O | From making your apps as powerful as they can be to putting them in front of hundreds of millions of users, our focus at Google is to help you <strong>design, develop and distribute</strong> compelling experiences for your users. At Google I/O 2014, happening June 25-26 at Moscone West in San Francisco, we’re bringing you sessions and experiences ranging from design principles and techniques to the latest developer tools and implementations to developer-minded products and strategies to help distribute your app.</p>
<p>If you&#8217;re coming in person, the schedule will give you more time to interact in the Sandbox, where partners will be on hand to demo apps built on the best of Google and open source, and where you can interact with Googlers 1:1 and in small groups. Don’t worry, though&#8211;we’ll have plenty of content online for those following along remotely! Visit the schedule on the <a href="https://www.google.com/events/io">Google I/O website</a> (and check back often for updates). As you start your I/O planning, we want to highlight the experiences we’re working on to help you build and grow your apps:</p>
<ul>
<li><strong>Breakout sessions:</strong> This year, we’ll once again bring you a deep selection of technical content, including sessions such as &#8220;<a href="https://www.google.com/events/io/schedule/session/9cba49df-b7b3-e311-b30e-00155d5066d7">What&#8217;s New in Android</a>&#8221; and &#8220;<a href="https://www.google.com/events/io/schedule/session/a8ff977e-17c0-e311-b297-00155d5066d7">Wearable computing with Google</a>&#8221; from Android, Chrome and Cloud, and cross-product, cross-platform implementations. There will be a full slate of design sessions that will bring to life Google’s design principles and teach best practices, and an update on how our monetization, measurement and payment products are better suited than ever to help developers grow the reach of their applications. Sessions from Ray Kurzweil, Ignite and Women Techmakers will take the stage and make us uncomfortably excited about what is possible. The first sessions are now listed; keep checking back for more.</li>
<li><strong>Workshops and code labs:</strong> Roll up your sleeves, dig in to hands-on experiences and code. Learn how to build better products, apply quantitative data to user experiences, and prototype new Glassware through interactive workshops on UX, experience mapping and design principles. To maximize your learning and give you more interaction with Googlers and peers, visit our coding work space, with work stations preloaded with self-paced modules. Dive into Android, Chrome, Cloud and APIs with experts on hand for guidance.</li>
<li><strong>Connect with Googlers in the sandbox:</strong> Check out your favorite Google products and meet the Googlers who built them. From there, join a ‘Box talk or app review, ranging from conceptual prototyping, to performance testing with the latest tools, to turning your app into a successful business.</li>
<li><strong>Learn from peers at the partner sandbox:</strong> We love to see partners build cool things with Google, and have invited a few of them to showcase inspiring integrations of what’s possible. You will be able to see demos and talk in-depth with them about how they designed, created and grew their apps.</li>
<li><strong>Beyond Moscone, with I/O Extended:</strong> Experience I/O around the world, in an event setting, with <a href="https://www.google.com/events/io/io-extended">I/O Extended</a>. The I/O Extended events include everything from live streaming sessions from I/O to local speaker sessions and hackathons. It is great to see so many events taking place around the world, and we can&#8217;t wait to see I/O Extended events have another strong year. &#8212; <a href="http://googledevelopers.blogspot.com/2014/05/google-io-2014-start-planning-your.html" target="_blank">Google Developers Blog</a></li>
</ul>
<hr />
<p><strong>related viewing from Google I/O 2013 highlights reel:</strong></p>
<p><object width="640" height="360" classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="//www.youtube.com/v/oWDrQa6jymc?version=3&amp;hl=en_US&amp;rel=0" /><param name="allowfullscreen" value="true" /><embed width="640" height="360" type="application/x-shockwave-flash" src="//www.youtube.com/v/oWDrQa6jymc?version=3&amp;hl=en_US&amp;rel=0" allowFullScreen="true" allowscriptaccess="always" allowfullscreen="true" /></object></p>
<p>Google I/O | Relive the moments of Google I/O 2013, including the keynote, sessions, developer sandbox and after hours.</p>
<hr />
<p>related reading:<br />
Google | <a href="https://www.google.com/events/io" target="_blank">Google I/O 2014</a><br />
Google | <a href="https://www.google.com/events/io/io14videos" target="_blank">Google I/O 2014 videos</a></p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/google-io-2014/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Drasko Vucevic&#8217;s &#8216;Machine&#8217;s Road Toward Singularity&#8217;</title>
		<link>https://www.thekurzweillibrary.com/drasko-vucevic-machine-singularity</link>
		<comments>https://www.thekurzweillibrary.com/drasko-vucevic-machine-singularity#respond</comments>
		<pubDate>Sun, 20 Feb 2011 01:12:14 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/Drasko-Singularity-140x96.png" width="140" height="96" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=109954</guid>
		<description><![CDATA[Drasko Vucevic &#124; An audio-visual collaboration between Drasko V, AudioAndroid, Yoko K., Yongsub Song and Yongchan Kim, presenting the idea of Singularity &#8212; &#8220;Machine&#8217;s Road Toward Singularity&#8221;: exponential growth in technology, radical changes in society, expectance and acceptance of technology. The existential threat is yet to be felt, as even a simple smile cannot contain [&#8230;]]]></description>
			<content:encoded><![CDATA[<p>Drasko Vucevic | An audio-visual collaboration between <a href="http://drasko-v.com" target="_blank">Drasko V</a>, <a href="http://www.audioandroid.com" target="_blank">AudioAndroid</a>, <a href="http://aphrodizia.net" target="_blank">Yoko K</a>., <a href="http://vimeo.com/​houseofkiss" target="_blank">Yongsub Song</a> and <a href="http://vimeo.com/​user1751058" target="_blank">Yongchan Kim</a>, presenting the idea of Singularity &#8212; &#8220;Machine&#8217;s Road Toward Singularity&#8221;: exponential growth in technology, radical changes in society, expectance and acceptance of technology.</p>
<p>The existential threat is yet to be felt, as even a simple smile cannot contain the energy and feel humans possess. 판 (pan) is an episode in which something occurs or is played. The mask of the machine is used in Korean traditional folk plays. In the animation, the machine plays 판 for Singularity. &#8212; Drasko V</p>
<p><iframe src="https://player.vimeo.com/video/20031441" width="600" height="338" frameborder="0" allowfullscreen></iframe></p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/drasko-vucevic-machine-singularity/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>PopTech &#124; Graphical expression of human emotion &#8212; video shows surprising consistencies: A new kind of Turing test?</title>
		<link>https://www.thekurzweillibrary.com/graphical-expression-of-emotion</link>
		<comments>https://www.thekurzweillibrary.com/graphical-expression-of-emotion#respond</comments>
		<pubDate>Wed, 16 Feb 2011 00:30:28 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/poptech-logo-140x140.png" width="140" height="140" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=108979</guid>
		<description><![CDATA[PopTech &#124; Designer Orlagh O’Brien asks, &#8220;What if we try to visually represent the emotions that are running through our body?&#8221; She gave a simple emotion-specific quiz to a group of 250 people. Asking respondents to describe five emotions &#8212; anger, joy, fear, sadness, and love &#8212; in drawings, colors, and words, O’Brien ended up with a [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.thekurzweillibrary.com/images/poptech-logo.png"><img loading="lazy" decoding="async" class="alignleft size-full wp-image-108986" title="poptech logo" src="http://www.thekurzweillibrary.com/images/poptech-logo.png" alt="" width="140" height="132" /></a>PopTech | Designer Orlagh O’Brien asks, &#8220;What if we try to visually represent the emotions that are running through our body?&#8221; She gave a simple emotion-specific quiz to a group of 250 people. Asking respondents to describe five emotions &#8212; anger, joy, fear, sadness, and love &#8212; in drawings, colors, and words, O’Brien ended up with a set of media she used to create <a href="http://www.emotionallyvague.com/index.php" target="_blank">Emotionally}Vague</a>, an online graphic interpretation of the project’s results. </p>
<p>She planned to gather the data and then figure out how to visually represent the responses. As the results trickled in, O’Brien realized, as she explained at PopTech, “there was enough data from what people were drawing to suggest patterns of feelings.”</p>
<hr />
<p>http://vimeo.com/18957198</p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/graphical-expression-of-emotion/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Cloverfield special effects from DVD release &#8212; computer graphics, compositing</title>
		<link>https://www.thekurzweillibrary.com/cloverfield-special-effects-computer-graphics-compositing</link>
		<comments>https://www.thekurzweillibrary.com/cloverfield-special-effects-computer-graphics-compositing#respond</comments>
		<pubDate>Tue, 15 Feb 2011 17:22:54 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/cloverfield-poster-140x207.jpg" width="140" height="207" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=108884</guid>
		<description><![CDATA[The making of visual effects for Cloverfield. In the second video: &#8220;Subway Parasites&#8221; segment from the Cloverfield DVD&#8217;s &#8220;Cloverfield Visual Effects&#8221; extra. This 2008 disaster/monster &#8220;mockumentary&#8221; was directed by Matt Reeves, produced by J. J. Abrams and written by Drew Goddard. The film follows six New Yorkers attending a party on the night that a gigantic monster [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.thekurzweillibrary.com/images/cloverfield-poster.jpg"><img loading="lazy" decoding="async" class="alignleft size-medium wp-image-108957" title="cloverfield poster" src="http://www.thekurzweillibrary.com/images/cloverfield-poster-259x383.jpg" alt="" width="139" height="207" srcset="https://www.thekurzweillibrary.com/images/cloverfield-poster-259x383.jpg 259w, https://www.thekurzweillibrary.com/images/cloverfield-poster-140x207.jpg 140w, https://www.thekurzweillibrary.com/images/cloverfield-poster-345x512.jpg 345w, https://www.thekurzweillibrary.com/images/cloverfield-poster.jpg 586w" sizes="auto, (max-width: 139px) 100vw, 139px" /></a>The making of visual effects for <em>Cloverfield</em>. In the second video: &#8220;Subway Parasites&#8221; segment from the<em> Cloverfield</em> DVD&#8217;s &#8220;<em>Cloverfield</em> Visual Effects&#8221; extra. This 2008 disaster/monster &#8220;mockumentary&#8221; was directed by Matt Reeves, produced by J. J. Abrams and written by Drew Goddard.</p>
<p>The film follows six New Yorkers attending a party on the night that a gigantic monster of unknown origin attacks the city. All footage is shot from the perspective of the (fictional) handheld cameras carried by the characters, who film the action as the events unfold. </p>
<p>The careful replication of the New York cityscape, as it&#8217;s destroyed by the massive Godzilla-like monster, was impressive for its complexity and accuracy.</p>
<p>Visual and computer generated effects were produced by <a title="Double Negative (VFX)" href="http://www.dneg.com" target="_blank">Double Negative</a> and <a title="Tippett Studio" href="http://www.tippett.com/" target="_blank">Tippett Studio</a>.  </p>
<p>(<em>footage credit: Paramount Studios/Bad Robot/Double Negative</em>)</p>
<p><iframe width="640" height="390" src="https://www.youtube.com/embed/YT_yh1r3cQg" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="640" height="510" src="https://www.youtube.com/embed/GQ3hOy2yf_c?rel=0" frameborder="0" allowfullscreen></iframe></p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/cloverfield-special-effects-computer-graphics-compositing/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Microsoft Surface&#8217;s vision system can see and interact with objects on the tabletop</title>
		<link>https://www.thekurzweillibrary.com/microsoft-surface-vision-system-can-interacting-with-objects</link>
		<comments>https://www.thekurzweillibrary.com/microsoft-surface-vision-system-can-interacting-with-objects#respond</comments>
		<pubDate>Fri, 11 Feb 2011 22:45:44 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/microsoft-surface-object-interaction-140x108.jpg" width="140" height="108" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=108058</guid>
		<description><![CDATA[Microsoft Surface &#124; Microsoft Surface&#8217;s vision system can see what&#8217;s going on on the tabletop. This allows for all manner of natural user interfaces to be employed both with everyday objects, and objects specifically crafted to work with Surface. Wikipedia &#124; Microsoft Surface is a surface computing platform that responds to natural hand gestures and real [&#8230;]]]></description>
			<content:encoded><![CDATA[<div id="attachment_108072" style="width: 252px" class="wp-caption alignleft"><a href="http://www.thekurzweillibrary.com/images/microsoft-surface-object-interaction.jpg"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-108072" class="size-medium wp-image-108072 " title="microsoft surface object interaction" src="http://www.thekurzweillibrary.com/images/microsoft-surface-object-interaction-259x201.jpg" alt="" width="252" height="195" srcset="https://www.thekurzweillibrary.com/images/microsoft-surface-object-interaction-259x201.jpg 259w, https://www.thekurzweillibrary.com/images/microsoft-surface-object-interaction-140x108.jpg 140w, https://www.thekurzweillibrary.com/images/microsoft-surface-object-interaction.jpg 473w" sizes="auto, (max-width: 252px) 100vw, 252px" /></a><p id="caption-attachment-108072" class="wp-caption-text">(credit: Microsoft)</p></div>
<p>Microsoft Surface | Microsoft Surface&#8217;s vision system can see what&#8217;s going on on the tabletop. This allows for all manner of natural user interfaces to be employed both with everyday objects, and objects specifically crafted to work with Surface.</p>
<p>Wikipedia | Microsoft Surface is a surface computing platform that responds to natural hand gestures and real-world objects. It has a 360-degree user interface and a 30 in (76 cm) reflective surface with an XGA DLP projector underneath that projects an image onto its underside, while five cameras in the machine&#8217;s housing record reflections of infrared light from objects and human fingertips on the surface.</p>
<p>The surface is capable of object recognition and object/finger orientation recognition and tracking, and it supports multi-touch and multi-user interaction. Users can interact with the machine by touching or dragging their fingertips and objects such as paintbrushes across the screen, or by placing and moving placed objects. This paradigm of interaction with computers is known as a natural user interface (NUI).</p>
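<p>As a rough illustration of the vision pipeline described above (a generic sketch in Python with OpenCV, not Microsoft&#8217;s actual Surface software; the threshold and minimum blob size are invented values), fingertips and objects reflect infrared light back at the cameras as bright blobs, and each blob&#8217;s centroid becomes a touch point:</p>
<pre><code># Generic sketch of an infrared blob tracker (not Microsoft's Surface code).
# Assumes OpenCV 4.x; threshold and min_area are illustrative values.
import cv2

def find_touch_points(ir_frame_gray, threshold=200, min_area=30):
    """Return (x, y) centroids of bright IR blobs: fingertips or objects."""
    _, mask = cv2.threshold(ir_frame_gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for contour in contours:
        if cv2.contourArea(contour) >= min_area:
            m = cv2.moments(contour)
            points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return points
</code></pre>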
<p>Using specially designed barcode-style &#8220;Surface tags&#8221; on objects, Microsoft Surface can offer a variety of features: for example, automatically offering additional wine choices tailored to the dinner being eaten, based on the type of wine set on the Surface, or, in conjunction with a password, offering user authentication. A commercial Microsoft Surface unit costs $12,500 (unit only), whereas a developer Microsoft Surface unit costs $15,000 and includes a developer unit, five seats and support.</p>
<p><a href="http://www.youtube.com/user/mssurface" target="_blank"></a></p>
<p>http://www.youtube.com/watch?v=Pytn0b9o_Mw</p>
<p>Wikipedia | Partner companies use the Surface in their hotels, restaurants, and retail stores. The Surface is used to choose meals at restaurants and to plan vacations and spots to visit from the hotel room. Starwood Hotels plans to allow users to drop a credit card on the table to pay for music, books, and other amenities offered at the resort.</p>
<p>MSNBC&#8217;s coverage of the 2008 US presidential election used Surface to share with viewers information and analysis of the race leading up to the election. The anchor analyzes polling and election results, views trends and demographic information and explores county maps to determine voting patterns and predict outcomes, all with the flick of his finger. In some hotels and casinos, users can do a range of things, such as watch videos, view maps, order drinks, play games, and chat and flirt with people between Surface tables.</p>
<p>In AT&amp;T stores, use of the Surface includes interactive presentations of plans, coverage, and phone features; customers can also drop two different phones on the table to view and compare their prices, features, and plans.</p>
<p>http://www.youtube.com/watch?v=D1IpDStL23M</p>
<p>http://www.youtube.com/watch?v=IbCORzYW6lQ</p>
<p><strong>Also see:<br />
</strong><a href="http://www.youtube.com/user/mssurface" target="_blank">Microsoft Surface YouTube Channel</a><br />
<a href="http://www.microsoft.com/surface/" target="_blank">Microsoft Surface website</a></p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/microsoft-surface-vision-system-can-interacting-with-objects/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
	<title>Earth is under siege by alien technology in new sci-fi thriller Battle: Los Angeles</title>
		<link>https://www.thekurzweillibrary.com/alien-technology-in-battle-los-angeles</link>
		<comments>https://www.thekurzweillibrary.com/alien-technology-in-battle-los-angeles#respond</comments>
		<pubDate>Thu, 10 Feb 2011 13:24:20 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/Battle-LA-poster-140x207.jpg" width="140" height="207" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=107875</guid>
	<description><![CDATA[Promos say this film is &#8220;an action thriller about a global offensive initiated by unknown extraterrestrial hostiles.&#8221; Looks scary; check out the two trailers below. Here&#8217;s the film&#8217;s official website from Sony Pictures. The release date is March 2011. And in case you&#8217;re curious, the ominous song lyrics featured in the teaser trailer are: &#8220;The Sun&#8217;s [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.thekurzweillibrary.com/images/Battle-LA-poster.jpg"><img loading="lazy" decoding="async" class="alignleft size-full wp-image-107893" title="Battle LA poster" src="http://www.thekurzweillibrary.com/images/Battle-LA-poster.jpg" alt="" width="180" height="274" /></a>Promos say this film is &#8220;an action thriller about a global offensive initiated by unknown extraterrestrial hostiles.&#8221; Looks scary, check out the two trailers below. Here&#8217;s the film&#8217;s <a href="http://www.battlela.com/" target="_blank">official website</a> from Sony Pictures. The release date is March, 2011.</p>
<p>And in case you&#8217;re curious, the ominous song lyrics featured in the teaser trailer are:</p>
<p>&#8220;The Sun&#8217;s Gone Dim and The Sky&#8217;s Turned Black,&#8221; by <a href="http://www.johannjohannsson.com/" target="_blank">Jóhann Jóhannsson</a> | &#8220;The sun&#8217;s gone dim, and the sky&#8217;s turned black. &#8216;Cause I loved her, and she didn&#8217;t love back. The battle is won, and the war goes on. For she moves on, and I can&#8217;t look back. The stars still shine, and the world is done. And she came back, and I was gone.&#8221;</p>
<p>Here&#8217;s the synopsis from the film&#8217;s website:</p>
<p><a href="http://www.thekurzweillibrary.com/images/Battle-LA-synopsis.png"><img loading="lazy" decoding="async" class="alignleft size-full wp-image-107892" title="Battle LA synopsis" src="http://www.thekurzweillibrary.com/images/Battle-LA-synopsis.png" alt="" width="539" height="303" srcset="https://www.thekurzweillibrary.com/images/Battle-LA-synopsis.png 539w, https://www.thekurzweillibrary.com/images/Battle-LA-synopsis-140x78.png 140w, https://www.thekurzweillibrary.com/images/Battle-LA-synopsis-259x145.png 259w, https://www.thekurzweillibrary.com/images/Battle-LA-synopsis-512x287.png 512w" sizes="auto, (max-width: 539px) 100vw, 539px" /></a></p>
<hr />
<p><iframe width="640" height="390" src="https://www.youtube.com/embed/Yt7ofokzn04?rel=0" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="640" height="390" src="https://www.youtube.com/embed/tAdm9ssE6gk?rel=0" frameborder="0" allowfullscreen></iframe></p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/alien-technology-in-battle-los-angeles/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>3D projection mapping taking the advertising world by storm</title>
		<link>https://www.thekurzweillibrary.com/3d-projection-mapping-taking-the-advertising-world-by-storm</link>
		<comments>https://www.thekurzweillibrary.com/3d-projection-mapping-taking-the-advertising-world-by-storm#respond</comments>
		<pubDate>Tue, 08 Feb 2011 13:44:39 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/3D-Projection-Mapping-140x128.jpg" width="140" height="128" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=107455</guid>
		<description><![CDATA[3D projection mapping has recently emerged as one of the coolest forms of advertising, with big companies like Nokia, Samsung and BMW projecting beautiful 3D video displays on buildings around the world and sharing their campaigns on the web. 3D projection mapping has become something of a recent obsession for me, as well as for [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><img loading="lazy" decoding="async" class="alignleft size-full wp-image-107492" title="3D-Projection-Mapping" src="http://www.thekurzweillibrary.com/images/3D-Projection-Mapping.jpg" alt="" width="200" height="184" srcset="https://www.thekurzweillibrary.com/images/3D-Projection-Mapping.jpg 200w, https://www.thekurzweillibrary.com/images/3D-Projection-Mapping-140x128.jpg 140w" sizes="auto, (max-width: 200px) 100vw, 200px" />3D projection mapping has recently emerged as one of the coolest forms of advertising, with big companies like Nokia, Samsung and BMW projecting beautiful 3D video displays on buildings around the world and sharing their campaigns on the web. 3D projection mapping has become something of a recent obsession for me, as well as for the advertising world. Read more about this technique and how it’s being used by brands after the jump.</p>
<p>A couple of months ago I was picking my mom up from Union Station in Washington, DC. While waiting for her train to arrive I sat outside the train station and noticed that video was being projected across the entire front of the building. Union Station was transformed into the Wild West, a mountain scene and a brick building right before my eyes. As it turns out, the projection was a <a href="http://andysjourney.com/?tag=environmental-projection" target="_blank">promotion</a> for a new History Channel show about the history of America. I was intrigued, and when I got home I decided to do some research and found that similar campaigns, and much cooler ones, were going on all over the world.</p>
<h4>What is 3D Projection Mapping?</h4>
<p>3D projection mapping, according to <a href="http://en.wikipedia.org/wiki/3D_projection" target="_blank">Wikipedia</a>, “is any method of mapping three-dimensional points to a two-dimensional plane.” Using this technique, video artists are able to match video to buildings that they are projecting on and create cool 3D effects, making it look as though buildings are crumbling, changing their structure and more. It really is amazing. Check out the following clips as examples of phenomenal things that can be done using this technique, and then check out some great 3D projection mapping ad campaigns below.</p>
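<p>To make that definition concrete, here is a minimal Python sketch of the perspective projection underneath the technique (the focal length and sample points are invented for illustration): each 3D point is divided by its depth to land on the 2D image plane, and that depth-dependent shrinking is exactly the cue projection-mapped content fakes on a flat facade.</p>
<pre><code># Minimal sketch of perspective projection: 3D points onto a 2D plane.
# The focal length f and the sample points are illustrative values.

def project(point3d, f=1.0):
    """Map a 3D point (x, y, z) to 2D image-plane coordinates."""
    x, y, z = point3d
    return (f * x / z, f * y / z)

# The same corner lands nearer the image center as it recedes, the cue
# that makes a flat wall read as crumbling or receding 3D geometry.
print(project((1.0, 1.0, 2.0)))  # (0.5, 0.5)
print(project((1.0, 1.0, 5.0)))  # (0.2, 0.2)
</code></pre>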
<div class="plyr-vimeo" data-plyr-provider="vimeo" data-plyr-embed-id="3297097"></div>
<div class="plyr-vimeo" data-plyr-provider="vimeo" data-plyr-embed-id="10692284"></div>
<h4>Samsung 3D Projection Mapping in Amsterdam</h4>
<p>I really have no words to describe how amazing Samsung’s 3D projection mapping display in Amsterdam is. They take a building and destroy it, beautify it and transform it. You won’t believe your eyes.</p>
<p>http://www.youtube.com/watch?v=GN3kuVuyxEw</p>
<h4>BMW 3D Projection Mapping Singapore</h4>
<p>BMW launched a fantastic 3D projection mapping installation in Singapore and it was uploaded to YouTube at the beginning of this month. The campaign is projected on two buildings in an intersection and it is truly breathtaking. The campaign was Asia’s first interactive 3D building projection.</p>
<p>http://www.youtube.com/watch?v=WnPFroX9Oa8</p>
<h4>Nokia Ovi Maps – Interactive Projection Mapping</h4>
<p>Nokia paired up with the interactive arts and technology collective <a href="http://www.seeper.com/" target="_blank">Seeper</a> to create a cool interactive projection installation. Onlookers were incorporated into the projection, and different things happened based on how each person moved.</p>
<div class="plyr-vimeo" data-plyr-provider="vimeo" data-plyr-embed-id="11188067"></div>
<h4>AC/DC vs Iron Man 2</h4>
<p>Following their collaboration with Nokia, Seeper was asked to create a 3D projection installation for Sony to promote the new AC/DC <em>Iron Man 2</em> soundtrack. The installation took place at Rochester Castle. Once again, a clip so phenomenal you won’t believe your eyes!</p>
<div class="plyr-vimeo" data-plyr-provider="vimeo" data-plyr-embed-id="11160666"></div>
<p>The great thing about these campaigns is that they really get the attention of passersby (I would kill to see one of these live), but they also capture the attention of people on the web. These clips are just too cool not to go viral and as more and more people find out about 3D projection mapping these clips are sure to garner hundreds of thousands, if not millions, of views on sites like Vimeo and YouTube.</p>
<p>Have you seen 3D projection installations before? What do you think about them and their place in the future of advertising?</p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/3d-projection-mapping-taking-the-advertising-world-by-storm/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Singularity video game by Activision</title>
		<link>https://www.thekurzweillibrary.com/singularity-video-game-by-activision</link>
		<comments>https://www.thekurzweillibrary.com/singularity-video-game-by-activision#respond</comments>
		<pubDate>Tue, 08 Feb 2011 07:48:34 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/Singularity-by-Activision-140x199.jpg" width="140" height="199" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=107345</guid>
	<description><![CDATA[While the concept behind this video game is likely riffing on the various definitions of singularity from physics, as opposed to the metaphorical &#8220;technological Singularity,&#8221; it&#8217;s clear that the term has wormed its way into mainstream pop culture, and is having a strong impact on the cultural zeitgeist. Wikipedia &#124; Singularity is a video game developed by Raven [&#8230;]]]></description>
			<content:encoded><![CDATA[<div id="attachment_107350" style="width: 331px" class="wp-caption alignleft"><a href="http://www.thekurzweillibrary.com/images/Singularity-by-Activision.jpg"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-107350" class="size-full wp-image-107350 " title="Singularity by Activision" src="http://www.thekurzweillibrary.com/images/Singularity-by-Activision.jpg" alt="" width="331" height="462" /></a><p id="caption-attachment-107350" class="wp-caption-text">(Image: Activision Blizzard, Inc. )</p></div>
<p>While the concept behind this video game is likely riffing on the various <a href="http://en.wikipedia.org/wiki/Singularity" target="_blank">definitions of singularity</a> from physics, as opposed to the metaphorical &#8220;technological Singularity,&#8221; it&#8217;s clear that the term has wormed its way into mainstream pop culture, and is having a strong impact on the cultural zeitgeist.</p>
<p>Wikipedia | <em>Singularity</em> is a video game developed by <a href="http://www.ravensoft.com/" target="_blank">Raven Software</a>, published by <a title="Activision" href="http://www.activision.com/index.html" target="_blank">Activision</a> Blizzard, Inc. and released for Microsoft Windows, Xbox 360, and PlayStation 3.</p>
<p><em>Singularity</em> is Raven Software&#8217;s second title based on <a href="http://en.wikipedia.org/wiki/Epic_Games" target="_blank">Epic Games&#8217;</a> <a title="Unreal Engine" href="http://en.wikipedia.org/wiki/Unreal_Engine#Unreal_Engine_3" target="_blank">Unreal Engine 3</a>. The title was announced at Activision&#8217;s E3 2008 press conference.</p>
<p><em>Singularity</em> takes place on a fictional island known as Katorga-12, where Russian experiments involving Element 99 took place during the height of the Cold War. In 1955, a catastrophe involving experiments attempting to form a &#8220;Singularity&#8221; occurred on the island, causing the island&#8217;s very existence to be covered up by the Russian government.</p>
<p>In 2010, a sudden electromagnetic surge from Katorga-12 damages an American spy satellite.</p>
<p>A military reconnaissance team is sent to investigate the uninhabited island, but a second surge causes their helicopter to crash. Captain Nathaniel Renko, a member of the reconnaissance team, enters the abandoned scientific complex on the island, where he phases between 1955 and 2010.</p>
<p><iframe width="640" height="373" src="https://www.youtube.com/embed/Le7DvxPY_MY?rel=0" frameborder="0" allowfullscreen></iframe></p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/singularity-video-game-by-activision/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Animusic&#8217;s virtual reality instruments of the future</title>
		<link>https://www.thekurzweillibrary.com/instrument-of-the-future</link>
		<comments>https://www.thekurzweillibrary.com/instrument-of-the-future#respond</comments>
		<pubDate>Mon, 07 Feb 2011 11:22:55 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/Animusic-LLC-logo.jpg" width="88" height="88" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=103824</guid>
		<description><![CDATA[Animusic&#8217;s fascinating and novel approach to creating and animating virtual instruments is full of possibility for the future of augmented and virtual reality. Wikipedia &#124; Animusic is an American company specializing in the 3D visualization of MIDI-based music. Founded by Wayne Lytle, the company is known for its Animusic compilations of computer-generated animations, based on MIDI [&#8230;]]]></description>
			<content:encoded><![CDATA[<div id="attachment_107270" style="width: 398px" class="wp-caption alignleft"><a href="http://www.thekurzweillibrary.com/images/animusic.jpg"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-107270" class="size-large wp-image-107270 " title="animusic" src="http://www.thekurzweillibrary.com/images/animusic-512x384.jpg" alt="" width="398" height="270" /></a><p id="caption-attachment-107270" class="wp-caption-text">A scene from an Animusic animated musical video. (credit: Animusic, LLC)</p></div>
<p>Animusic&#8217;s fascinating and novel approach to creating and animating virtual instruments is full of possibility for the future of augmented and virtual reality.</p>
<p>Wikipedia | Animusic is an American company specializing in the 3D visualization of MIDI-based music. Founded by <a title="Wayne Lytle" href="http://en.wikipedia.org/wiki/Wayne_Lytle" target="_blank">Wayne Lytle</a>, the company is known for its <em>Animusic</em> compilations of computer-generated animations, based on MIDI events processed to simultaneously drive the music and on-screen action, leading to and corresponding to every sound. Unlike many other music visualizations, the music drives the animation.</p>
<p>While other productions might animate figures or characters to the music, the animated models in <em>Animusic</em> are created first, and are then programmed to follow what the music &#8220;tells them&#8221; to. &#8216;Solo cams&#8217; on the Animusic DVDs show how each instrument actually plays through a piece of music from beginning to end. Many of the instruments appear to be robotic or play themselves using curious methods to produce and visualize the original compositions. The animations typically feature dramatically-lit rooms or landscapes.</p>
<p><img loading="lazy" decoding="async" class="alignleft size-full wp-image-118404" title="Animusic LLC logo" src="http://www.thekurzweillibrary.com/images/Animusic-LLC-logo.jpg" alt="" width="76" height="70" />The music of <em>Animusic</em> is principally pop-rock based, consisting of straightforward sequences of triggered samples and digital patches mostly played &#8220;dry&#8221;; i.e., with few effects. There are no lyrics or voices, save for the occasional chorus synthesizer. According to the director, most instrument sounds are generated with software synthesizers on a music workstation.</p>
<p>Many sounds resemble stock patches available on digital keyboards, subjected to some manipulation, such as pitch or playback speed, to enhance the appeal of their timbre. The animation is created procedurally with their own proprietary MIDImotion software. Discreet 3D Studio Max was used for modeling, lighting, cameras, and rendering. Maps were painted with Corel Painter, Deep Paint 3D, and Photoshop. They have also created their own software called AnimusicStudio.</p>
<hr />
<p>Animusic LLC | What is Animusic? Virtual Instruments performing with precision timing. Individual music animations, or &#8220;music videos&#8221; ranging from about 3 to 6 minutes. As a collection, they form &#8220;visual albums,&#8221; in the form of DVDs. Like records where you can see the music. <em>Animusic 1</em> has 7 completely different animations; <em>Animusic 2</em> has 8. Both have quite a few bonus features, too. It&#8217;s all created digitally, utilizing a process similar to that used to produce computer animated movies (although we apply our own &#8220;secret formula,&#8221; essentially causing the instruments to magically animate themselves). The purely imaginary instruments perform in their native settings. The designs are fairly concrete. We aim for enjoyable virtual settings &#8212; existing only on the screen.</p>
<p><em>This digitally created video, below, shows a virtual instrument performing music. The CGI imagery is generated and matched to the music using Animusic&#8217;s MIDImotion software. (video credit: Animusic, LLC)</em></p>
<p><iframe width="640" height="385" src="https://www.youtube.com/embed/toXNVbvFXyk?rel=0" frameborder="0" allowfullscreen></iframe></p>
<p>People have often asked us what software we use, and if it&#8217;s available commercially. Animusic uses a production pipeline based on proprietary software we call ANIMUSIC|studio. It is a MIDI sequencer and animation system based on a visual programming language (looks like boxes connected with wires). At the core is our motion generation software library (in its 5th generation) which we have come to call MIDImotion.</p>
<p>None of our software is currently available commercially. We use commercial software for modeling, shading and rendering, while the instrument animation is always calculated procedurally using custom-created software. Our current pipeline hinges on a total rewrite of ANIMUSIC|studio from the ground up (more about that in this Newsletter). It&#8217;s based on scene graph technology, has a new sequencer, and even MIDImotion was re-written to be much more real-time.</p>
<p>http://www.youtube.com/watch?v=hyCIpKAIFyo</p>
<p>And as much as we liked so many things about 3ds Max, it was time to make all things new. So we&#8217;ve moved to SoftImage XSI (and maybe a little Z-Brush) for modeling, and back to RenderMan for rendering. ANIMUSIC|studio does all the sequencing internally, so no more exporting and importing MIDI files. Instead MIDI is sent over Gigabit Ethernet to a second workstation dedicated to hosting VST Software Synthesizers.</p>
<p><strong>More about MIDImotion</strong><br />
Without MIDImotion, animating instruments using traditional &#8220;keyframing&#8221; techniques would be prohibitively time-consuming and inaccurate. By combining motion generated by approximately 12 algorithms (each with 10 to 50 parameters), the instrument animation is automatically generated with sub-frame accuracy. If the music is changed, the animation is regenerated effortlessly.</p>
<p>Our technique differs significantly from reactive sound visualization technology, as made popular by music player plug-ins. Rather than reacting to sound with undulating shapes, our animation is correlated to the music at a note-for-note granularity, based on a non-real-time analysis pre-process. Animusic instruments generally appear to generate the music heard, rather than respond to it. At any given instant, not only do we take into account the notes currently being played, but also notes recently played and those coming up soon. These factors are combined to derive &#8220;intelligent,&#8221; natural-moving, self-playing instruments. And although the original instruments created for our DVDs are often somewhat reminiscent of real instruments, the motion algorithms can be applied to arbitrary graphics models, including non-instrumental objects and abstract shapes.</p>
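<p>As a toy illustration of that note-for-note, look-ahead idea (a sketch in Python for this article, not Animusic&#8217;s MIDImotion code; the lead time and note format are invented), a self-playing instrument can scan the note list for the next strike and start moving toward it before the note actually sounds:</p>
<pre><code># Toy sketch of look-ahead animation driven by a note list, not by audio.
# This is not Animusic's MIDImotion; LEAD_TIME and the format are invented.

LEAD_TIME = 0.4  # seconds of anticipation before each strike

def arm_target(t, notes):
    """notes: time-sorted (time, pitch) pairs; return the pitch aimed at."""
    played, upcoming = [], []
    for time, pitch in notes:
        (upcoming if time >= t else played).append((time, pitch))
    if not upcoming:
        return played[-1][1]           # score is over: rest on the last note
    next_time, next_pitch = upcoming[0]
    if next_time - t >= LEAD_TIME:     # strike still far off: hold position
        return played[-1][1] if played else next_pitch
    return next_pitch                  # commit to the upcoming strike early

# Two notes: at t=0.7 the arm is already heading for the note at t=1.0.
score = [(0.5, 60), (1.0, 64)]
print(arm_target(0.7, score))  # 64
</code></pre>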
<p>http://www.youtube.com/watch?v=XBOQcQO0IFI</p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/instrument-of-the-future/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Dr. Steel&#8217;s &#8216;Singularity&#8217; from People of Earth</title>
		<link>https://www.thekurzweillibrary.com/inspired-music-dr-steel-singularity</link>
		<comments>https://www.thekurzweillibrary.com/inspired-music-dr-steel-singularity#respond</comments>
		<pubDate>Sat, 15 Jan 2011 06:00:20 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/Dr.-Steel-logo-140x140.jpg" width="140" height="140" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=105444</guid>
		<description><![CDATA[Dr. Steel&#8217;s track “The Singularity,” from his 2002 album People of Earth, below. Wikipedia &#124; Doctor Steel is an American musician located in Southern California, popular in the Steampunk, Goth, and Rivethead scenes. He has performed on rare occasions with a &#8220;backup band&#8221;, claiming that a fictitious robot band had malfunctioned. Shows have incorporated puppetry, [&#8230;]]]></description>
			<content:encoded><![CDATA[<p>Dr. Steel&#8217;s track “The Singularity,” from his 2002 album <em>People of Earth, </em>below.</p>
<p>Wikipedia | Doctor Steel is an American musician based in Southern California, popular in the <a title="Steampunk" href="http://en.wikipedia.org/wiki/Steampunk">Steampunk</a>, <a title="Goth subculture" href="http://en.wikipedia.org/wiki/Goth_subculture">Goth</a>, and Rivethead scenes. He has performed on rare occasions with a &#8220;backup band&#8221;, claiming that a fictitious robot band had malfunctioned. Shows have incorporated puppetry, multimedia and performances by his <a title="Street team" href="http://en.wikipedia.org/wiki/Street_team">street team</a>, The <a title="Doctor Steel" href="http://en.wikipedia.org/wiki/Doctor_Steel#Army_of_Toy_Soldiers">Army of Toy Soldiers</a>. Steel has begun breaking into the mainstream media, having made a brief appearance on <em>The Tonight Show</em>, been interviewed by numerous genre magazines and podcasts, and been the subject of an article in <em>Wired</em> magazine.</p>
<p><iframe width="640" height="385" src="https://www.youtube.com/embed/iZw0tVwtATY?rel=0" frameborder="0" allowfullscreen></iframe></p>
<p>And a &#8220;Public Service Announcement&#8221;:</p>
<p><iframe width="480" height="385" src="https://www.youtube.com/embed/ejOyiOJPdv4?rel=0" frameborder="0" allowfullscreen></iframe></p>
<p><img loading="lazy" decoding="async" class="alignleft size-full wp-image-106705" title="Dr. Steel logo" src="http://www.thekurzweillibrary.com/images/Dr.-Steel-logo.jpg" alt="" width="224" height="224" srcset="https://www.thekurzweillibrary.com/images/Dr.-Steel-logo.jpg 224w, https://www.thekurzweillibrary.com/images/Dr.-Steel-logo-140x140.jpg 140w" sizes="auto, (max-width: 224px) 100vw, 224px" /><br />
<strong>Related:</strong><br />
<a href="http://worlddominationtoys.com/drsteel/enter.html" target="_blank">Dr. Steel&#8217;s official website</a><br />
<a href="http://www.youtube.com/user/DoctorSteel" target="_blank">Dr. Steel&#8217;s official YouTube channel</a></p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/inspired-music-dr-steel-singularity/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Inspired Singularity album by American band Mae</title>
		<link>https://www.thekurzweillibrary.com/inspired-music-mae-singularity</link>
		<comments>https://www.thekurzweillibrary.com/inspired-music-mae-singularity#respond</comments>
		<pubDate>Fri, 18 Jun 2010 01:32:54 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/mae-140x140.jpg" width="140" height="140" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=93336</guid>
	<description><![CDATA[Wikipedia &#124; Mae is an American rock band that formed in Norfolk, Virginia in 2001. The band&#8217;s name is an acronym for &#8220;Multi-sensory Aesthetic Experience,&#8221; based on a course taken by drummer Jacob Marshall while a student at Old Dominion University. Singularity is Mae&#8217;s third full-length release and their major label debut. The album was originally [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><img loading="lazy" decoding="async" class="alignleft size-medium wp-image-93337" title="mae" src="http://www.thekurzweillibrary.com/images/mae-259x259.jpg" alt="" width="259" height="259" srcset="https://www.thekurzweillibrary.com/images/mae-259x259.jpg 259w, https://www.thekurzweillibrary.com/images/mae-140x140.jpg 140w, https://www.thekurzweillibrary.com/images/mae.jpg 500w" sizes="auto, (max-width: 259px) 100vw, 259px" />Wikipedia | Mae is an American rock bandthat formed in Norfolk, Virginia in 2001. The band&#8217;s name is an acronym for &#8220;Multi-sensory Aesthetic Experience,&#8221; based on a course taken by drummer Jacob Marshall while a student at Old Dominion University.</p>
<p><em>Singularity</em> is Mae&#8217;s third full-length release and their major label debut. The album was originally to be released in April 2007 on Tooth &amp; Nail Records like their previous two albums, but the band signed a deal with major label Capitol Records soon after the new album announcement, which pushed the release date back to August 14, 2007.</p>
<p>Mae headed to Los Angeles to record <em>Singularity</em> with producer Howard Benson in October 2006. The band came up with the title <em>Singularity</em> from a book that Marshall and Sweitzer were reading by scientist <a title="Paul Davies" href="http://en.wikipedia.org/wiki/Paul_Davies" target="_blank">Paul Davies</a>. Jacob Marshall referred to the term as being &#8220;the ultimate unknowable in science&#8230; the interface between the natural and the supernatural. We realized through those conversations that there is so much more for us to learn and to understand and these ideas inspired us to question everything.&#8221;</p>
<p>The band was inspired by Pearl Jam, U2, The Smashing Pumpkins, Nirvana, and Rage Against the Machine when creating <em>Singularity</em>. For the first time on a Mae album, Rob plays the Vox organ, Companion organ, Casio Portatone, and kazoo.</p>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="480" height="385" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/0hGP4P_kUww&amp;hl=en_US&amp;fs=1&amp;" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="480" height="385" src="http://www.youtube.com/v/0hGP4P_kUww&amp;hl=en_US&amp;fs=1&amp;" allowscriptaccess="always" allowfullscreen="true"/></object></p>
<p><a href="http://en.wikipedia.org/wiki/Paul_Davies" target="_blank">Visit here to find out more about Paul Davies</a>, whose books inspired this album. Paul Davies is a British physicist, writer and broadcaster, currently a professor at Arizona State University as well as the Director of <a href="http://beyond.asu.edu/" target="_blank">BEYOND: Center for Fundamental Concepts in Science</a>. His research interests are in the fields of cosmology, quantum field theory, and astrobiology.</p>
<p><strong>Related:<br />
</strong><a href="http://www.whatismae.com/" target="_blank">Mae&#8217;s official website</a><br />
<a href="http://www.youtube.com/user/maevideos" target="_blank">Mae&#8217;s official YouTube Channel</a></p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/inspired-music-mae-singularity/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Title track from Foals&#8217; new album, Total Life Forever, inspired by futurist Ray Kurzweil</title>
		<link>https://www.thekurzweillibrary.com/inspired-music-foals-total-life-forever</link>
		<comments>https://www.thekurzweillibrary.com/inspired-music-foals-total-life-forever#respond</comments>
		<pubDate>Fri, 30 Apr 2010 00:27:01 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/foals-album-140x140.jpg" width="140" height="140" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=92147</guid>
		<description><![CDATA[&#8220;I don&#8217;t think it was a conscious decision to change any of our sounds, more that we have progressed as a band,&#8221; explains bass player Walter Gervers of Foals&#8217; new album Total Life Forever. &#8220;Our tastes have changed. What we were trying to create was a record with more space and more freedom than the [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.yorkshireeveningpost.co.uk/news/MUSIC-INTERVIEW-Foals.6262587.jp" target="_blank"><em><img loading="lazy" decoding="async" class="alignleft size-medium wp-image-92148" title="foals album" src="http://www.thekurzweillibrary.com/images/foals-album-259x259.jpg" alt="" width="259" height="259" srcset="https://www.thekurzweillibrary.com/images/foals-album-259x259.jpg 259w, https://www.thekurzweillibrary.com/images/foals-album-140x140.jpg 140w, https://www.thekurzweillibrary.com/images/foals-album.jpg 452w" sizes="auto, (max-width: 259px) 100vw, 259px" /></em></a>&#8220;I don&#8217;t think it was a conscious decision to change any of our sounds, more that we have progressed as a band,&#8221; explains bass player Walter Gervers of Foals&#8217; new album <em>Total Life Forever</em>. &#8220;Our tastes have changed. What we were trying to create was a record with more space and more freedom than the first time.&#8221;</p>
<p>The album&#8217;s title track was inspired by Raymond Kurzweil, the American inventor and futurist writer. &#8220;Yannis had read <em>The Singularity is Near</em> (Kurzweil&#8217;s book about artificial intelligence) &#8212; that struck chords in him. We all therefore became involved.&#8221;</p>
<p>&#8220;Artificial Intelligence is terrifying but fascinating &#8212; the idea that the future is at a point where we can almost see it. I think in the past people thought, &#8216;Yes, eventually things will be like this.&#8217; Now, terrifying intelligence is becoming a reachable point.&#8221;</p>
<p>The first taste fans got of the sea-change in the band&#8217;s music came last month, with the release of &#8220;Spanish Sahara&#8221; as a free download on Foals&#8217; website. At the time, keyboardist Edwin Congreave described it as a &#8220;provocative gesture,&#8221; but Walter Gervers hopes &#8220;our audience, as it were, would enjoy seeing us making something like that.&#8221;</p>
<p>&#8220;So far the response has been really good,&#8221; he adds. &#8220;It was a chance to give people a flavour of the record. It was not like the centrepiece of the album, it was just a message to say, &#8216;This is a segment of what&#8217;s coming.&#8217; I think everyone thought it was a single. It got a few radio plays, but it was supposed to be like a viral.&#8221; For a band like Foals, who pride themselves on constructing albums as a coherent whole, how frustrating is it that, in the pick&#8217;n&#8217;mix era of iTunes, listeners are likely to dip into <em>Total Life Forever</em> rather than hearing it in its entirety?</p>
<p>&#8220;We were worried about that,&#8221; admits Walter. &#8220;We really did set out to make a whole piece with this record. But what can you do? It&#8217;s impossible. You can try to get the message across as much as possible but if people use shuffle on their iPods you can&#8217;t stop them. People listen to snippets of single songs now. I find myself doing it. Attention spans have become shorter.</p>
<p>&#8220;It reflects in a lot of pop music. Big chart songs at the top of radio playlists &#8212; all the hooks in the song are immediately there in the first few seconds so it can be used as a ringtone.&#8221;  The real money-earner for any band these days is touring. Needless to say, Foals will be spending much of the coming months on the road. They visit Leeds on Monday.</p>
<p><strong>Related:</strong><br />
<a href="http://www.foals.co.uk/" target="_blank">Foals official website</a></p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/inspired-music-foals-total-life-forever/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
<title>Experimental band Yeasayer&#8217;s Odd Blood inspired by Kurzweil&#8217;s vision of human-machine intelligence</title>
		<link>https://www.thekurzweillibrary.com/inspired-music-yeasayer-odd-blood</link>
		<comments>https://www.thekurzweillibrary.com/inspired-music-yeasayer-odd-blood#respond</comments>
		<pubDate>Thu, 29 Apr 2010 23:12:54 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/yeasayer-odd-blood-140x140.jpg" width="140" height="140" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=92136</guid>
		<description><![CDATA[Yeasayer have created a decadent, densely produced mess of a second album. Like other bands trying to do art rock in 2010, they confront us with the irony that their world of genre-melding futurism (a/k/a Brooklyn) can sound dated from the moment you get off the plane. This aside, Odd Blood is a sprawling trip [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.thekurzweillibrary.com/images/yeasayer-odd-blood.jpg"><img loading="lazy" decoding="async" class="alignleft size-full wp-image-107251" title="yeasayer odd blood" src="http://www.thekurzweillibrary.com/images/yeasayer-odd-blood.jpg" alt="" width="226" height="214" /></a>Yeasayer have created a decadent, densely produced mess of a second album. Like other bands trying to do art rock in 2010, they confront us with the irony that their world of genre-melding futurism (a/k/a Brooklyn) can sound dated from the moment you get off the plane.</p>
<p>This aside, <em>Odd Blood</em> is a sprawling trip through Yeasayer&#8217;s uniquely rhythmic takes on rock and roll, art rock, R&amp;B, electronic, and dance pop. They certainly know how to build big tracks in the studio. [&#8230;]</p>
<hr />
<p>ArtistWiki | Yeasayer is an experimental band based in Brooklyn, New York. Live performances sometimes include trippy psychedelic graphics projected in the background. Band members Anand Wilder, Chris Keating and Ira Wolf Tuton describe their music as &#8220;Middle-psych-snap-gospel.&#8221;</p>
<p>Yeasayer recently revealed in an interview with <em>Pitchfork</em> that they had completed their second album, set for release on February 9, 2010. On October 30, 2009, Yeasayer revealed details for the first single release from <em>Odd Blood</em>, titled &#8220;Ambling Alp.&#8221; <em>Odd Blood</em> is partly inspired by inventor Ray Kurzweil&#8217;s theory that computer intelligence will eventually replace the human brain.</p>
<p>Peter Gabriel&#8217;s drummer Jerry Marotta helped record <em>Odd Blood</em> in an upstate New York studio full of synthesizers and unusual percussion instruments from around the world. Yeasayer has big plans for their upcoming tour including custom lighting columns and giant illuminated balloons.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-92143" title="yeasayer group photo" src="http://www.thekurzweillibrary.com/images/yeasayer-group-photo.jpg" alt="" width="449" height="286" srcset="https://www.thekurzweillibrary.com/images/yeasayer-group-photo.jpg 449w, https://www.thekurzweillibrary.com/images/yeasayer-group-photo-140x89.jpg 140w, https://www.thekurzweillibrary.com/images/yeasayer-group-photo-259x164.jpg 259w" sizes="auto, (max-width: 449px) 100vw, 449px" /></p>
<p><strong>Related Links:</strong><br />
<a href="www.myspace.com/yeasayer" target="_blank">Yeasayer&#8217;s website<br />
Yeasayer&#8217;s blog<br />
</a><a href="http://artistwiki.com/yeasayer" target="_blank">Yeasayer on ArtistWiki</a></p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/inspired-music-yeasayer-odd-blood/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Matrix loses its way: Reflections on The Matrix and The Matrix Reloaded</title>
		<link>https://www.thekurzweillibrary.com/the-matrix-loses-its-way-reflections-on-matrix-and-matrix-reloaded</link>
		<comments>https://www.thekurzweillibrary.com/the-matrix-loses-its-way-reflections-on-matrix-and-matrix-reloaded#respond</comments>
		<pubDate>Mon, 19 May 2003 01:34:35 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/Matrix-Reloaded-poster-140x208.jpg" width="140" height="208" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/the-matrix-loses-its-way-reflections-on-matrix-and-matrix-reloaded</guid>
		<description><![CDATA[The Matrix Reloaded is crippled by senseless fighting and chase scenes, weak plot and character development, tepid acting, and sophomoric dialogues. It shares the dystopian, Luddite perspective of the original movie, but loses the elegance, style, originality, and evocative philosophical musings of the original.]]></description>
			<content:encoded><![CDATA[<div id="attachment_121186" style="width: 243px" class="wp-caption alignleft"><a href="http://www.thekurzweillibrary.com/images/Matrix-Reloaded-poster.jpg"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-121186" class="size-full wp-image-121186 " title="Matrix Reloaded poster" src="http://www.thekurzweillibrary.com/images/Matrix-Reloaded-poster.jpg" alt="" width="243" height="362" srcset="https://www.thekurzweillibrary.com/images/Matrix-Reloaded-poster.jpg 506w, https://www.thekurzweillibrary.com/images/Matrix-Reloaded-poster-140x208.jpg 140w, https://www.thekurzweillibrary.com/images/Matrix-Reloaded-poster-259x386.jpg 259w, https://www.thekurzweillibrary.com/images/Matrix-Reloaded-poster-343x512.jpg 343w" sizes="auto, (max-width: 243px) 100vw, 243px" /></a><p id="caption-attachment-121186" class="wp-caption-text">(credit: Warner Bros. Pictures)</p></div>
<p>You&#8217;re going to love <em>Matrix Reloaded</em> &#8212; that is, if you&#8217;re a fan of endless Kung Fu fights, repetitive chase scenes, a meandering and poorly paced plot, and sophomoric philosophical musings. For much of its 2 hours and 18 minutes, I felt like I was stuck looking over the shoulder of a ten-year-old playing a video game. <span id="more-80853"></span></p>
<p>It&#8217;s too bad, because the original <em>Matrix</em> was a breakout film, introducing audiences to a new approach to movie making, while reflecting in an elegant way on pivotal ideas about the future.</p>
<p>Although I disagree with its essentially Luddite stance, it raised compelling issues that have drawn intense reactions, including thousands of articles and at least a half dozen books.</p>
<h4>Is <em>Matrix</em>-style VR feasible?</h4>
<p>There is a lot more to say about the original <em>Matrix</em> than this derivative and overwrought sequel, so let me start with that. <em>The Matrix</em> introduced its vast audience to the idea of full-immersion virtual reality, to what Morpheus (Laurence Fishburne) describes as a &#8220;neural interactive simulation&#8221; that is indistinguishable from real reality.</p>
<p>I have been asked many times whether virtual reality with this level of realism will be feasible and when.</p>
<p>As I described in my chapter &#8220;The Human Machine Merger: Are We Heading for <em>The Matrix</em>?&#8221; in the book <em>Taking the Red Pill</em><sup><a href="#endnotes">1</a></sup>, virtual reality will become a profoundly transforming technology by 2030. By then, nanobots (robots the size of human blood cells or smaller, built with key features at the multi-nanometer—billionth of a meter—scale) will provide fully immersive, totally convincing virtual reality in the following way.</p>
<p>The nanobots take up positions in close physical proximity to every interneuronal connection coming from all of our senses (e.g., eyes, ears, skin). We already have the technology for electronic devices to communicate with neurons in both directions, requiring no direct physical contact with the neurons.</p>
<p>For example, scientists at the Max Planck Institute have developed &#8220;neuron transistors&#8221; that can detect the firing of a nearby neuron, or alternatively, can cause a nearby neuron to fire, or suppress it from firing. This amounts to two-way communication between neurons and the electronic-based neuron transistors. The Institute scientists demonstrated their invention by controlling the movement of a living leech from their computer.</p>
<p>Nanobot-based virtual reality is not yet feasible in size and cost, but we have made a good start in understanding the encoding of sensory signals. For example, Lloyd Watts and his colleagues have developed a detailed model of the sensory coding and transformations that take place in the auditory processing regions of the human brain. We are at an even earlier stage in understanding the complex feedback loops and neural pathways in the visual system.</p>
<p>When we want to experience real reality, the nanobots just stay in position (in the capillaries) and do nothing. If we want to enter virtual reality, they suppress all of the inputs coming from the real senses, and replace them with the signals that would be appropriate for the virtual environment. You (i.e., your brain) could decide to cause your muscles and limbs to move as you normally would, but the nanobots again intercept these interneuronal signals, suppress your real limbs from moving, and instead cause your virtual limbs to move and provide the appropriate movement and reorientation in the virtual environment.</p>
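<p>In outline, what I am describing here is a routing switch. The following is a minimal sketch of that logic in Python; the names are purely illustrative, and nothing in it is a real neural interface:</p>
<pre>
# Toy sketch of the nanobot signal routing described above. All names are
# illustrative; this models the logic only, not any actual technology.

class VirtualEnvironment:
    def __init__(self):
        self.limb_state = "at rest"

    def render(self):
        # Signals that would be appropriate for the virtual environment.
        return "simulated sights, sounds, touch"

    def apply(self, motor_command):
        # Intercepted motor signals move the virtual limbs instead.
        self.limb_state = "virtual limb: " + motor_command

def perceive(mode, real_senses, env, motor_command=None):
    if mode == "real":
        # The nanobots stay in position in the capillaries and do nothing.
        return real_senses
    # Virtual mode: suppress the real sensory inputs, substitute the
    # simulation's signals, and divert motor commands to the virtual body.
    if motor_command:
        env.apply(motor_command)
    return env.render()

env = VirtualEnvironment()
print(perceive("real", "street noise, daylight", env))
print(perceive("virtual", "street noise, daylight", env, "raise right arm"))
print(env.limb_state)   # virtual limb: raise right arm
</pre>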
<p>The Web will provide a panoply of virtual environments to explore. Some will be recreations of real places, others will be fanciful environments that have no &#8220;real&#8221; counterpart. Some indeed would be impossible in the physical world (perhaps because they violate the laws of physics). We will be able to &#8220;go&#8221; to these virtual environments by ourselves, or we will meet other people there, both real and virtual people.</p>
<p>By 2030, going to a web site will mean entering a full-immersion virtual-reality environment. In addition to encompassing all of the senses, these shared environments could include emotional overlays, since the nanobots will be capable of triggering the neurological correlates of emotions, sexual pleasure, and other derivatives of our sensory experience and mental reactions.</p>
<p>The portrayal of virtual reality in <em>The Matrix</em> is a bit more primitive than this. The use of bioports in the back of the neck reflects a lack of imagination on how full-immersion virtual reality from within the nervous system is likely to work. The idea of a plug is an old fashioned notion that we are already starting to get away from in our machines. By the time the Matrix is feasible, we will have far more elegant means of wirelessly accessing the human nervous system from within.</p>
<p>Virtual reality, as conceived of in <em>The Matrix</em>, is evil. Morpheus describes the Matrix as &#8220;a computer-generated dream world to keep us under control.&#8221; We saw similar portrayals of the Internet prior to its creation. Early fiction, such as the novels <em>1984</em> and <em>Brave New World</em>, portrayed the worldwide communications network as essentially evil, a means for totalitarian control of humankind. Now that we actually have a worldwide communications network, we can see that the reality has turned out rather different.</p>
<p>Like any technology, the Internet empowers both our creative and destructive inclinations, but overall the advent of worldwide decentralized electronic communication has been a powerful democratizing force. It was not Yeltsin standing on a tank that overthrew Soviet control during the 1991 revolt after the coup against Gorbachev. Rather it was the early forms of electronic messaging (such as fax machines and an early form of email based on teletype machines), forerunners to the Internet, that prevented the totalitarian forces from keeping the public in the dark. We can trace the movement towards democracy throughout the 1990s to the emergence of this worldwide communications network.</p>
<p>In my view, the advent of virtual reality will reflect a similar amplification of creative human communication. We have one form of virtual reality already. It&#8217;s called the telephone, and it is a way to &#8220;be together&#8221; even if physically apart, at least as far as the auditory sense is concerned. When we add all of the other senses to virtual reality, it will be a similar strengthening of human communication.</p>
<h4>A Dystopian, Luddite Perspective</h4>
<div id="attachment_121191" style="width: 312px" class="wp-caption alignleft"><a href="http://www.thekurzweillibrary.com/images/The-Matrix-poster.jpg"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-121191" class="size-full wp-image-121191" title="The Matrix poster" src="http://www.thekurzweillibrary.com/images/The-Matrix-poster.jpg" alt="" width="312" height="450" srcset="https://www.thekurzweillibrary.com/images/The-Matrix-poster.jpg 312w, https://www.thekurzweillibrary.com/images/The-Matrix-poster-140x201.jpg 140w, https://www.thekurzweillibrary.com/images/The-Matrix-poster-259x373.jpg 259w" sizes="auto, (max-width: 312px) 100vw, 312px" /></a><p id="caption-attachment-121191" class="wp-caption-text">(credit: Warner Bros. Pictures)</p></div>
<p>The dystopian, Luddite perspective of the Wachowski brothers can be seen in its view of the birth of artificial intelligence as the source of all evil. In one of Morpheus&#8217; &#8220;sermons,&#8221; he tells Neo (Keanu Reeves) that &#8220;in the early 21st century, all of mankind united and marveled at our magnificence as we gave birth to AI [artificial intelligence], a singular construction that spawned an entire race of machines.&#8221; Morpheus goes on to explain how this singular construction became a runaway phenomenon as it reproduced itself and ultimately enslaved humankind.</p>
<p>The movie celebrates those humans who choose to be completely unaltered by technology, even spurning the bioport. Incidentally, in my book <em>The Age of Spiritual Machines</em><sup><a href="#endnotes">2</a></sup>, I refer to such people as MOSHs (Mostly Original Substrate Humans). The movie&#8217;s position reflects a growing sentiment in today&#8217;s world to maintain a distinct separation of the natural and human-created worlds. The reality, however, is that these worlds are rapidly merging. We already have a variety of neural implants that are repairing human brains afflicted by disease or disability, for example, an FDA-approved neural implant that replaces the region of neurons destroyed by Parkinson&#8217;s Disease, cochlear implants for the deaf, and emerging retinal implants for the blind.</p>
<p>My view is that the prospect of &#8220;strong AI&#8221; (AI at or beyond human intelligence) will serve to amplify human civilization much the same way that our technology does today. As a society, we routinely accomplish intellectual achievements that would be impossible without the level of computer intelligence we already have. Ultimately, we will merge our own biological intelligence with our own creations as a way of continuing the exponential expansion of human knowledge and creative potential.</p>
<p>However, I do not completely reject the specter of AI turning on its creators, as portrayed in the Matrix. It is a possible downside scenario, what Nick Bostrom calls an &#8220;existential risk<sup><a href="#endnotes">3</a></sup>.&#8221; There has been a great deal of discussion recently about future dangers that Bill Joy<sup><a href="#endnotes">4,5,6</a></sup> has labeled &#8220;GNR&#8221; (genetics, nanotechnology, and robotics). The &#8220;G&#8221; peril, which is the destructive potential of bioengineered pathogens, is the danger we are now struggling with. Our first defense from &#8220;G&#8221; will need to be more &#8220;G,&#8221; for example bioengineered antiviral medications.</p>
<p>Ultimately, we will provide a true defense from &#8220;G&#8221; by using &#8220;N,&#8221; nanoengineered entities that are smaller, faster, and smarter than mere biological entities. However, the advent of fully realized nanotechnology will introduce a new set of profound dangers. Our defense from &#8220;N&#8221; will also initially be created from defensive nanotechnology, but the ultimate defense from &#8220;N&#8221; will be &#8220;R,&#8221; small robots that are intelligent at human levels and beyond, in other words, strong AI. But then the question arises: what will defend us from malevolent AI? The only possible answer is &#8220;friendly AI<sup><a href="#endnotes">7</a></sup>.&#8221;</p>
<p>Unfortunately there is nothing we can do today to assure that AI will be friendly. Based on this, some observers such as Bill Joy call for us to relinquish the pursuit of these technologies. The reality, however, is that such relinquishment is not possible without instituting a totalitarian government that bans all of technology (which is the essential theme of <em>Brave New World</em>). It&#8217;s the same story with human intelligence. The only defense we have had throughout human history from malevolent human intelligence is for more enlightened human intelligence to confront its more deviant forms. Our imperfect record in accomplishing this is at least one key reason that there is so much concern with GNR.</p>
<h4>Glitches</h4>
<p>There are problems and inconsistencies with the conception of virtual reality in <em>The Matrix</em>. The most obvious is the absurd notion of the machines keeping all of the humans alive to use them as energy sources. Humans are capable of many things, but being an effective battery is not one of them. Our biological bodies do not generate any significant levels of useful energy. Moreover, we require more energy than we produce. Morpheus acknowledges that the machines needed more than just humans for energy when he tells Neo &#8220;25,000 BTU of body heat combined with a form of fusion [provide] the machines all the energy they need.&#8221; But if the machines have fusion technology, then they clearly would not need humans.</p>
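<p>A back-of-the-envelope energy balance makes the point concrete. The figures below are rounded textbook values, not numbers from the film (which gives no time period for its 25,000 BTU):</p>
<pre>
# Rough energy balance for a "human battery" (rounded textbook figures).

KCAL = 4184.0    # joules per food calorie
BTU = 1055.0     # joules per BTU
DAY = 86400.0    # seconds per day

food_in = 2000 * KCAL / DAY   # a 2,000 kcal/day diet, expressed in watts
heat_out = 100.0              # typical resting heat output, in watts

print(f"food intake : {food_in:6.1f} W")             # ~96.9 W
print(f"heat output : {heat_out:6.1f} W")            # ~100 W
print(f"net surplus : {food_in - heat_out:6.1f} W")  # roughly zero, or negative

# For scale: 25,000 BTU, if read as a daily figure, comes to about 305 W,
# roughly three times what the food supplies.
print(f"25,000 BTU per day = {25000 * BTU / DAY:6.1f} W")
</pre>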
<p>In his chapter &#8220;Glitches in <em>The Matrix</em> . . . And How to Fix Them&#8221; (also in the book <em>Taking the Red Pill</em>), Peter Lloyd surmises that &#8220;the machines are harnessing the spare brainpower of the human race as a colossal distributed processor for controlling the nuclear fusion reactions.&#8221; This is a creative fix, but equally unfounded. Human brains are not an attractive building block for a distributed processor. The electrochemical signaling pathway in the human brain is extremely slow: about 200 calculations per second, which is at least 10 million times slower than today&#8217;s electronics. The architecture of our brains is relatively fixed and unsuitable for harnessing into a parallel network. Moreover, the human brains in the story are presumably being actively used to guide the human lives in the virtual Matrix world. If the AI&#8217;s in the Matrix are smart enough to create fusion power, they would not need a network of human brains to control it.</p>
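<p>The arithmetic behind that ratio is easy to verify, taking a 2 GHz processor as representative of today&#8217;s electronics:</p>
<pre>
neuron_rate = 200    # interneuronal transactions per second, as cited above
cpu_rate = 2e9       # cycles per second of a 2 GHz processor

print(cpu_rate / neuron_rate)   # 10000000.0 -- the factor of 10 million
</pre>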
<p>There are other absurdities, such as the requirement to find an old fashioned &#8220;land line&#8221; (telephone) to exit the Matrix. Lloyd provides a creative rationalization for this also (the land lines have fixed network addresses in the Matrix operating system that the Nebuchadnezzar&#8217;s computer can access), but given the inherent flexibility in a virtual reality environment, it is clear that the reason for this requirement has more to do with the Wachowski brothers&#8217; desire to celebrate old-fashioned technology as embodying human values.</p>
<p>There are many arbitrary rules and limitations in the Matrix that don&#8217;t make sense. Why bother fighting the agents at all (other than for the obvious &#8220;Kung Fu&#8221; cinematic reasons) when they cannot be destroyed? Why not just run away, or in the new movie, fly away?</p>
<p>Another attractive feature of the original <em>Matrix</em> movie was its philosophical musings, albeit a hodge podge of metaphorical allusions. There&#8217;s Neo as the Christian Messiah who returns to deliver humanity from evil. There&#8217;s the Buddhist notion that everything we see, hear and touch is an illusion. Of course, one might point out that the true reality in the Matrix is a lot grimier and grimmer than the Buddhist idea of enlightenment. We hear the martial arts philosophy (borrowed from <em>Star Wars</em>) of freeing yourself from rational thinking to let one&#8217;s inner warrior emerge.</p>
<p>Then there is the green philosophy of humanity as inimical to its natural environment. This view is actually articulated by Agent Smith, who describes humanity as &#8220;a virus that does not maintain equilibrium with its environment.&#8221; Most of all, we are treated to a Luddite celebration of pure humanity, along with the 19th century and early 20th century technologies of rotary phones and old gear boxes, which presumably reflect human purity.</p>
<p>My overall reaction to this conception is that the human rebels will need advanced technology at least comparable to that of the evil AI&#8217;s if they are to prevail. The film&#8217;s notion that advanced technology is inherently evil is misplaced. Technology is power, and whoever has its power will prevail. The &#8220;machines&#8221; as portrayed in the Matrix do appear to be malevolent, but the rebels are not likely to survive with their old fashioned gear boxes. However, with the script in the hands of the Wachowski brothers, we can assume that the Rebels will nonetheless have a fighting chance.</p>
<h4>Matrix Reloaded</h4>
<p>Which brings us to <em>The Matrix Reloaded</em>. Like <em>Star Wars</em> and <em>Alien</em>, also breakout movies in their time, this sequel loses the elegance, style, and originality of the original. The new film wallows in endless battle and chase scenes. Moreover, these confrontations lack any real dramatic tension. The producers are constantly changing the rules of engagement so one never thinks, &#8220;how are they going to get out of this jam?&#8221; One has only the sense that a particular character will continue if the Wachowski brothers want that character around for their own cinematic reasons. They are continually coming up with arbitrary new rules and exceptions to the rules.</p>
<p>Much of the fighting makes little sense. Given that the evil twin apparitions are able to magically transport themselves directly into Trinity&#8217;s vehicle, and Neo is able to fly like Superman, the hand to hand combat and use of knives and poles lacks even the logic of a video game. For that matter, the two scenes of Neo battling the 100 Smiths looked exactly like a video game. Like so much of the action, these scenes seemed superfluous and time wasting. Smith is no longer an agent, and plays no clear role in the story, to the extent that there was any attempt to tell a coherent story.</p>
<p>About two thirds of the way through this sequel, I turned to my companion and asked &#8220;whatever happened to the plot, wasn&#8217;t there something about 250,000 Sentinels attacking Zion, the last human city?&#8221; My companion responded that it seemed that &#8220;plot&#8221; was a four letter word to the movie makers. Of course, there wasn&#8217;t much time for plot development, given all of the devotion to chasing and fighting, not to mention an equally drawn out gratuitous sex scene (well, at least there is one reason to go see this film).</p>
<p>If plot development was weak, character development was worse. Many reviewers of the first Matrix movie noted that Keanu Reeves could not act. But his acting in the first <em>Matrix</em> is downright Shakespearian compared to the sequel. At least in the original, there was some portrayal of Neo&#8217;s struggle with his discovery of the true nature of the Matrix, of his grappling with his role as &#8220;the one,&#8221; and his coming-of-age tutorials.</p>
<p>In <em>Reloaded</em>, Reeves acts like he&#8217;s had a lobotomy, sleepwalking or rather sleep-flying through the whole movie. His lover, Trinity (Carrie-Anne Moss), is equally distant and unemotional, acting like a frustrated librarian with a black belt. Morpheus was appealing in the first movie with his earnest confidence and wisdom. In the new film, he&#8217;s like a preacher on morphine, which quickly gets tiresome.</p>
<p>The philosophical dialogues, which were refreshing in the original, sound like late-night college banter in the sequel. As for the technology of the movie itself, there was really nothing special here. They did trash about 100 General Motors cars on a multi-million dollar roadway built especially for the movie, but aside from bigger explosions, the effects were the opposite of riveting. Some of the organic backgrounds of the city of Zion were attractive, but they were all illustrated, and lacked the genuine warmth of a real human environment, which the movie professes to celebrate. The Wachowski brothers&#8217; notion of human celebration is also a bit weird as portrayed in the retro rave festivities on Zion to honor the return of the rebels.</p>
<p>Although I take issue with the strong Luddite posture of the original Matrix, I recognized its importance as a forceful and stylish articulation in cinematic terms of salient 21st century issues. Unfortunately, the sequel throws away this metaphysical mantle.</p>
<hr />
<p><a name="endnotes"></a></p>
<p><span style="font-size: x-small;">1. Glenn Yeffeth, Ed., <em><a href="http://www.amazon.com/Taking-Red-Pill-Philosophy-Religion/dp/1932100024#_" target="_blank">Taking the Red Pill: Science, Philosophy and Religion in The Matrix</a></em> (Ben Bella Books, April 2003)</span></p>
<p><span style="font-size: x-small;">2. Ray Kurzweil, <em><a href="/meme/frame.html?m=14">The Age of Spiritual Machines</a></em>, Penguin USA, 1999</span></p>
<p><span style="font-size: x-small;">3. Nick Bostrom, &#8220;<a href="/meme/frame.html?main=/articles/art0194.html">Existential Risks: Analyzing Human Extinction Scenario and Related Hazards</a>,&#8221; 2001 </span></p>
<p><span style="font-size: x-small;">4. Bill Joy, &#8220;<a href="http://www.wired.com/wired/archive/8.04/joy.html" target="_blank">Why the future doesn&#8217;t need us</a>,&#8221; <em>Wired</em>, April 2000</span></p>
<p><span style="font-size: x-small;">5. Ray Kurzweil, &#8220;<a href="/meme/frame.html?main=/articles/art0226.html">In Response to</a>,&#8221; KurzweilAI.net July 25, 2001 </span></p>
<p><span style="font-size: x-small;">6. Ray Kurzweil, &#8220;<a href="/meme/frame.html?main=/articles/art0556.html">Testimony of Ray Kurzweil on the Societal Implications of Nanotechnology</a>,&#8221; KurzweilAI.net, April 9, 2003</span></p>
<p><span style="font-size: x-small;">7. Eliezer S. Yudkowsky, &#8220;<a href="/meme/frame.html?main=/articles/art0172.html">What is Friendly AI?</a>,&#8221; KurzweilAI.net, May 3, 2001</span></p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/the-matrix-loses-its-way-reflections-on-matrix-and-matrix-reloaded/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM</title>
		<link>https://www.thekurzweillibrary.com/glitches-in-the-matrix-and-how-to-fix-them</link>
		<comments>https://www.thekurzweillibrary.com/glitches-in-the-matrix-and-how-to-fix-them#respond</comments>
		<pubDate>Sun, 02 Mar 2003 23:26:39 +0000</pubDate>
								<dc:creator>Peter B. Lloyd</dc:creator>
		
		
				<category><![CDATA[classics]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/glitches-in-the-matrix-and-how-to-fix-them</guid>
		<description><![CDATA[Why, exactly, do the rebels have to enter the Matrix via the phone system (which after all doesn't physically exist)? And what really happens when Neo takes the red pill (which also doesn't really exist)? And how does the Matrix know what fried chicken tastes like? Technologist and philosopher Peter Lloyd answers these questions and more.]]></description>
			<content:encoded><![CDATA[<p><em>To be published in</em> Taking the Red Pill: Science, Philosophy and Religion in <em>The Matrix</em> (<a href="http://benbellabooks.com/cgi-bin/merchant2/merchant.mv?Screen=PROD&amp;Store_Code=BB&amp;Product_Code=RP&amp;Category_Code=RP" target="_blank">Ben Bella Books</a>, April 2003). <em>Published on KurzweilAI.net March 3, 2003.</em></p>
<p>As the essays throughout this book demonstrate, the Wachowski Brothers designed <em>The Matrix </em>to work at many levels. They carefully thought through the film&#8217;s philosophical underpinnings, religious symbolism, and scientific speculations. But there are a few riddles in <em>The Matrix</em>, aspects of the film that seem nonsensical or defy the laws of science. These apparent glitches include:<span id="more-80826"></span></p>
<p>• The Bioport—how can a socket in your head control your senses? How can it be inserted without killing you?</p>
<p>• The Red Pill—since the pill is virtual, how can it throw Neo out of the Matrix?</p>
<p>• The Power Plant—can people really be an energy source?</p>
<p>• Entering and Exiting the Matrix—why do the rebels need telephones to come and go?</p>
<p>• The Bugbot—what&#8217;s the purpose of the bugbot?</p>
<p>• Perceptions in the Matrix—how do the machines know what fried chicken tastes like?</p>
<p>• Neo&#8217;s Mastery of the Avatar—how can Neo fly?</p>
<p>• Consciousness and the Matrix—are the machines in the Matrix alive and conscious? Or are they only machines, intelligent but mindless?</p>
<p>This essay addresses these questions and shows how these seeming glitches can be resolved.</p>
<h2>THE BIOPORT</h2>
<p>Can the machines really create a virtual world through a bioport? And how does it work? The bioport is a way of giving the Matrix computers full access to the information channels of the brain. It is located at the back of the neck—probably between the occipital bone at the base of the skull, and the first neck vertebra. Wiring would best enter through the soft cartilage that cushions the skull on the spinal column, and pass up through the natural opening that lets the spinal cord into the skull. This avoids drilling through bone, and maintains the mechanical and biological integrity of the skull&#8217;s protection. A baby fitted with a bioport can easily survive the operation.</p>
<p>The bioport terminates in a forest of electrodes spanning the volume of the brain. In a newborn, the sheathed mass of wire filaments is pushed into the head through the bioport. On reaching the skull cavity, the sheath would be released, and the filaments spread out like a dandelion, gently permeating the developing cortex. Nested sheaths would release a branching structure of filamentary electrodes. As each sheathed wire approaches the surface of the brain, it releases thousands of smaller electrodes. In the neonate, brain cells have few synaptic connections, so the slender electrodes can penetrate harmlessly.</p>
<p>With its electrodes distributed throughout the brain, the Matrix could deliver its sensory signals in either of two places: at the sensory portals or deep inside the brain&#8217;s labyrinth. For example, vision could be driven by electrodes on the optic nerves where they enter the brain. Artificial signals would then pass into the visual cortex at the back of the brain, which would handle them as if they had come from the eyes. Correspondingly, outgoing motor nerves would also have electrodes at the boundary of brain and skull. This simple design mirrors the natural state of the brain most closely. It is not, however, the only possibility. Electrodes could alternatively be attached in the depths of the brain, beyond the first stages of the visual cortex. This would greatly simplify the data processing. In normal perception, most of the incoming information isn&#8217;t processed; information you aren&#8217;t paying attention to is filtered out. If the Matrix were to deliver information directly to the output axons from the sensory cortex—as opposed to the input to the cortex—then it would save itself the job of filling in all those details.</p>
<p>One scene tells us which method the Matrix uses. When Neo wakes and finds himself in a vat, he pulls out the oxygen and food tubes, drags himself out of the gelatinous fluid, and—perceives the world. The fact that he can see and hear proves that the visual and auditory cortices of his brain are working. This wouldn&#8217;t be possible if the Matrix had put its sensory data into the deeper centers of his brain. For then his sensory cortex would have been bypassed: it would never have received any stimulation, and would have wasted away. In that case, Neo would wake from his vat and find himself blind and deaf, with no sense of smell or taste, no feeling of touch or heat in his skin, no awareness of whether he was vertical or horizontal, or where his arms or legs were. The Matrix must have input its visual data just where the optic nerve from the eyeball passes into the skull, rather than in the midst of the brain&#8217;s vision processing. Likewise, Neo&#8217;s ability to walk and use his arms shows that the motor cortex is also developed and functioning. Indeed, even the cerebellum, which controls balance, must be working. So, the Matrix must be capturing its motor signals from the brain&#8217;s efferent nerves after they have finished with the last stage of cortical processing, but before the nerves pass out of the skull.</p>
<p>The rebels use the bioport to load new skills into their colleagues&#8217; brains—writing directly into permanent memory. The Matrix itself never implants skills in this way; folks in the virtual world learn things in the usual manner by reading books and going to college. So, why did the architects of the Matrix build into the bioport this capability to download skills? It is actually a byproduct of how the bioport is installed. They could have attached electrodes to just the sensory and motor nerve fibers. That, though, is difficult: the installer must predict where each nerve fiber will be anchored, which is hard to do reliably, given the plasticity of the neonate brain; and it must navigate through the brain tissue to find these sites. A more robust and adaptable method is to lay a carpet of electrodes throughout the whole brain, and let the software locate the sensory and motor channels by monitoring the data flows on the lines.</p>
<p>That spare capacity remains available for others to exploit, and the rebels use it to download kung-fu expertise into Neo&#8217;s brain and to implant helicopter piloting skills into Trinity&#8217;s. If the Matrix ever learned this technique, it could create havoc for the rebels, implanting impulses to serve its own ends.</p>
<h2>THE RED PILL</h2>
<p>Morpheus offers Neo the choice of his lifetime, in the form of the famous red and blue pills. But what can a virtual pill do to a real brain? We have seen that the Matrix interacts with the brain only in the sensory and motor nerve fibers. It does not affect the inner workings of the brain, where a real psychoactive chemical would have to act. Minor analgesics such as aspirin would work by having their effect outside the brain centers, canceling out pain inputs from the avatar software.</p>
<p>The blue pill is probably a placebo. Morpheus says only, &#8220;You take the blue pill and the story ends. You wake in your bed and you believe whatever you want to believe.&#8221; We never know what, if anything, the blue one would do.</p>
<p>So, how does the active pill, the red one, work? Since virtual aspirin can work as a painkiller, the avatar&#8217;s software module must be able to accept instructions to cancel out any given sensory input. Evidently, the red pill gives the avatar a blanket command to cancel all such input. It thereby obliterates Neo&#8217;s perception of the virtual world, which the Matrix has been feeding to him throughout his life. Instead of sitting on a chair in a hotel room, Neo sees and feels for the first time that he is immersed in a fluid. The perception of this filters through into his perceptions of the Matrix&#8217;s own imagery. Neo touches a mirror, and finds it a viscous fluid that clings to his finger and then seeps along his arm, covering his chest and slithering down his throat. A blend of bodily perceptions and mental imagery is typical of what happens when you wake from a dream; external perceptions are distorted to fit the contents of the dream. Your dream of falling off a cliff might fade into falling out of bed. In the film, the liquefied mirror is seen only by Neo, not the others in the room. His real bodily sensations are, for the first time, sweeping into his brain, which struggles to integrate them into the stable narrative he has lived in up to that moment.</p>
<p>Another route out of the Matrix, besides the red pill, would be meditation. The Buddhist practice of <em>vipassana</em><sup><a href="#1">1</a></sup> gives adepts penetrating insights into their own mental processes. It rolls back the barrier between conscious awareness and the subconscious. An adept of <em>vipassana, </em>living in the Matrix, would discover the interface between the Matrix&#8217;s electrodes and the brain&#8217;s wetware. The expert practitioner could override the Matrix&#8217;s stream of imagery, and see reality. Morpheus mentions that someone did break free from the Matrix. Perhaps meditation was the key. To attain that expertise, however, would take years of effort. Leading other people to the truth would require a school of meditation to train new recruits for years, to pursue what one individual claimed was the truth, but everyone else dismissed as fantasy. No doubt this is what the Oracle is gently encouraging. But it is not surprising that the red pill was invented as a fast-track route.</p>
<p>Morpheus&#8217;s team monitors Neo&#8217;s progress. As he realizes that he is immersed in fluid, Neo panics, and his instinct to escape drowning compels him to drag the tubes out of his mouth. Like waking out of a dream, Neo finds the sensible world rushing in on him, and it is remarkable that his manual coordination has been so well preserved by the Matrix system. He grabs the tubes and yanks them out, using weak hands that had never before grasped anything.</p>
<p>When Neo&#8217;s exit from the Matrix is detected, a robot inspects him and flushes him out of his pod. Too weak to swim, he must be pulled out of the wastewater pool without delay. How are the rebels to find him? In a power plant vast enough to house the human race, there would be thousands of effluent drains. As Morpheus mentions to Neo, &#8220;the pill you took is part of a trace program.&#8221; Besides canceling Neo&#8217;s sensory inputs, the red pill also puts a unique reference signal onto the Matrix network. When the <em>Nebuchadnezzar</em>&#8216;s computer locates that signal, it can work out Neo&#8217;s physical location and order the hovercraft to the appropriate chute. In the tense moment before that reference signal is located, the worried Morpheus says, &#8220;We&#8217;re going to need the signal soon,&#8221; and Trinity exclaims that Neo&#8217;s heart is fibrillating as the panic threatens to bring on a heart attack. Apoc finds the reference signal just in time, before Neo&#8217;s brain disengages from the Matrix network and the signal vanishes.</p>
<h2>THE POWER PLANT</h2>
<p>During the armchair scene, we have what is probably the most criticized element in <em>The Matrix </em>story line. Morpheus claims that the human race is imprisoned in a power station, where human bodies are used as a source of bioelectricity. This is engineering nonsense; it violates the fundamental law of energy conservation. The humans would have to be fed, and the laws of physics demand that the energy consumed as food must be greater than the energy generated by the human body. That Morpheus has misunderstood what is going on is underscored by his mention in the same speech of the machines&#8217; discovery of a new form of nuclear fusion. Evidently, the fusion is the real source of energy that the machines use. So what are humans doing in the power plant? Controlled fusion is a subtle and complex process, requiring constant monitoring and micromanaging. The human brain, on the other hand, is a superb parallel computer. Most likely, the machines are harnessing the spare brainpower of the human race as a colossal distributed processor for controlling the nuclear fusion reactions. <em>(Sawyer comes to a similar conclusion elsewhere in this volume—Ed.)</em></p>
<h2>ENTERING AND EXITING THE MATRIX</h2>
<p>The virtual world of the Matrix is not bound by physical laws as we know them, but for the virtual world to be consistently realistic, the laws of physics must be followed where they can be observed by humans. Access into and out of a virtual world is a problem, because materializing and dematerializing violate the conservation of mass and energy. Furthermore, whatever was previously in the space occupied by the materializing body must be pushed out of the way; and would be pushed with explosive speed if the materialization is instantaneous. Conversely, on dematerialization, the surrounding air would rush in to the vacated space with equal implosive force. There are no such explosions and implosions in <em>The Matrix, </em>so how do the rebels do it?</p>
<p>In the Matrix computer, software modules represent the observable objects in the virtual world, and these modules interact by means of predefined messages. One such message issued by a virtual human body, or &#8220;avatar,&#8221; is, &#8220;What do I see when I look in the direction V?&#8221; A module whose object lies on the line of sight along V will respond with a message specifying the color, luminosity, and texture that the human should see in that direction. If a rebel&#8217;s avatar is to be visible to other people who are immersed in the Matrix world, the <em>Nebuchadnezzar</em>&#8216;s computer must pick up those &#8220;What-do-I-see&#8221; requests and reply with its own &#8220;You-see-this&#8221; message.</p>
<p>A virtual human body does not send &#8220;What-do-I-see?&#8221; messages to all other modules in the Matrix, or else it would overload the network. It refers to &#8220;registers&#8221; of modules, which record the virtual objects&#8217; shape, size, and position. Simple geometry then tells it which modules to target. For efficiency, each visible volume of space, such as the room of a building, has its own register.</p>
<p>The key step in materializing a body in a given space is for its module to be inserted into that space&#8217;s register. For dematerializing, it is deleted from the register. Once it is registered, anyone looking in that direction will see that module&#8217;s virtual body. The Matrix cannot let a software module insert itself arbitrarily into a register, since that could violate the conservation of mass if it led to an object&#8217;s materializing in an area that has a conscious observer.</p>
<p>Registers for unobserved spaces are not constrained in this way. If nobody is watching a room and its entrances, then a body can safely materialize in it without observably breaking the simulated laws of physics.</p>
<p>This does not mean that the laws of physics break down as soon as all observers leave a room. The table and chair do not start to float around against the law of gravity when nobody is looking. Rather, the Matrix simply does not bother to run the simulation for a room that nobody is looking at. In its register, it retains details of where each object is, but the room is no longer rendered as visual and tactual imagery.</p>
<p>So, when the <em>Nebuchadnezzar</em>&#8216;s computer wants to materialize a rebel, it must find some unobserved room, and insert the data module for the rebel&#8217;s body into the register for that room. Subsequently, if someone else enters the room, he will see the rebel just like any other object in the room. And the rebel can walk out of the room into any other part of the Matrix world in the normal manner. This is how rebels materialize in the Matrix without causing explosions or breaching the integrity of the simulation.</p>
<p>When a rebel exits, the module that simulates her body is deleted from the register. This must happen only when the body is not being observed. There is, however, an intermediate state, &#8220;imperception,&#8221; which effectively takes the body out of the virtual world even while the data module is still in the register. This is an emergency procedure that the <em>Nebuchadnezzar</em>&#8216;s software uses for fast escapes.</p>
<p>Although the Matrix software cannot insert or delete a module while its object is being observed, it does allow any module to change its appearance. The agents use it whenever they enter the world. An agent never materializes or dematerializes, but changes the appearance of another person&#8217;s avatar to match the personal qualities of the agent.</p>
<p>To make a rebel imperceptible, the <em>Nebuchadnezzar</em>&#8216;s computer changes the body&#8217;s visible appearance to be transparent; and the body&#8217;s mechanical resistance to that of the air. From an observer&#8217;s perspective, the body has melted into air. From a software perspective, the data module is still on the register but simulating a body indistinguishable from thin air. Later, when the scene is no longer being observed by anybody, the module will be deleted.</p>
<p>We see this happen only once, when Morpheus leaves the subway. Once the <em>Nebuchadnezzar</em>&#8216;s computer has located his avatar, it sends an instruction to make it invisible. This does not affect the whole avatar at once: the module has to calibrate its appearance to match exactly its surroundings. The first part of the body to receive the instruction is the nervous tissue of the ear, and this at first glows bright white, before settling down to a state of transparency. The rest of the body follows. Its appearance oscillates around whatever is visible in the background, settling down to transparency: where Morpheus stood, we see the background shimmer momentarily. The solidity of the body then fades: moments after Morpheus&#8217;s body has become invisible, the telephone handset that had rested in his hand drops, slowly at first, toward the ground. The observed sequence is consistent not with the sudden deletion of the body&#8217;s module, but rather with its changing its appearance.</p>
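<p>The two rules at work here, that a module may be inserted into or deleted from a register only while its space is unobserved, but that a registered module may change its appearance at any time, can be captured in a toy model. The following Python sketch uses hypothetical names and is only an illustration of the scheme described above, not of anything shown in the film:</p>
<pre>
# Toy model of the register scheme: insertion and deletion are refused
# while a space is observed; appearance changes are always permitted.

class Avatar:
    def __init__(self, name):
        self.name = name
        self.appearance = name      # what an observer would see

class Room:
    """One visible volume of space, with its own register of modules."""
    def __init__(self):
        self.register = []          # modules currently in this space
        self.observed = False       # is a conscious observer watching?

    def materialize(self, avatar):
        # Inserting a module into an observed space would visibly
        # violate the conservation of mass, so it is refused.
        if self.observed:
            raise RuntimeError("cannot materialize in an observed space")
        self.register.append(avatar)

    def dematerialize(self, avatar):
        if self.observed:
            raise RuntimeError("cannot delete a module while observed")
        self.register.remove(avatar)

    def make_imperceptible(self, avatar):
        # Appearance changes are allowed even under observation, so an
        # observed body can melt into thin air and be deleted later.
        avatar.appearance = "transparent"

    def what_do_i_see(self):
        return [a.appearance for a in self.register]

subway = Room()
morpheus = Avatar("Morpheus")
subway.materialize(morpheus)           # fine: nobody is watching yet
subway.observed = True
subway.make_imperceptible(morpheus)    # fine even while observed
print(subway.what_do_i_see())          # ['transparent']
subway.observed = False
subway.dematerialize(morpheus)         # the deferred deletion, now safe
</pre>
<p>The deferred deletion at the end mirrors the subway scene: the appearance changes first, under observation, and the module itself is removed only once nobody is watching.</p>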
<h2>HARD LINES</h2>
<p>Telephones play a key role in entering and leaving the Matrix. But the rebels do not travel through the telephone lines as energy pulses. There is no device at the end of the telephone for reconstructing a human body from data: all you would get is noise in the earpiece. Furthermore, the bandwidth of a telephone line is too narrow to ship an entire human being. Finally, nothing at all ever really travels along the lines in the Matrix world, as they are only virtual.</p>
<p>Instead of being a conduit for transporting dematerialized rebels, the telephone line is a means of navigation. It pinpoints where a rebel is to enter or leave the Matrix.</p>
<p>To enter the vast Matrix requires specifying where the avatar is to materialize. To get an avatar into the Matrix world, the rebels must use some strictly physical navigation. This is done with the telephone network, which has penetrated every corner of the inhabited world with electronic devices, each of which has a unique, electronically determined label. Without knowing anything of human society and its conventions, the physics modules of the Matrix can determine where any given telephone number terminates.</p>
<p>How are the rebels to give a telephone number to the Matrix? They must dial it, but they cannot simply pick up a handset and make a call to a number inside the Matrix world, for any handset in the <em>Nebuchadnezzar </em>is connected to the real world telephone network, not the Matrix&#8217;s virtual network. Inside the Matrix, a call must be placed subtly, without observably breaching the simulated laws of electromechanics.</p>
<p>To see how this can be done, we need to know something of the infrastructure of the Matrix. Monolithic computer systems are unreliable, so the Matrix is instead an assemblage of independent modules, each having a unique &#8220;network address.&#8221; For a module to communicate with another, it will put a data message on the network with the address of the intended destination. Neither module need know where the other one is inside the electronic hardware of the Matrix computer. They might be inches apart, or a mile away.</p>
<p>This scheme is robust and flexible. There is no central hub, and individual modules can be plugged into, or taken out of, the network without disturbance. Conversely, the rebels can easily hack into it. Once they are linked into the network, their equipment can simply pretend to be another module. It can place data messages onto the system, which will be routed just like authentic messages, and be received and read by the addressed module. So, to initiate a telephone call, the crew will place a data message on the network, addressed to any module that simulates an aerial for receiving calls from cell phones. Some such node will pick up and read the counterfeit data message just as if the message had been sent by a bona fide source. On getting this message, the aerial module will carry out its role in handling a telephone call.</p>
<p>The <em>Nebuchadnezzar</em>&#8216;s operator maintains contact with rebels who are in the Matrix even while the hovercraft is moving, so they must use radioports onto the network. The rebels might have installed their own rogue radio receiver—mechanically securing it in some dark corner, and plugging its data cable into a spare socket of a router. More likely, the Matrix itself uses radio as part of its network infrastructure, and the rebels broadcast their counterfeit messages on the same frequency.</p>
<p>Materializing or dematerializing, however, needs a network address, which is obtained as follows. When the <em>Nebuchadnezzar </em>makes a &#8220;phone call&#8221; into the Matrix, it places on the network a packet saying &#8220;Place this call for (212) 123-4567&#8221; or whatever the telephone number is, together with the <em>Nebuchadnezzar</em>&#8216;s own network address as a return label, such as 9.54.296.42. When the call is picked up, the Matrix will return a data packet, addressed to the <em>Nebuchadnezzar</em>, saying &#8220;Message for 9.54.296.42: call connected to telephone (212) 123-4567.&#8221; All the <em>Nebuchadnezzar</em>&#8216;s computer has to do is listen for its own address, and it will find attached to it the network address of the telephone equipment.</p>
<p>As soon as the answering machine picks up the incoming call, the <em>Nebuchadnezzar </em>will get the network address of that destination.</p>
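<p>A minimal sketch of that handshake, under the same caveat: the address table and field names below are hypothetical, with only the telephone number and the return label 9.54.296.42 taken from the example above.</p>
<pre>
# A minimal sketch of the "phone call as address lookup" handshake.
# The address table and field names are invented for illustration.

NEB = "9.54.296.42"                                # return label from the text
PHONE_MODULES = {"(212) 123-4567": "7.12.88.3"}    # hypothetical address table

def handle_call_request(packet):
    """What an aerial module might do: connect the call, then reply to
    whoever asked, using only the return label found in the packet."""
    module_address = PHONE_MODULES[packet["number"]]
    return {
        "to": packet["reply-to"],
        "text": f"call connected to telephone {packet['number']}",
        "module-address": module_address,
    }

reply = handle_call_request({"number": "(212) 123-4567", "reply-to": NEB})
assert reply["to"] == NEB          # the ship listens for its own address
print(reply["module-address"])     # the fix the ship needs: 7.12.88.3
</pre>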
<p>Essentially the same job must be done when a rebel leaves the Matrix world. In order to disengage the rebel from his or her avatar, the <em>Nebuchadnezzar</em>&#8216;s computer must again get a fix on the avatar&#8217;s location within the virtual world. As before, it is not enough to locate the avatar&#8217;s virtual body in terms that relate to human culture. It is no use to say that Neo is at 56th and Lexington. Rather, it needs a network address that the Matrix&#8217;s operating system can follow. Of course, the <em>Nebuchadnezzar </em>gets it by calling a telephone in the Matrix world, which must be answered for the network address to be passed back to the <em>Nebuchadnezzar</em>. Once that has happened, the avatar&#8217;s module can be deleted from the register for that location.</p>
<p>Why don&#8217;t the crew navigate their exits with the stylish cell phones that all the rebels carry? Why hunt for a land line (called a &#8220;hard line&#8221; in the film) while under hot pursuit from the agents? The answer is that the cell phones are not part of the Matrix world and do not have network addresses known to the Matrix software. The cell phone is projected into the Matrix world by the <em>Nebuchadnezzar</em>&#8216;s computer, along with the avatar&#8217;s body and clothes—and the weapons that Neo and Trinity eventually bring in with them. The software that simulates the cell phones runs inside the <em>Nebuchadnezzar</em>&#8216;s computer, not the Matrix&#8217;s, so the rebels must find a land line—and land lines are somewhat scarce in an era when everyone has a cell phone.</p>
<h2>THE BUGBOT</h2>
<p>Before Neo is taken to meet Morpheus, the agents insert a robotic bug into him. Trinity extricates the bugbot before it can do any harm. But what was the bugbot for? Given that it operates inside the human body, the bugbot should be as small as possible. Yet, it is clearly much bigger than the miniature radio beeper that would be needed for tracking Neo&#8217;s whereabouts. Trinity says that Neo is &#8220;dangerous&#8221; to them before he is cleaned. We can infer that the bugbot is actually a munition, probably a Semtex device that will detonate when it hears Morpheus&#8217;s voice, killing Neo, Morpheus, and everyone else in the room.</p>
<p>Just before it is implanted, the bugbot takes on the appearance of an animate creature, with claws writhing. Yet, after Trinity has jettisoned it out of the car window, it returns to an inert form. It is another illustration of the agents&#8217; limited use of the shapeshifting loophole in the Matrix software, which lets an object transform its properties under programmed commands.</p>
<h2>PERCEPTIONS IN THE MATRIX</h2>
<p>At dinner on the <em>Nebuchadnezzar</em>, Mouse ponders how the Matrix decided how chicken meat should taste, and wonders whether the machines got it wrong because they are unable to experience tastes.</p>
<p>A nonconscious machine cannot experience color any more than taste. A computer can store information about colored light, such as a digitized photograph, but it does so without a glimmer of awareness of the conscious experience of color. The digitized picture will evoke conscious colors only when someone looks at it. All other sensations that you can be conscious of will elude the digital computer.</p>
<p>The feel of silk, the texture of the crust of a piece of toast, feelings of nausea or giddiness: these are all unavailable to insentient machines. This being so, Mouse could have doubted whether the Matrix would know what anything should taste, smell, look, sound, or feel like.</p>
<p>But the Matrix doesn&#8217;t need to experience the perceptual qualities to get them right. As we have seen, the Matrix feeds its signals into the incoming nerves where they enter the brain, not into the deeper nerve centers. So when you eat (virtual) fried chicken inside the Matrix, the Matrix will activate nerves from the tongue and nose, and the brain will interpret them as taste sensations. What the Matrix puts in will be a copy of the train of electrical impulses that would actually be produced if you were eating meat. Because of the way that the Matrix has been wired into the brain, it has less freedom than Mouse assumed. Whilst the Matrix cannot know tastes itself, it can nonetheless know which chemosensory cells in a human&#8217;s nose and mouth yield the requisite smell and taste.</p>
<h2>NEO&#8217;S MASTERY OF THE AVATAR</h2>
<p>For purists of science-fiction plausibility, Neo&#8217;s superhuman control over his avatar body is a troubling element in the film. The final triumphal scene, where Neo flies like Superman, has especially come under criticism. But is it completely at odds with what we have inferred about the Matrix? And how does Neo transcend his human limits?</p>
<p>The Matrix interacts with the brain, but the brain in turn affects the body. When Neo is hurt in training, he finds blood in his mouth. He asks Morpheus, &#8220;If you are killed in the Matrix, you die here?&#8221; and gets the cryptic reply: &#8220;The body cannot live without the mind.&#8221; But it cuts both ways; ultimately, Neo&#8217;s avatar is killed inside the Matrix, causing the vital functions to cease in his real body.</p>
<p>Mental states and beliefs can affect the body in several ways. In the placebo effect, the belief that a pill is a medicine can cure an illness; in hypnosis, imagining a flame on the wrist can induce blisters. In total virtuality, the mind accepts completely what is presented. If the Matrix signals that the avatar&#8217;s body has died, then the mind will shut down the basic organs of the heart and lungs. Actual death will inevitably ensue, unless fast action is taken to get the heart pumping again.</p>
<p>In the climactic scene, Agent Smith kills Neo&#8217;s avatar within the Matrix. Neo&#8217;s brain accepts this fate: it stops his heart and loses conscious awareness. His real brain, however, retains enough oxygenated blood to keep it functioning for approximately three minutes, after which it would begin to suffer irreversible damage and, a few minutes later, brain death. During this time, the auditory cortex keeps on working and digests what Trinity says, albeit unconsciously. Trinity&#8217;s message is comprehended by Neo&#8217;s subconscious mind, and a deep realization that the Matrix world is illusory crystallizes in his mind. At an intellectual level, Neo already believed this, but now he knows it at the visceral level of the mind, the level that interfaces with his physiology. Empowered by the insight that his avatar&#8217;s death is not his death, Neo regains control of his avatar—not only resurrecting it but attaining superhuman powers: the avatar can stop bullets, and fly into the air.</p>
<p>Neo&#8217;s new powers contrast with the rigid compliance with simulated physical law that the Matrix generally enforces. They reveal that Neo has gained direct access to the software modules that simulate his avatar&#8217;s body. That raises two questions: why does the avatar software accept commands to transform itself, when normally it strictly follows a physical simulation? And how can Neo&#8217;s brain issue such commands, which are obviously outside the scope of the normal muscular signals?</p>
<p>The software that simulates the avatar must have a special port, intended for use only by agents, which accepts commands to change the internal properties of the avatar&#8217;s body. Agents use this facility to embody themselves in human avatars. Like all software, the avatar module will obey such commands wherever they originate, provided that they are correctly formulated. We saw earlier how the <em>Nebuchadnezzar</em>&#8216;s computer used this transformative power to make Morpheus disappear from the subway station. Now Neo&#8217;s brain is directly using the same command port.</p>
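<p>A speculative sketch of that back door follows. The class, its property names, and the command format are all invented; the point is only that the port checks whether a command is well formed, never where it came from.</p>
<pre>
# A speculative sketch of the "special port" argument. Class, property
# names, and command format are invented for illustration.

class AvatarModule:
    def __init__(self):
        self.properties = {"opacity": 1.0, "strength": 1.0}

    def motor_port(self, signal):
        # The ordinary route: signals here can only flex virtual muscles,
        # within the simulated laws of physics.
        pass

    def command_port(self, command):
        # The transformation back door: any well-formed command is obeyed,
        # whether it comes from an agent, the Nebuchadnezzar's computer,
        # or (eventually) Neo's brain.
        if all(key in self.properties for key in command):
            self.properties.update(command)

morpheus = AvatarModule()
morpheus.command_port({"opacity": 0.0})   # the subway disappearance
print(morpheus.properties)
</pre>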
<p>Commands to transform the body cannot travel on the wires that carry the regular muscular signals from the brain to the avatar module. So, they use some of the many other, seemingly redundant, data lines that terminate throughout the rest of the brain. That those lines are hooked up at all on the Matrix end is a spin-off from the Matrix architects&#8217; use of general-purpose interfaces. When a newborn human baby is connected to the software module that runs its avatar, there is no way to predetermine which wires carry which data streams. So, at the Matrix end, each line is free to connect to any data port of the avatar module. Some data ports emit simulated signals from virtual eyes and other sense organs, and they will connect with the brain&#8217;s sensory cortex; others will accept motor commands to carry out simulated contractions of virtual muscles, and they will link up with the motor cortex. In a feedback process that mirrors how the natural plasticity of the brain is molded to its function, useful connections are strengthened and the useless are weakened. As a baby grows into an infant, it gains feedback through using the simulated senses and muscles of the avatar, and therefore its brain builds up the normal strong connections to the conventional input and output channels. But it lacks the abstract concepts needed to use the special port that accepts transformation commands. So the brain&#8217;s connection with those lines atrophies. Nevertheless, the hardware for that potential connection remains in place. In Neo&#8217;s kung fu training, his brain rediscovers the abandoned data lines, and he starts to issue rudimentary transformations, giving his avatar&#8217;s muscles superhuman strength. Only with the deep insight that he gains from being woken after his avatar&#8217;s death does he acquire the mental attitude needed to harness that transformative function fully.</p>
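<p>A toy use-it-or-lose-it model can illustrate the atrophy argument. The update rule and the numbers are illustrative assumptions only, not a claim about how the Matrix, or real neural plasticity, works.</p>
<pre>
# A toy use-it-or-lose-it model of the feedback process described above.
# The update rule and all numbers are illustrative assumptions.

connections = {"sensory": 1.0, "motor": 1.0, "command-port": 1.0}

def grow(used_lines, connections, rate=0.1):
    """Strengthen lines that carry useful traffic; let the rest atrophy."""
    for line in connections:
        factor = (1 + rate) if line in used_lines else (1 - rate)
        connections[line] *= factor

# An infant's traffic exercises senses and muscles, never the command port:
for _ in range(50):
    grow({"sensory", "motor"}, connections)

print(connections)   # the command-port weight has withered toward zero
</pre>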
<p>The existence of the transformational back door into the avatar software is a security hole that the architects of the Matrix never imagined would be used by mere humans—but now it threatens the very existence of the Matrix, as Neo exploits the power it gives him.</p>
<h2>CONSCIOUSNESS AND THE MATRIX</h2>
<p>The last question I will address in this essay is a complex one, and one that continues to be explored and debated in scientific and philosophical circles. Can machines be conscious? In everyday life, machines are so dumb that we can ignore this question, and so we do not have an established criterion for judging whether the intelligent machines of science fiction are conscious. How similar must a machine be to a human for it to be conscious?</p>
<p>Humans have a cluster of properties that always hang together: they have conscious perceptions and emotional feelings, they have opinions and beliefs, intuition and intelligence, they use language, and they are alive and warm-blooded, and have a biological brain. We do not, in everyday life, have to separate out those concepts and decide which ones are necessary and sufficient for sentience. The properties all come as a package. In contrast, the lower animals are like us but do not use language and are not as intelligent as we are. So, it is believed that the higher animals probably have basic conscious perceptions—such as colors and sounds, heat and cold—much as we do, but they lack the superstructure of thought. But what about machines that are intelligent and use language, but are not made of biological tissue? Could they be conscious?</p>
<p>To respond rationally to this emotive challenge, we need to be clear about the ideas that are involved. The commonest and most damaging conflation is that of &#8220;intelligence&#8221; and &#8220;consciousness.&#8221; Alan Turing, in his celebrated paper that introduced the Turing Test, used the terms interchangeably—but mathematicians are notorious for playing fast and loose with their terms. Philosophers, whose trademark is the careful delineation of concepts, have always insisted on maintaining the distinction. Intelligence is the capacity to solve problems, while consciousness is the capacity for the subjective experience of qualities.</p>
<p>As we shall see, intelligence can be attained without consciousness.<sup><a href="#2">2</a></sup> A digital computer can be programmed to perform intelligent tasks such as playing chess and understanding language by well-defined deterministic processes, without any need to introduce enigmatic conscious experiences into the software. On the other hand, a conscious being can have subjective experiences—such as seeing the color red, or feeling anger—without needing to use intelligence to solve any problems. An android could be vastly more intelligent than any human and still lack any glimmer of interior mental life; conversely, a creature might be profoundly stupid and still have subjective experiences.</p>
<p>Agent Smith is an example of a machine that manifests humanlike behavior: words and gestures that, if you witnessed them in a human, you would immediately regard as showing conscious emotions and volitions. Indeed, it is the immediacy of the interpretation that is deceptive. When you see someone laugh with joy, or scream in pain, you do not knowingly infer the person&#8217;s mental state from those outward signs. Rather, it is as if you see the emotions directly. Yet, we know from accomplished actors that these signs of emotion can be faked. Therefore, you are indeed making an inference, albeit an automatic one. It is a job of philosophy to scrutinize such automatic inferences. When you see another human being emoting, your inference is not based wholly on what you see, but also on background information (such as whether the person is acting on a stage). More fundamentally, you are relying on the reasonable assumption that the person&#8217;s behavior arises from a biological brain just as yours does. Whenever those premises are undermined, you inevitably revise any inferences you have made from the emoting. If the emoting stops and people around you clap, you realize it was a piece of street theatre, and the person was only acting out those emotions. Or, if the person has a nasty car accident that breaks open his head, revealing electronic circuitry instead of a brain, you realize that it was only an android, and you may conclude that it was only simulating emotions.</p>
<p>A key step in the inference is the premise that the emotion plays a role in the causal loop that produces the outward words and gestures. If, instead, we have established that the observed words and gestures are wholly explained in some other way, without involving those emotions, then the inference collapses. The exterior emoting behavior then ceases to count as evidence for an interior emotional experience. If we know that an actor&#8217;s words and gestures are scripted, then we cease to regard them as evidence for an inward mental state. Likewise, if we know that the words and gestures of an android or avatar are programmed, then they too cease to support any inference of a mental state.</p>
<p>In an android, or in a software simulation of a human such as an agent, words and gestures are produced by millions of lines of programmed software. The software advances from instruction to instruction in a deterministic manner. Some instructions move pieces of information around inside memory, others execute calculations, others send motor signals to actuators in the body. Each line of code references objective memory locations and ports in the physical hardware. It may do so symbolically, and it may do so via sophisticated data structures, for example, using the tag &#8220;vision-field&#8221; to reference the stabilized and edge-enhanced data from the eye cams. Nevertheless, nowhere in the software suite does the code break out of that objective environment and refer to the enigmatic contents of consciousness. Nor could the programmer ever do so, since she would need an objective, third-person pointer to the conscious experience—which, being a subjective, first-person thing, cannot be labeled with such a pointer.</p>
<p>Everything that the android says and does is fully accounted for by its software. There is no explanatory gap left for machine consciousness to fill. When the android says, &#8220;I see colors and feel emotions just as humans do,&#8221; we know that those words are produced by deterministic lines of software that function perfectly well without any involvement of consciousness. It is because of this that the android&#8217;s emoting does not provide an iota of evidence for any interior mental life. All the outward signs are faked, and the programmer knows in comprehensive detail how they are faked.</p>
<p>This point is systematically ignored by the mathematicians and engineers who enthuse about artificial intelligence. You have to go next door, to the philosophy department, to find people who accord due importance to it. Even if, by some unknown means, the android possessed consciousness, it could never tell us about it. As we have seen, everything the android says is determined by the software. Even if, somewhere in the depths of its circuit boards, there was a ghostly glimmer of conscious awareness or volition, it could never influence what the android says and does.</p>
<p>Could it be that the information in the computer just <em>is </em>the conscious experience? This argument is popular with information engineers, as it seems to allow them to gloss over the whole mind-body problem. It is flawed because information and conscious experience have different logical structures. Namely, information exists only as an artifact of interpretation; but experience does not stand in need of interpretation in order for you to be aware of it. If I give you a disk holding numerical data (21, 250, 11, 47; 22, 250, 15, 39; etc.), those numbers could mean anything. In one program, they are meteorological measurements—temperature, humidity, rainfall. In another, they are medical—pulse rate, blood pressure, body fat. The interpretation has no independent reality; the numbers have no inherent meaning by themselves. Conscious experience is fundamentally different. If you jam your thumb in a door, your sensation does not need first to be interpreted by you as pain. It immediately presents as pain. Nor can you reinterpret it as some other sensation, such as the scent of a rose. Conscious experiences have real, subjectively witnessed qualities that do not depend for their existence on being interpreted this way or that. They intrinsically involve some quality over and above mere information.</p>
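<p>The point can be made concrete in a few lines of code: the same four numbers read as weather under one schema and as medical data under another, and nothing in the data itself chooses between them. The field names below are arbitrary inventions.</p>
<pre>
# The point about interpretation, made concrete: the same four numbers
# read as "weather" under one schema and as "medical" under another.
# Both schemas are arbitrary; nothing in the data itself decides.

record = (21, 250, 11, 47)

as_weather = dict(zip(("temperature", "humidity", "rainfall", "wind"), record))
as_medical = dict(zip(("pulse", "blood_pressure", "body_fat", "age"), record))

print(as_weather)
print(as_medical)
</pre>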
<p>Another popular argument is to appeal to &#8220;emergence.&#8221; Higher-level systems are said to &#8220;emerge&#8221; from lower-level systems. The classic example is that of thermodynamic properties, such as heat and temperature, which emerge from the statistical behavior of ensembles of molecules. The concept of &#8220;temperature&#8221; just does not exist for an isolated molecule, although billions of those molecules collectively do have a temperature. In like manner, it has been suggested, consciousness emerges from the collective behavior of billions of neurons, which individually could never be conscious on their own. But emergent properties are, in fact, artifacts of how we describe the world, and have no objective existence outside of mathematical theories. An ensemble of molecules may be described in terms of either the trajectories of individual molecules or their aggregate properties, but the latter are invented by human observers for the sake of simplification. The external reality comprises only the molecules: the statistical properties, such as average kinetic energy, exist only in the mind of the physicist. Likewise, any dynamic features of the aggregate behavior of brain cells exist only in the models of the neuroscientists. The external reality comprises only the brain cells. Yet, as you know, when you jam your thumb in the door, the pain is real and present in the moment; it is not a theoretical construct of a brain scientist.</p>
<p>So there are good reasons for believing that machines are not conscious. But wouldn&#8217;t these arguments apply equally to brains? Surely a brain is just a bioelectrochemical machine? It obeys deterministic programs that are encoded in the genetic and neural wiring of the brain. Yet, if our argument that machines are not conscious applies equally to brains, then the argument must be flawed, since we know that our own brains are indeed conscious!</p>
<p>The answer is that there are certain processes in brain tissue that involve nondeterministic quantum-mechanical events. And, working through the chaotic dynamics of the brain, those minute phenomena can be amplified into overt behavior. The nondeterminism opens a gateway for consciousness to take effect in the workings of the brain.</p>
<p>As we saw earlier, you can report only the conscious experiences that are in the causal loop that gives rise to the speech acts. If you can report that you are in pain, then the pain sensation must exert a causal influence somewhere in the chain of neural events that governs what you say and write. A step that is physically nondeterministic provides a window of opportunity for consciousness to enter into that causal chain. Since we, as humans, know that we do express our conscious perceptions, we can infer that there must be some such nondeterminism somewhere in the brain. So far, quantum-mechanical events constitute the only known candidate for this. For example, Roger Penrose and Stuart Hameroff have formulated a detailed theory of how quantum actions in the microtubules of brain cells could play this role. The jury is still out on whether the microtubules really are the locus at which consciousness enters the causal chain.</p>
<p>A conventional, deterministic computer has no such gateway into consciousness. So androids and virtual avatars that are driven by computers of that kind cannot express conscious awareness, and their behavior therefore can never be evidence for consciousness. But if a machine were built that used quantum computation in the same way that the brain does, then there is no philosophical reason why that machine could not have the same gateway to consciousness that a living being does. This is not because the quantum module lets the machine carry out computations that a classical computer cannot do. Whatever the quantum computer can do, a classical one can also do, albeit more slowly. Rather, it is the specific implementation of the quantum computer that provides the bridge into conscious processes.</p>
<p>In <em>The Matrix, </em>there is no reason to think that the machines are equipped with the kind of quantum computation needed to access consciousness. Quantum computation is not mentioned in the film, and there is circumstantial evidence that the Matrix and its agents are devoid of conscious thought.</p>
<p>Therefore the agents—which are software modules within the Matrix—are intelligent but mindless automata. For the most part, the agents behave unimaginatively, and we might naively think that this corroborates their lack of awareness. Yet, Agent Smith exhibits initiative and seems, in his speech to Morpheus, to evince a conscious dislike of the human world. But is he genuinely conscious, or only mimicking humans? In fact, Smith gives himself away when he says about the human world, &#8220;It&#8217;s the smell, if there is such a thing . . . I can taste your stink and every time I do, I fear that I&#8217;ve somehow been infected by it.&#8221; Smith&#8217;s own logical integrity obliges him to doubt the existence of that noncomputable quality that humans talk about: the conscious experience of smell. When Smith says, &#8220;. . . the smell, if there is such a thing,&#8221; he is exhibiting the mark of the automaton. This is corroborated when he then tells Morpheus that he can &#8220;taste your stink,&#8221; revealing that Smith simply does not understand the differentiation of senses in the human mind. For a computer, data are interchangeable; but for a human, tastes, smells, colors, sounds, and feels are irreducibly different. This fact eludes Agent Smith.</p>
<p>Smith is mimicking human behavior as a tactic to trick Morpheus into cooperation. As the interrogation is getting nowhere, Brown suggests, &#8220;Perhaps we are asking the wrong questions.&#8221; So Smith pretends to talk like a human, to gain Morpheus&#8217;s empathy. Needless to say, the tactic fails completely.</p>
<hr />
<p><span style="font-size: xx-small;">1. In the oldest form of Buddhism, Theravada, the two major forms of meditation are Vipassana (the Pali word for &#8220;insight&#8221;) and its complement Samatha (&#8220;tranquility&#8221;). Vipassana consists in systematically attending to the individual elements that make up the contents of consciousness. It involves persistently turning away from the ceaselessly arising tide of chatter in the mind. Over time, the chatter subsides, and preconscious activity becomes more readily observed. Laboratory data support claims that long-term practitioners acquire a conscious awareness of brain microprocesses, possibly down to the cellular level. See Shinzen Young&#8217;s works. </span></p>
<p><span style="font-size: xx-small;"><a name="2"></a>2. For an alternative perspective, see <a href="/meme/frame.html?main=/articles/art0552.html%20" target="_top">Kurzweil&#8217;s essay</a> in this volume. —Ed. </span></p>
<p><em>© 2003 <a href="http://benbellabooks.com/cgi-bin/merchant2/merchant.mv?Screen=PROD&amp;Store_Code=BB&amp;Product_Code=RP&amp;Category_Code=RP" target="_blank">BenBella Books</a>. Published on KurzweilAI.net with permission. </em></p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/glitches-in-the-matrix-and-how-to-fix-them/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>THE HUMAN MACHINE MERGER: ARE WE HEADED FOR THE MATRIX?</title>
		<link>https://www.thekurzweillibrary.com/the-human-machine-merger-are-we-headed-for-the-matrix</link>
		<comments>https://www.thekurzweillibrary.com/the-human-machine-merger-are-we-headed-for-the-matrix#respond</comments>
		<pubDate>Sun, 02 Mar 2003 22:54:14 +0000</pubDate>
								<dc:creator>Ray Kurzweil</dc:creator>
		
		
				<category><![CDATA[classics]]></category>
		<category><![CDATA[Fix]]></category>
		<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/the-human-machine-merger-are-we-headed-for-the-matrix</guid>
		<description><![CDATA[Most viewers of The Matrix consider the more fanciful elements--intelligent computers, downloading information into the human brain, virtual reality indistinguishable from real life--to be fun as science fiction, but quite remote from real life. Most viewers would be wrong. As renowned computer scientist and entrepreneur Ray Kurzweil explains, these elements are very feasible and are quite likely to be a reality within our lifetimes.]]></description>
			<content:encoded><![CDATA[<p><em>To be published in</em> Taking the Red Pill: Science, Philosophy and Religion in <em>The Matrix</em> (<a href="http://benbellabooks.com/cgi-bin/merchant2/merchant.mv?Screen=PROD&amp;Store_Code=BB&amp;Product_Code=RP&amp;Category_Code=RP" target="_blank">Ben Bella Books</a>, April 2003). <em>Published on KurzweilAI.net March 3, 2003.</em></p>
<p><em>The Matrix </em>is set in a world one hundred years in the future, a world offering a seemingly miraculous array of technological marvels—sentient (if malevolent) programs, the ability to directly download capabilities into the human brain, and the creation of virtual realities indistinguishable from the real world. For most viewers these developments may appear to be pure science fiction: interesting to consider, but of little relevance to the world outside the movie theatre. But this view is shortsighted. In my view, these developments will become a reality within the next three to four decades.<span id="more-80825"></span></p>
<p>I&#8217;ve become a student of technology trends as an outgrowth of my career as an inventor. If you work on creating technologies, you need to anticipate where technology will be at points in the future so that your project will be feasible and useful when it&#8217;s completed, not just when you started. Over a few decades of anticipating technology in this way, I&#8217;ve developed mathematical models of how technologies in different areas are developing.</p>
<p>This has given me the ability to invent things that use the materials of the future, not just limiting my ideas to the resources we have today. Alan Kay has noted, &#8220;To anticipate the future we need to invent it.&#8221; So we can invent with future capabilities if we have some idea of what they will be.</p>
<p>Perhaps the most important insight that I&#8217;ve gained, one that people are quick to agree with but very slow to really internalize with all of its implications, is the accelerating pace of technical change itself.</p>
<p>One Nobel laureate recently said to me: &#8220;There&#8217;s no way we&#8217;re going to see self-replicating nanotechnological entities for at least a hundred years.&#8221; And yes, that&#8217;s actually a reasonable estimate of how much work it will take. It&#8217;ll take a hundred years of progress, at today&#8217;s rate of progress, to get self-replicating nanotechnological entities. But the rate of progress is not going to remain at today&#8217;s rate; according to my models, it&#8217;s doubling every decade.</p>
<p>We will make a hundred years of progress at today&#8217;s rate of progress in 25 years. The next ten years will be like twenty years of progress at today&#8217;s rate, and the following ten will be like forty. The 21st century will therefore be like 20,000 years of progress—at today&#8217;s rate. The twentieth century, as revolutionary as it was, did not have a hundred years of progress at today&#8217;s rate; since we accelerated up to today&#8217;s rate, it really was about 20 years of progress. The 21st century will be about a thousand times greater, in terms of change and paradigm shift, than the 20th century.</p>
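<p>The arithmetic can be checked in a few lines, assuming each decade delivers twice the equivalent progress of the one before, with the first decade counting as twenty years at today&#8217;s rate:</p>
<pre>
# A back-of-the-envelope check of the arithmetic above, under the
# assumption of one doubling of the rate of progress per decade.

equivalent_years = 0
decade_worth = 20           # "the next ten years will be like twenty"
for _ in range(10):         # ten decades in the 21st century
    equivalent_years += decade_worth
    decade_worth *= 2       # the rate doubles every decade

print(equivalent_years)     # 20460 -- roughly the 20,000 years in the text
</pre>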
<p>A lot of these trends stem from thinking about the implications of Moore&#8217;s Law. Moore&#8217;s Law refers to integrated circuits and famously states that the computing power available for a given price will double every twelve to twenty-four months. Moore&#8217;s Law has become a synonym for the exponential growth of computing.</p>
<p>I&#8217;ve been thinking about Moore&#8217;s Law and its context for at least twenty years. What is the real nature of this exponential trend? Where does it come from? Is it an example of something deeper and more profound? <em>As I will show, </em>the exponential growth of computing goes substantially beyond Moore&#8217;s Law. Indeed, exponential growth goes beyond just computation, and applies to every area of information-based technology, technology that will ultimately reshape our world.</p>
<p>Observers have pointed out that Moore&#8217;s Law is going to come to an end. According to Intel and other industry experts, we&#8217;ll run out of space on an integrated circuit within fifteen years, because the key features will only be a few atoms in width. So will that be the end of the exponential growth of computing?</p>
<p>That&#8217;s a very important question as we ponder the nature of the 21st century. To address this question, I put 49 famous computers on an exponential graph. Down at the lower left-hand corner is the data processing machinery that was used in the 1890 American census (calculating equipment using punch cards). In 1940, Alan Turing developed a computer based on telephone relays that cracked the German Enigma code and gave Winston Churchill a transcription of nearly all the Nazi messages. Churchill needed to use these transcriptions with great discretion, because he realized that using them could tip off the Germans prematurely.</p>
<p>If, for example, he had warned Coventry authorities that their city was going to be bombed, the Germans would have seen the preparations and realized that their code had been cracked. However, in the Battle of Britain, the English flyers seemed to magically know where the German flyers were at all times.</p>
<p>In 1952, CBS used a more sophisticated computer based on vacuum tubes to predict the election of President Eisenhower. In the upper right-hand corner is the computer sitting on your desk right now.</p>
<p><a href="/articles/images/chart03.jpg"><img loading="lazy" decoding="async" src="/images/chart03.jpg" border="0" alt="" width="375" height="295" /></a></p>
<p>One insight we can see on this chart is that Moore&#8217;s Law was not the first but the fifth paradigm to provide exponential growth of computing power. Each vertical line represents the movement into a different paradigm: electro-mechanical, relay-based, vacuum tubes, transistors, integrated circuits. Every time a paradigm ran out of steam, another paradigm came along and picked up where that paradigm left off.</p>
<p>People are very quick to criticize exponential trends, saying that ultimately they&#8217;ll run out of resources, like rabbits in Australia. But every time one particular paradigm reached its limits, another, completely different method would continue the exponential growth. They were making vacuum tubes smaller and smaller but finally got to a point where they couldn&#8217;t make them any smaller and maintain the vacuum. Then transistors came along, which are not just small vacuum tubes. They&#8217;re a completely different paradigm.</p>
<p>Every horizontal level on this graph represents a multiplication of computing power by a factor of a hundred. A straight line in an exponential graph means exponential growth. What we see here is that the rate of exponential growth is itself growing exponentially. We doubled the computing power every three years at the beginning of the century, every two years in the middle, and we&#8217;re now doubling it every year.</p>
<p>It&#8217;s obvious what the sixth paradigm will be: computing in three dimensions. After all, we live in a three-dimensional world and our brain is organized in three dimensions. The brain uses a very inefficient type of circuitry. Neurons are very large &#8220;devices,&#8221; and they&#8217;re extremely slow. They use electrochemical signaling that provides only about 200 calculations per second, but the brain gets its prodigious power from parallel computing resulting from being organized in three dimensions. Three-dimensional computing technologies are beginning to emerge. There&#8217;s an experimental technology at MIT&#8217;s Media Lab that has 300 layers of circuitry. In recent years, there have been substantial strides in developing three-dimensional circuits that operate at the molecular level.</p>
<p>Nanotubes, which are my favorite, are hexagonal arrays of carbon atoms that can be organized to form any type of electronic circuit. You can create the equivalent of transistors and other electrical devices. They&#8217;re physically very strong, with 50 times the strength of steel. The thermal issues appear to be manageable. A one-inch cube of nanotube circuitry would be a million times more powerful than the computing capacity of the human brain.</p>
<p>Over the last several years, there has been a sea change in the level of confidence in building three-dimensional circuits and achieving at least the hardware capacity to emulate human intelligence. This has raised a more salient issue, namely that &#8220;Moore&#8217;s Law may be true for hardware but it&#8217;s not true for software.&#8221;</p>
<p>From my own four decades of experience with software development, I believe that is not the case. Software productivity is increasing very rapidly. As an example from one of my own companies, in 15 years, we went from a $5,000 speech-recognition system that recognized a thousand words poorly, without continuous speech, to a $50 product with a hundred-thousand-word vocabulary that&#8217;s far more accurate. That&#8217;s typical for software products. With all of the efforts in new software development tools, software productivity has also been growing exponentially, albeit with a smaller exponent than we see in hardware.</p>
<p>Many other technologies are improving exponentially. When the genome project was started about 15 years ago, skeptics pointed out that at the rate at which we can scan the genome, it will take 10,000 years to finish the project. The mainstream view was that there would be improvements, but there was no way that the project could be completed in 15 years. But the price-performance and throughput of DNA sequencing doubled every year, and the project was completed in less than 15 years. In twelve years, we went from a cost of $10 to sequence a DNA base pair to a tenth of a cent.</p>
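<p>A quick consistency check, assuming the improvement was smooth over the twelve-year span, shows that this is indeed roughly one doubling of price-performance per year:</p>
<pre>
# A consistency check on the sequencing numbers above, assuming the
# cost per base pair fell smoothly over the twelve-year span.

import math

cost_start, cost_end, years = 10.0, 0.001, 12   # $10 down to a tenth of a cent
fold = cost_start / cost_end                    # a 10,000-fold improvement
print(math.log2(fold) / years)                  # ~1.1 halvings of cost per year
</pre>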
<p>Even longevity has been improving exponentially. In the 18th century, every year we added a few days to human life expectancy. In the 19th century, every year, we added a few weeks. We&#8217;re now adding about 120 days every year to human life expectancy. And with the revolutions now in an early stage in genomics, therapeutic cloning, rational drug design, and the other biotechnology transformations, many observers including myself anticipate that within ten years we&#8217;ll be adding more than a year, every year. So, if you can hang in there for another ten years, we&#8217;ll get ahead of the power curve and be able to live long enough to see the remarkable century ahead.</p>
<p>Miniaturization is another very important exponential trend. We&#8217;re making things smaller by a factor of 5.6 per linear dimension per decade. Bill Joy, in the essay following this one, recommends that we essentially forgo nanotechnology. But nanotechnology is not a single unified field, worked on only by nanotechnologists. Nanotechnology is simply the inevitable end result of the pervasive trend toward making things smaller, which we&#8217;ve been doing for many decades.</p>
<p><img loading="lazy" decoding="async" src="/images/chart19.jpg" alt="" width="375" height="295" /></p>
<p>Above is a chart of computing&#8217;s exponential growth, projected into the 21st century. Right now, your typical $1000 PC is somewhere between an insect and a mouse brain. The human brain has about 100 billion neurons, with about 1,000 connections from one neuron to another. These connections operate very slowly, on the order of 200 calculations per second, but 100 billion neurons times 1,000 connections creates 100 trillion-fold parallelism. Multiplying that by 200 calculations per second yields 20 million billion calculations per second, or, in computing terminology, 20 billion MIPS. We&#8217;ll have 20 billion MIPS for $1000 by the year 2020.</p>
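<p>Spelled out, that brain-capacity arithmetic is as follows:</p>
<pre>
# The brain-capacity arithmetic from the paragraph above, spelled out.

neurons = 100e9                 # 100 billion neurons
connections_per_neuron = 1_000
calcs_per_connection = 200      # calculations per second per connection

total_cps = neurons * connections_per_neuron * calcs_per_connection
print(f"{total_cps:.0e} calculations per second")   # 2e+16
print(f"{total_cps / 1e6:.0e} MIPS")                # 2e+10, i.e., 20 billion MIPS
</pre>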
<p>Now that won&#8217;t automatically give us human levels of intelligence, because the organization, the software, the content and the embedded knowledge are equally important. Below I will address the scenario in which I envision achieving the software of human intelligence, but I believe it is clear that we will have the requisite computing power. By 2050, $1000 of computing will equal one billion human brains. That might be off by a year or two, but the 21st century won&#8217;t be wanting for computational resources.</p>
<p>Now let&#8217;s consider the virtual-reality framework envisioned by <em>The Matrix</em>—a virtual reality which is indistinguishable from true reality. This will be feasible, but I do quibble with one point. The thick cable entering Neo&#8217;s brainstem made for a powerful visual, but it&#8217;s unnecessary; all of these connections can be wireless. Let&#8217;s go out to 2029 and put together some of the trends that I&#8217;ve discussed. By that time, we&#8217;ll be able to build nanobots, microscopic-sized robots that can go inside your capillaries and travel through your brain and scan the brain from inside. We can almost build these kinds of circuits today: we can&#8217;t yet make them quite small enough, but we can make them fairly small.</p>
<p>The Department of Defense is developing tiny robotic devices called &#8220;Smart Dust.&#8221; The current generation is one millimeter—that&#8217;s too big for this scenario—but these tiny devices can be dropped from a plane, and find positions with great precision. You can have many thousands of these on a wireless local area network. They can then take visual images, communicate with each other, coordinate, send messages back, act as nearly invisible spies, and accomplish a variety of military objectives.</p>
<p>We are already building blood-cell-sized devices that go inside the blood stream, and there are four major conferences on the topic of &#8220;bioMEMS&#8221; (biological Micro Electronic Mechanical Systems). The nanobots I am envisioning for 2029 will not necessarily require their own navigation. They could move passively through the bloodstream and, as they travel past different neural features, communicate with them the same way that we now communicate with different cells within a cell phone system.</p>
<p>Brain-scanning resolution, speeds, and costs are all exploding exponentially. With every new generation of brain scanning we can see with finer and finer resolution. There&#8217;s a technology today that allows us to view many of the salient details of the human brain. Of course, there&#8217;s still no full agreement on what those details are, but we can see brain features with very high resolution, provided the scanning tip is right next to the features. We can scan a brain today and see the brain&#8217;s activity with very fine detail; you just have to move the scanning tip all throughout the brain so that it&#8217;s in close proximity to every neural feature.</p>
<p>Now how are we going to do that without making a mess of things? The answer is to send the scanners inside the brain. By design, our capillaries travel by every interneuronal connection, every neuron and every neural feature. We can send billions of these scanning robots, all on a wireless local area network, and they would all scan the brain from inside and create a very high-resolution map of everything that&#8217;s going on.</p>
<p>What are we going to do with the massive database of neural information that develops? One thing we will do is reverse-engineer the brain, that is, understand the basic principles of how it works. This is an endeavor we have already started. We already have high-resolution scans of certain areas of the brain. The brain is not one organ; it comprises several hundred specialized regions, each organized differently. We have scanned certain areas of the auditory and visual cortex, and have used this information to design more intelligent software. Carver Mead at Caltech, for example, has developed powerful, digitally controlled analog chips based on biologically inspired models from the reverse engineering of portions of the visual and auditory systems. His visual sensing chips are used in high-end digital cameras.</p>
<p>We have demonstrated that we are able to understand these algorithms, but they&#8217;re different from the algorithms that we typically run on our computers. They&#8217;re not sequential and they&#8217;re not logical; they&#8217;re chaotic, highly parallel, and self-organizing. They have a holographic nature in that there&#8217;s no chief-executive-officer neuron. You can eliminate any of the neurons, cut any of the wires, and it makes little difference—the information and the processes are distributed throughout a complex region.</p>
<p>Based on these insights, we have developed a number of biologically inspired models today. This is the field I work in, using techniques such as evolutionary &#8220;genetic algorithms&#8221; and &#8220;neural nets,&#8221; which use biologically inspired models. Today&#8217;s neural nets are mathematically simplified, but as we get a more powerful understanding of the principles of operation of different brain regions, we will be in a position to develop much more powerful, biologically inspired models. Ultimately we can create and recreate these processes, retaining their inherently massively parallel, digitally controlled analog, chaotic, and self-organizing properties. We will be able to recreate the types of processes that occur in the hundreds of different brain regions, and create entities—they actually won&#8217;t be in silicon, they&#8217;ll probably be using something like nanotubes—that have the complexity, richness, and depth of human intelligence.</p>
<p>Our machines today are still a million times simpler than the human brain, which is one key reason that they still don&#8217;t have the endearing qualities of people. They don&#8217;t yet have our ability to get the joke, to be funny, to understand people, to respond appropriately to emotion, or to have spiritual experiences. These are not side effects of human intelligence, or distractions; they are the cutting edge of human intelligence. It will require a technology of the complexity of the human brain to create entities that have those kinds of attractive and convincing features.</p>
<p>Getting back to virtual reality, let&#8217;s consider a scenario involving a direct connection between the human brain and these nanobot-based implants. There are a number of different technologies that have already been demonstrated for communicating in both directions between the wet, analog world of neurons and the digital world of electronics. One such technology, called a neuron transistor, provides this two-way communication. If a neuron fires, this neuron transistor detects that electromagnetic pulse, so that&#8217;s communication from the neuron to the electronics. It can also cause the neuron to fire or prevent it from firing.</p>
<p>For full-immersion virtual reality, we will send billions of these nanobots to take up positions by every nerve fiber coming from all of our senses. If you want to be in real reality, they sit there and do nothing. If you want to be in virtual reality, they suppress the signals coming from your real senses and replace them with the signals that you would have been receiving if you were in the virtual environment.</p>
<p>In this scenario, we will have virtual reality from within and it will be able to recreate all of our senses. These will be shared environments, so you can go there with one person or many people. Going to a Web site will mean entering a virtual-reality environment encompassing all of our senses, and not just the five senses, but also emotions, sexual pleasure, humor. There are actually neurological correlates of all of these sensations and emotions, which I discuss in my book <em>The Age of the Spiritual Machines</em>.</p>
<p>For example, surgeons conducting open-brain surgery on a young woman (while she was awake) found that stimulating a particular spot in her brain would cause her to laugh. The surgeons thought that they were just stimulating an involuntary laugh reflex. But they discovered that they were stimulating the perception of humor: whenever they stimulated this spot, she found everything hilarious. &#8220;You guys are just so funny standing there&#8221; was a typical remark.</p>
<p>Using these nanobot-based implants, you will be able to enhance or modify your emotional responses to different experiences. That can be part of the overlay of these virtual-reality environments. You will also be able to have different bodies for different experiences. Just as people today project their images from Web cams in their apartment, people will beam their whole flow of sensory and even emotional experiences out on the Web, so you can, à la the plot concept of the movie <em>Being John Malkovich</em>, experience the lives of other people.</p>
<p>Ultimately, these nanobots will expand human intelligence and our abilities and facilities in many different ways. Because they&#8217;re communicating with each other wirelessly, they can create new neural connections. These can expand our memory, cognitive faculties, and pattern-recognition abilities. We will expand human intelligence by expanding its current paradigm of massive interneuronal connections as well as through intimate connection to non-biological forms of intelligence.</p>
<p>We will also be able to download knowledge, something that machines can do today that we are unable to do. For example, we spent several years training one research computer to understand human speech using the biologically inspired models—neural nets, Markov models, genetic algorithms, self-organizing patterns—that are based on our crude current understanding of self-organizing systems in the biological world. A major part of the engineering project was collecting thousands of hours of speech from different speakers in different dialects and then exposing this to the system and having it try to recognize the speech. It made mistakes, and then we had it adjust automatically, and self-organize to better reflect what it had learned.</p>
<p>Over many months of this kind of training, it made substantial improvements in its ability to recognize speech. Today, if you want your personal computer to recognize human speech, you don&#8217;t have to spend years training it in the same painstaking way, as we need to do with every human child. You can just load the evolved models: this is called &#8220;loading the software.&#8221; So machines can share their knowledge. We don&#8217;t have quick downloading ports on our brains. But as we build nonbiological analogs of our neurons, interconnections, and neurotransmitter levels where our skills and memories are stored, we won&#8217;t leave out the equivalent of downloading ports. We&#8217;ll be able to download capabilities as easily as Trinity downloads the program that allows her to fly the B-212 helicopter.</p>
<p>When you talk to somebody in the year 2040, you will be talking to someone who may happen to be of biological origin but whose mental processes are a hybrid of biological and electronic thinking processes, working intimately together. Instead of being restricted, as we are today, to a mere hundred trillion connections in our brain, we&#8217;ll be able to expand substantially beyond this level. Our biological thinking is flat; the human race has an estimated 10<sup>26</sup> calculations per second, and that biologically determined figure is not going to grow. But nonbiological intelligence is growing exponentially. The crossover point, according to my calculations, is in the 2030s; some people call this the Singularity.</p>
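<p>For a rough sense of where a figure on that order comes from, multiply the per-brain estimate derived earlier by the world population; the population figure below is an assumption added for illustration, not a number from the essay:</p>
<pre>
# Where a figure on the order of 10^26 can come from: the per-brain
# estimate derived earlier times a rough world population. The
# population figure is an assumption, not from the essay.

per_brain_cps = 2e16        # ~20 million billion calculations per second
population = 6e9            # roughly the world population circa 2003

print(f"{per_brain_cps * population:.0e}")   # ~1e+26 calculations per second
</pre>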
<p>As we get to 2050, the bulk of our thinking—which in my opinion is still an expression of human civilization—will be nonbiological. I don&#8217;t believe that the Matrix scenario of malevolent artificial intelligences in mortal conflict with humans is inevitable. At that point, the nonbiological portion of our thinking will still be human thinking, because it&#8217;s going to be derived from human thinking. Its programming will be created by humans, or created by machines that are created by humans, or created by machines that are based on reverse-engineering of the human brain or downloads of human thinking, or one of many other intimate connections between human and machine thinking that we can&#8217;t even contemplate today.</p>
<p>A common reaction to this is that this is a dystopian vision, because I am &#8220;replacing humanity with the machines.&#8221; But that&#8217;s because most people have a prejudice against machines. Most observers don&#8217;t truly understand what machines are ultimately capable of, because all the machines that they&#8217;ve ever &#8220;met&#8221; are very limited, compared to people. But that won&#8217;t be true of machines circa 2030 and 2040. When machines are derived from human intelligence and are a million times more capable, we&#8217;ll have a different respect for machines, and there won&#8217;t be a clear distinction between human and machine intelligence. We will effectively merge with our technology.</p>
<p>We are already well down this road. If all the machines in the world stopped today, our civilization would grind to a halt. That wasn&#8217;t true as recently as thirty years ago. In 2040, human and machine intelligence will be deeply and intimately melded. We will become capable of far more profound experiences of many diverse kinds. We&#8217;ll be able to &#8220;recreate the world&#8221; according to our imaginations and enter environments as amazing as that of <em>The Matrix</em>, but, hopefully, a world more open to creative human expression and experience.</p>
<p><span style="font-size: x-small;"><strong>SOURCES</strong></span></p>
<p>BOOKS</p>
<p>Kurzweil, Ray, <em>The Age of Spiritual Machines: When Computers Exceed Human Intelligence </em>(Penguin USA, 2000).</p>
<p><em>© 2003 <a href="http://benbellabooks.com/cgi-bin/merchant2/merchant.mv?Screen=PROD&amp;Store_Code=BB&amp;Product_Code=RP&amp;Category_Code=RP" target="_blank">BenBella Books</a>. Published on KurzweilAI.net with permission. </em></p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/the-human-machine-merger-are-we-headed-for-the-matrix/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Synthespianism and anthropomorphization of computer graphics</title>
		<link>https://www.thekurzweillibrary.com/synthespianism-anthropomorphization-of-computer-graphics</link>
		<comments>https://www.thekurzweillibrary.com/synthespianism-anthropomorphization-of-computer-graphics#respond</comments>
		<pubDate>Wed, 02 Oct 2002 06:35:12 +0000</pubDate>
		
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=98956</guid>
		<description><![CDATA[The anthropomorphization of computer graphics has been a classiccase of exponential growth powered by technology, art, commerceand culture. Funding for military and aerospace applications likenuclear weapons design, weather prediction and flight simulationpaid for much of the initial heavy lifting required to build thefoundation of the computer graphics industry during the 1960&#8217;s andearly 1970&#8217;s. As the [&#8230;]]]></description>
			<content:encoded><![CDATA[<p>The anthropomorphization of computer graphics has been a classiccase of exponential growth powered by technology, art, commerceand culture. Funding for military and aerospace applications likenuclear weapons design, weather prediction and flight simulationpaid for much of the initial heavy lifting required to build thefoundation of the computer graphics industry during the 1960&#8217;s andearly 1970&#8217;s.</p>
<p>As the sophistication of graphics software marched forward and the cost of computing slid downward, the annual SIGGRAPH film and video show became the crucible in which technologists, filmmakers and artists were introduced to one another: the baton of computer graphics design was passed from those who wrote the programs required to create imagery to those who perceived and exploited the incredible communicative potential of this fledgling medium.</p>
<p>Along the way, the natural predilection to create computer graphics in the image of ourselves has led to a striking body of creative endeavor, ever growing in realism and resolution, which now knocks at the door of imperceptibility; that is, the rendering of human performances indistinguishable from flesh-and-blood performances. Some of the steps in that evolution are traced here from a personal perspective, and some speculations on future developments are presented.</p>
<p>Initial attempts at simulating human body motion took several forms at Digital Effects, a company I co-founded in 1978 along with college associates. Hierarchical skeletons were created and keyframe-animated to move in a bipedal fashion, but without any IK (inverse kinematics) solutions available, the results were stilted and difficult to edit. Rotoscoping live-action footage of a subject festooned with witness points at the joints and digitizing the positions of the points on film allowed us to imbue our characters with more lifelike motion, but the process yielded primarily 2-D information that could not be used from all angles.</p>
<p>At that time, 3-D modeling software had been designed for architectural applications and was not capable of modeling the human form in a satisfactory manner. Early attempts at creating and linking facial expressions using software solutions in computer animation were very disappointing and unconvincing at even the most basic levels of lip synchronization.</p>
<p>3-D rotoscoping at Robert Abel and Associates yielded the first motion-captured animation in the commercial project, Sexy Robot, under the technical direction of Frank Vitz, who used two 35mm cameras to triangulate the 3-D positions of witness points on a live subject.</p>
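<p>The underlying geometry is classical stereo triangulation: each camera&#8217;s image of a witness point defines a viewing ray, and the 3-D point is the least-squares intersection of the two rays. A minimal sketch of that computation follows; the camera matrices and image coordinates are invented for illustration, not production data:</p>
<pre>
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one witness point.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image coords."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector of A is the solution
    X = Vt[-1]
    return X[:3] / X[3]               # homogeneous to 3-D

# Two toy cameras one unit apart, both looking down +Z (invented numbers).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate(P1, P2, (0.25, 0.1), (0.0, 0.1)))   # [1.  0.4 4. ]
</pre>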
<p>During the period of 1985-1986, a good deal of seminal research and development in character animation was conducted at Abel as well as Digital Productions and Omnibus Computer Graphics, all three of which joined together and imploded under the weight of their largesse.</p>
<h4>First Synthespian</h4>
<p>While at Digital-Omnibus-Abel (to become known as DOA) I met Diana Walczak, a recent college graduate who was searching for ways to combine science and art. We formed a partnership based on our mutual interest in developing computer-generated characters and came up with sculpture-based solutions to the problem of modeling the human form and creating facial animation.</p>
<p>Diana sculpted a human form in clay over a metal armature that was cast in hydrocal, from which individual body parts could be created and digitized using a magnetic digitizing device, the 3Space Digitizer by Polhemus. The body parts were lined with thin tape to define the optimum topology in polygons and digitized by hand using a magnetic sensor.</p>
<p>These body parts were then assembled digitally into a skeletal hierarchy to form Nestor Sextone, our first Synthespian. Nestor&#8217;s joints were formed by interpenetrating solids (software did not yet exist that would allow for flexors at the joints), which gave us seams similar to those of a plastic action figure. For facial animation, a neutral face was cast in hydrocal, which allowed Diana to make multiple clay copies that could be sculpted into various phonemes of speech and facial expressions.</p>
<p>Larry Weinberg, a programmer from Digital Effects and Omnibus who later would write Poser, contributed software that allowed us to link the various digitized facial expressions together by re-ordering the polygons. With multiple faces re-ordered into exactly the same polygonal topology, we could interpolate from one to another, enabling us to create scripts that could simulate lip synchronization with our soundtracks. Using keyframe animation and Larry&#8217;s facial animation software, Sextone made his screen debut for SIGGRAPH 1988 in a 30-second film in which he campaigns for the presidency of the Synthetic Actors Guild.</p>
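<p>That re-ordering trick is essentially what animators now call blendshape, or morph-target, interpolation: once every expression shares a single vertex ordering, any face in between is a per-vertex linear blend. A minimal sketch, with made-up shapes and timing:</p>
<pre>
import numpy as np

# Two digitized faces sharing one vertex ordering (tiny invented example;
# real heads had thousands of vertices).
neutral    = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0]])
phoneme_oo = np.array([[0.0, 0.1, 0.0], [1.0, 0.1, 0.0], [0.5, 0.8, 0.2]])

def blend(src, dst, t):
    """Per-vertex linear interpolation; t runs 0 to 1 over the transition."""
    return (1.0 - t) * src + t * dst

# Sample a half-second mouth-shape transition at 24 fps.
frames = [blend(neutral, phoneme_oo, k / 11.0) for k in range(12)]
print(frames[6])   # an intermediate mouth shape
</pre>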
<p><img loading="lazy" decoding="async" src="/images/dozo.jpg" alt="" width="180" height="258" class="alignright" />Intrigued by the potential of motion capture to link natural human motion to our synthetic characters, we created Don&#8217;t Touch Me, a music video piece that premiered at SIGGRAPH 1990 in which singer/songwriter Perla Batalla was optically motion captured (by Motion Analysis in Santa Rosa, CA) to drive a singing synthespian called Dozo. By this time, we had suitable flexing software for simple joints like elbows and knees, but the multi-axis requirements for the shoulder joint meant a solution was still several months of development away.</p>
<p>Facial animation was again created by linking digitized sculptures of various facial expressions, and this technique yielded superior results to any of the software solutions of the time that sought to model the musculature of the face. Software solutions would require many years of development before they would surpass the quality and believability of the sculpture-based technique, which allowed for the preservation of facial volume and the illusion of preserved facial muscle integrity during motion.</p>
<p>This is because Diana&#8217;s keyframe sculptures all had appropriate muscle definition and maintained that definition while interpolating from one to the next.</p>
<p><img loading="lazy" decoding="async" src="/images/LuxorGlass.jpg" alt="" width="200" height="145" class="alignright" />Our first stereoscopic synthespians were created for In Search of the Obelisk, a theme park trilogy for the Luxor Hotel in Las Vegas, designed by Doug Trumbull. Using optical motion capture of live dancers, we created the illusion of glass synthespians dancing on a hovering beach that floated over the audience.</p>
<p>Since we used ray tracing to refract the background through the bodies of the dancers, the stereoscopic image perceived by the audience was accurately rendered, with slightly different refractions from the point of view of the left eye as compared to the right, yielding a very realistic illusion reminiscent of the optical properties of a glass object when viewed stereoscopically.</p>
<p><img loading="lazy" decoding="async" src="/images/dreddmodel.jpg" alt="" hspace="10" width="150" height="214" class="alignright" />For the feature film Judge Dredd, digital stunt doubles were created to solve a technical problem: many of the shots in the climactic chase sequence required Sylvester Stallone and Rob Schneider to appear to ride a flying motorcycle that weaves around other flying vehicles and skyscrapers. The close-ups were shot on a green screen stage with a gimbaled prop of the motorcycle and composited into mocon (motion control) footage of the huge model of the city. Other shots required the motorcycle to fly toward camera from a long distance and maneuver in a complex flight path as it whizzed past camera.</p>
<p>These shots could not be photographed, due both to limitations in the length of the green-screen camera rig and to the reluctance of the producers to allow a large, heavy, motion-controlled camera rig to careen within a few feet of their lead actors. We used magnetic motion capture (Ascension Technologies&#8217; Flock of Birds) to obtain the body dynamics of the motorcycle riders during the various changes in attitude of the motorcycle.</p>
<p>Playing back the previsualization on video, the subject (in this case Diana Walczak) was affixed with magnetic trackers and wobbled around on a gimbaled motorcycle mockup in sync with the previz playback. The way her body moved in response to the motion of the bike was captured and applied to photoreal synthetic versions of Stallone and Schneider. For the faces, we used CyberScans for the first time, and the results were satisfactory since the camera never lingered on the faces at close range.</p>
<h4>Organic shape-shifting</h4>
<p>A later project for the feature film X-Men involved the character Mystique, played by Rebecca Romijn-Stamos. Mystique is a shape-shifter who transforms from her scaly blue form into other characters and back again using a combination of live action photography and CG animation. Director Bryan Singer was looking for a transformation that would stand apart from the typical morphing that had spread like a plague through the visual arts, becoming a constant technique used in advertising and in films to change one object into another using simultaneous 2D shape transformation with dissolving texture vertices.</p>
<p><img loading="lazy" decoding="async" src="/images/mystique.jpg" alt="" width="180" height="295" class="aligncenter" /></p>
<p>We designed a technique that allowed for a dimensional transformation that would begin at various locations and spread across and around the limbs in an organic, infectious fashion, accented by 3-D scales bursting through the surface and settling down, like a shaking dog&#8217;s coat, to form the scales on her body. In most cases we used CyberScan data of the outgoing actor, matched to the 3-D position of the actor in the shot as a matting element, to transform into an all-CG Mystique.</p>
<p>This technique required eighteen stages of production to create the multilayered, complex transformation and very careful matching of CG skin, clothing and hair to the live action footage. Although the mandate was to make the CG Mystique appear photoreal, her blue, scaly body was very different from that of a normal person, yielding considerable visual leeway.</p>
<p>For the Revolution Films production of the Jet Li film The One, Jet Li battles his identical doppelganger from another dimension. For many shots, a simple split screen or a Patty Duke-style over-the-shoulder shot would suffice, but for high-speed kung fu battle sequences in which punches and kicks had to land and be felt by the audience, digital face replacement was the technique of choice.</p>
<p>The separation of a facial performance from a physical performance had been accomplished before, in Jurassic Park, in the shot where the velociraptor leaps up from below to attack a child character.</p>
<p>The adult stunt double&#8217;s face was replaced with that of the child actor, but that was simply a composite of photographic elements. In The One, the complex high-speed motion of the subjects during the fight sequences, coupled with the requirement that the two subjects sometimes appear to move at different camera frame rates, required us to develop a fully CG face-replacement solution.</p>
<p>The stunt double, a kung fu expert with a body type very similar to Jet Li&#8217;s, was outfitted with a plastic mask that was milled from a CyberScan of Jet Li&#8217;s face. The mask was equipped with retro-reflective witness points, and the camera was outfitted with a fluorescent, circular light around the lens to ensure that the markers would show up on film.</p>
<div style="width: 252px" class="wp-caption alignright"><img decoding="async" src="/images/theone.jpg" alt="" /><p class="wp-caption-text">Jet Li plays a police officer pursued by his evil alterego from a parallel universe who seeks to kill him and becomeThe One. Advanced face replacement techniques allow Lito fight his twin. Both faces are visible and fully expressivein close-ups.</p></div>
<p>The fight sequences were choreographed so that the force of the impacts would impart proper reaction in the two participants. Using the known positions of the facemask markers, we could determine the precise orientation of the stunt double&#8217;s face on each frame, allowing us to track a CG face over top of his mask.</p>
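<p>One standard way to solve that per-frame alignment is a least-squares rigid fit of the markers&#8217; rest-pose positions onto their recovered positions, via the Kabsch/SVD method; whether the production pipeline used exactly this formulation is an assumption, but the sketch below (with invented marker data) shows the idea:</p>
<pre>
import numpy as np

def rigid_fit(rest, tracked):
    """Best-fit rotation R and translation t mapping rest-pose markers
    onto tracked markers (least squares), via the Kabsch/SVD method."""
    c_rest, c_trk = rest.mean(axis=0), tracked.mean(axis=0)
    H = (rest - c_rest).T @ (tracked - c_trk)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_trk - R @ c_rest

# Invented rest-pose markers and a known 30-degree head turn to verify.
rest = np.array([[0.0, 0, 0], [1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
a = np.radians(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0.0,        0.0,       1]])
tracked = rest @ R_true.T + np.array([0.2, -0.1, 0.5])
R, t = rigid_fit(rest, tracked)
print(np.allclose(R, R_true), np.round(t, 3))  # True [ 0.2 -0.1  0.5]
</pre>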
<p>Using CyberScan data of Jet&#8217;s face, along with high-resolution photographs, we created and rigged a detailed 3-D Jet Li face with blendshapes that would allow us to simulate different facial expressions during the fight. The CG face was then animated to give us the appropriate expression for each sequence, matted into the shot covering up the mask, and blended into the stunt double&#8217;s natural color around the face. Because it was not possible to photograph Li&#8217;s face in the proper dynamic orientation with the proper expression for a given moment of a fight, a CG face was the only solution.</p>
<p>The resulting technology, which allows us to separate the physical performance from the facial performance, has far-reaching implications for the future of filmmaking. First of all, stunt sequences that normally would be staged in such a way that the face of the stunt double is never facing camera can now be staged according to the needs of the director, and the actor&#8217;s face can be inserted accurately and believably. More broadly, the facial performance of an actor who is incapable of the physical aspects of a performance can be composited into the footage of a stunt double to multiply the range of an actor&#8217;s possible roles. Recent projects making use of our technology include inserting an actor&#8217;s face onto stunt doubles who are surfing and riding motorcycles.</p>
<h4>Animation trumps mo-cap</h4>
<p>More interesting from our standpoint is the creation of wholly CG characters and their application to entertainment projects. Universal Studios came to us with the mandate to create the best theme park attraction in the world based on the Spider-Man characters, and we spent three years in production on The Amazing Adventures of Spider-Man, a multimedia, stereoscopic, moving motion-simulator attraction that was to become the flagship of their new theme park, Islands of Adventure in Orlando, Florida.</p>
<div style="width: 229px" class="wp-caption alignright"><img loading="lazy" decoding="async" src="/images/spidermanride.jpg" alt="" width="229" height="216" /><p class="wp-caption-text">The Amazing Adventures of Spider-Man was createdfor Universal's billion-dollar Islands of Adventure themepark in Orlando. It's the first ride in history to combine stereoscopic3D film projected onto giant screens with the latest in motion-basedvehicle technology. This virtual-reality adventure immersesriders in a comic-book battle between Spider-Man and membersof the sinister syndicate as riders move through a 1.5-acreset environment.</p></div>
<p>Working with our head software designer, Frank Vitz, we developed software that would compensate for the viewing position of the moving audience, who would sit in six degrees-of-freedom motion simulators traveling on a track past 13 large reflective screens. The imagery was projected in stereoscopic eight-perf 70mm film.</p>
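<p>The now-standard way to pose that problem is an off-axis (asymmetric) projection: given the rider&#8217;s eye position and the screen&#8217;s corners, compute a frustum whose image plane is the physical screen. Whether the production code took exactly this form is an assumption; a compact sketch of the usual formulation (after Kooima&#8217;s generalized perspective projection) follows:</p>
<pre>
import numpy as np

def off_axis_frustum(eye, ll, lr, ul, near=0.1):
    """Asymmetric view frustum for an eye looking through a rectangular
    screen given by its lower-left, lower-right and upper-left corners."""
    vr = lr - ll; vr /= np.linalg.norm(vr)        # screen right axis
    vu = ul - ll; vu /= np.linalg.norm(vu)        # screen up axis
    vn = np.cross(vr, vu)                         # screen normal
    d = -np.dot(ll - eye, vn)                     # eye-to-screen distance
    left   = np.dot(vr, ll - eye) * near / d
    right  = np.dot(vr, lr - eye) * near / d
    bottom = np.dot(vu, ll - eye) * near / d
    top    = np.dot(vu, ul - eye) * near / d
    return left, right, bottom, top               # glFrustum-style bounds

# A 4 x 3 screen ahead of a rider seated off-center (invented numbers).
ll = np.array([-2.0, -1.5, -5.0])
lr = np.array([ 2.0, -1.5, -5.0])
ul = np.array([-2.0,  1.5, -5.0])
print(off_axis_frustum(np.array([0.5, 0.2, 0.0]), ll, lr, ul))
</pre>
<p>Recomputing these bounds every frame from the vehicle&#8217;s tracked position is what keeps screen-anchored imagery perspective-correct for a moving audience.</p>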
<p>A great deal of attention was paid to matching the physical sets in the ride to the imagery projected onto the screens so that the lines were blurred between the real world and the virtual, projected world. In fact, many of the sets adjacent to the screens were dressed with CG textures that originated from our virtual sets and were scanned onto eight-foot-wide canvas murals so that imagery and sight lines would match up and blend the two worlds into one.</p>
<p>From a design standpoint, our goal was to take the audience into a comic book world that combines the hard key-lighting and saturated-color style of comic art with enough textural detail to feel like a real place. It was a balancing act between stylization and realism that resulted in a unique and exciting environment in which to stage the epic struggle between Spider-Man and a gaggle of super-villains led by Dr. Octopus, one that swirls around the audience whom Spider-Man must protect.</p>
<p>We tested and abandoned motion capture for the project, based on the fact that the superhuman performances of the Marvel characters could be better realized by talented animators using keyframe techniques rather than by animators trying to extend the physical range of motion-captured athletes.</p>
<p>Our first totally original synthespian project was made possible by Busch Entertainment, who gave us virtual &#8220;carte blanche&#8221; to design a ride from the ground up for a new area at Busch Gardens in Williamsburg, VA. With only one word of direction, &#8220;Ireland,&#8221; from the client, we wrote a story called Corkscrew Hill that would exploit the physical parameters available: two 60-person Reflectone motion bases in two identical warehouse spaces.</p>
<div style="width: 250px" class="wp-caption alignright"><img loading="lazy" decoding="async" src="/images/corkscrewhill.jpg" alt="" width="250" height="168" /><p class="wp-caption-text">The Corkscrew Hill computer-animated stereoscopicepic ride experience takes audiences on an adventure to OldIreland, populated with humans and mythical creatures. In thepre-show, the audience shrinks to fit in a magic box. Then theyenter a motion base and are strapped into their seats for themain show: one continuous-point-of-view shot from the box ascharacters carry it on a wild adventure on Corkscrew Hill.SensAble's FreeForm System was used to sculpt characterheads. Pieces of character models were joined with Paraformsoftware. Maya was used for modeling, animation, and rendering.Large-format digital projection was engineered by Electrosonic.</p></div>
<p>We specified very large reflective screens and an open cockpit design for the attraction and—working with the audio-visual engineers at Electrosonic in Burbank, CA—we came up with a digital projection system that would give us film resolution on a large screen, despite the fact that the digital projectors of the day were not individually up to the task.</p>
<p>By rotating four Barco DLP projectors 90 degrees and edge-blending down the middle (using two projectors for each image), we could get stereoscopic image pairs onto the 30 x 40 foot screens at 2048 horizontal by 1280 vertical resolution. Since the brain fuses the left and right images into a single mental image, any image artifacts from the projection were lost in the mental blending process, resulting in excellent stereoscopic imagery. We choreographed a camera move that takes us on an adventure through ancient Ireland, encountering Irish townspeople, a magic flying horse, banshees, a troll, a witch and a griffin.</p>
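<p>The arithmetic behind that configuration is worth spelling out. The numbers are consistent with SXGA-class (1280 x 1024) DLP panels, though that panel spec is an inference from the stated totals, not a documented detail of the installation:</p>
<pre>
# Assumed SXGA-class DLP panels (1280 x 1024); the panel resolution is an
# inference from the stated 2048 x 1280 result, not a documented spec.
panel_w, panel_h = 1280, 1024

rot_w, rot_h = panel_h, panel_w    # a 90-degree rotation swaps the axes

# Two rotated projectors edge-blended side by side per eye (ignoring the
# resolution lost in the blend-overlap region):
eye_w, eye_h = 2 * rot_w, rot_h
print(eye_w, "x", eye_h)           # 2048 x 1280, matching the text
print("projectors needed:", 2 * 2) # two per eye, for a stereo pair
</pre>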
<p>This eight-minute attraction allowed us to create a completely synthetic world and populate it with mythical creatures and characters with a visual style akin to that of a storybook. Again, we opted for keyframed character animation instead of motion capture, which often seems pedestrian when applied to CG characters. When keyframing, an animator enters into and becomes the character, breathing original life into it in a way that cannot be obtained through motion capture, which is in effect the three-dimensional &#8220;xeroxing&#8221; of a physical performance.</p>
<p>In the same way that a caricature of a person looks more like the subject than would a tracing off a photograph, or a good sculpture of a person looks more like them than a life cast, a stylized CG character created by a talented keyframe animator looks more believable and lifelike than one created with motion capture and CyberScans.</p>
<h4>The limits of photorealism</h4>
<p>Looking to the future, one must examine one&#8217;s goals in creating CG life forms. There are those who hold up photorealism as the ultimate goal: to create a synthespian indistinguishable from a live actor. This idea has intrigued, taunted and tormented programmers for 30 years, going back to the films Westworld in 1973 and Looker in 1981. The broad base of development required to accomplish this feat has been gaining momentum at an exponential rate as more applications, competition and funding enter the arena.</p>
<p>There exists a trade-off between what level of realism is possible versus how much computing time can be spent on each frame. We supplied very efficient body databases to Ray Kurzweil&#8217;s <a href="/meme/memelist.html?m=9" target="_top">Ramona</a> project, which was presented at the Technology Entertainment Design (TED) conference in February 2001. This real-time performance took advantage of recent developments in hardware rendering that allowed a fairly sophisticated human figure to be rendered and displayed at 30 frames per second. Through the use of real-time motion capture and voice synthesis, Ray was able to inhabit his female alter ego, Ramona.</p>
<p>The performance was designed within the limitations of the technology, in that the &#8220;camera&#8221; did not venture too close to Ramona&#8217;s face, where the &#8220;efficiency&#8221; of our data would become a liability in terms of image quality. As the camera approaches the subject, the resolution requirements skyrocket, and to render a photorealistic close-up on film requires orders of magnitude more calculation than can be supported by real-time rendering engines.</p>
<p>As Ray points out, computing speeds are increasing at exponential rates, but current technology still gets slammed to the mat when it is applied to creating a synthespian who appears real in every detail. The problem is that we spend so much of our time studying the nuances of facial expression in our colleagues, friends and family that we have become quite expert at spotting flaws. There are many subtle details in a real face, including how the complex muscle system perturbs the skin surface, how light scatters inside the skin, and how surface pores, blemishes and other minute details look and react to light.</p>
<p>A spectacular amount of money was poured into solving these problems in the all-CG feature film Final Fantasy: The Spirits Within, and the results did not pay off at the box office. Many projects have been proposed that would use CG characters to bring deceased actors to the screen for a posthumous encore, but the technology is not yet ready for this task, and many of us cringe at the prospect of this sort of application. The recent release of S1m0ne reiterates the basic problem these sorts of projects face: we can get about 90 percent of the way to photorealism in CG actors, but the last ten percent is extremely expensive and time-consuming in comparison to photographing real actors.</p>
<p>In Final Fantasy, the hair, skin, cloth dynamics and lighting are all in that near-photoreal range that just doesn&#8217;t make it to photoreality, except in stills. In motion, the illusion lacks the subtlety of micro-motion and micro-detail of live action photography, and the results are unsettling and distract from the storyline.</p>
<p>In S1m0ne, the producers opted to use a live actor who was digitally altered to be just slightly idealized through image processing. Coupled with a few shots of a CyberScanned 3-D model being revealed like an orange peel wipe-on, the processed footage told the story adequately and carried the story point of a believable CG human cost-effectively.</p>
<p><img loading="lazy" decoding="async" src="/images/simone_6668F.jpg" alt="" width="200" height="133" class="alignright" />To have used a CG character throughout would have been many times more costly, and it is unlikely that the audience would have believed the CG character could be mistaken for a real actor. Though a valiant attempt, the film presumes a mythical world in which Hollywood producers and the general public have no knowledge of the history and progress of visual effects, computer animation and digital compositing.</p>
<h4>Animator as actor</h4>
<p>The exciting areas to be explored are those where the animator becomes analogous to the actor. Using a robust set of tools and techniques, animators will be able to produce performances of the quality and richness currently created by the finest actors, in a medium free of the constraints of live-action photography. These will be characters, roles, and plots that exploit our intimate familiarity with the human form and its subtleties, but don&#8217;t attempt to recreate photorealistic renderings of it.</p>
<p>When painters developed the skills to recreate realistic images, a golden era of realism followed. But when photography came along and replaced the role of the painter as visual documentarian, painters responded with expressionism and abstraction, modes of image-making only possible in the era of post-realism.</p>
<p>In the same way, after the CG industry is able to reproduce reality in its most intricate detail, the next step will be to build upon that foundation a new and exciting future of non-realistic style. But rather than being limited to the confines of a painted canvas or a physical sculpture, the realm of imagination becomes the only outer limit.</p>
<p>Beyond the capability to achieve photorealism, there is a much more compelling goal of creating entertainment that takes place beyond what and where we can photograph. The writer and director are now the creative overlords, equipped with unlimited theatrical possibilities in terms of locations, characters, storylines and visual style. The entire world of science fiction and fantasy-based literature can be shot &#8220;on location&#8221; without limitation. New stories heretofore inconceivable will be created and brought to the world of the visual arts and entertainment.</p>
<p>In this work, the emphasis will be on the concept. The writer and director stand at the door of a new space that has thus far been explored by precious few—and marvel at the possibilities.</p>
<hr />
<p>Sextone for President Written and Directed by Jeff Kleiser and Diana Walczak © 1988 Kleiser-Walczak<br />
Don&#8217;t Touch Me Directed by Diana Walczak and Jeff Kleiser © 1989 Kleiser-Walczak</p>
<p>In Search of the Obelisk Computer Animation by Kleiser-Walczak. Produced by The Trumbull Company for Circus Circus Enterprises, Inc.</p>
<p>X-Men © 2000 Twentieth Century Fox. All rights reserved. Image courtesy Kleiser-Walczak.</p>
<p>The One © 2001 Revolution Studios Distribution Company, LLC. Property of Sony Pictures Entertainment, Inc. © 2001 Columbia Pictures Industries, Inc. All rights reserved. Image courtesy Kleiser-Walczak.</p>
<p>The Amazing Adventures of Spider-Man © 1999 Universal Studios Escape. A Universal Studios/Rank Group Joint Venture. All rights reserved. Image courtesy Kleiser-Walczak.</p>
<p>Corkscrew Hill Original ride film Written and Directed by Jeff Kleiser and Diana Walczak © 2001 Busch Entertainment Corporation. All rights reserved. Image courtesy Kleiser-Walczak.</p>
<p>Final Fantasy Property of Sony Pictures Entertainment, Inc. © 2001 Columbia Pictures Industries, Inc. All rights reserved.</p>
<p>S1M0NE © 2002 Darren Michaels/New Line Productions</p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/synthespianism-anthropomorphization-of-computer-graphics/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Encompassing education: immersive interfaces improve learning environments</title>
		<link>https://www.thekurzweillibrary.com/encompassing-education-immersive-interfaces</link>
		<comments>https://www.thekurzweillibrary.com/encompassing-education-immersive-interfaces#respond</comments>
		<pubDate>Tue, 17 Sep 2002 06:43:35 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/advunit1-140x104.jpg" width="140" height="104" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false">http://www.thekurzweillibrary.com/?p=98960</guid>
		<description><![CDATA[Originally published in &#8220;2020 Visions: Transforming Education and Training Through Advanced Technologies,&#8221; U.S. Department of Commerce Sept. 17, 2002. Broad dynamic content will feed future education technologies. We will integrate motion and haptic interfaces, display and sound sciences, computer simulation breakthroughs, and next-level communication and information technologies. The vast possibilities created by these merging technologies make [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><em>Originally published in </em><a href="http://www.commerce.gov/opa/press/2002_Sept13_Visions.htm" target="_blank"><em>&#8220;2020 Visions: Transforming Education and Training Through Advanced Technologies</em></a><em>,&#8221; U.S. Department of Commerce Sept. 17, 2002.</em></p>
<p>Broad dynamic content will feed future education technologies. We will integrate motion and haptic interfaces, display and sound sciences, computer simulation breakthroughs, and next-level communication and information technologies. The vast possibilities created by these merging technologies make it crucial to bring together great minds from every discipline to begin building a foundation for the development of massive amounts of evolving content simultaneously and in collaboration with the design of next-generation education technology.</p>
<p>In an effort to illustrate one small but powerful possible piece of the future education puzzle, this conceptual paper includes a vignette set in the 2020s showing a twelve-year-old girl engrossed in a highly comprehensive though very personal learning experience. To create such a functional education system for the coming century, we must find ways to break down long-standing educational shortcomings:</p>
<p>Problem Set 1:</p>
<ul>
<li>Faster and slower students are alienated because teaching is aimed at the average student</li>
<li>Less aggressive students learn less</li>
<li>Teachers don&#8217;t have time to give individualized instruction</li>
</ul>
<p>Solution 1 &#8211; Customize the learning process</p>
<p>Problem Set 2:</p>
<ul>
<li>Students don&#8217;t experience enough</li>
<li>Students have trouble visualizing abstract concepts</li>
<li>Students don&#8217;t utilize enough of their senses</li>
<li>Students don&#8217;t utilize balance and coordination when learning</li>
</ul>
<p>Solution 2 &#8211; Utilize the senses and experience more</p>
<p>Problem Set 3:</p>
<ul>
<li>Disciplines are too separate</li>
<li>There is too much emphasis on grades, rules and directions rather than creativity</li>
<li>The arts have taken a superfluous position in education</li>
<li>Too few students are interested in learning</li>
</ul>
<p>Solution 3 &#8211; Foster a heightened sense of curiosity</p>
<h4>Adventure Learning</h4>
<p>The drawbacks listed above can be resolved by customizing the learning process, allowing students to experience more, and fostering a heightened level of curiosity. Students should pursue their own inspiration and learn at their own levels. It is a well-known fact that we learn more by doing. But how can we be individually exposed to many different ideas or places? By developing a system for students to experience more through increased usage of their senses.</p>
<p>Students should be compelled more toward seeking creative solutions than merely following instructions. Let&#8217;s allow rivers of subjects to mingle in a confluence of endless possibilities and, in the future, incorporate rather than separate the arts. How can technology help us overcome past educational problems, including those caused by technology itself?</p>
<p><img loading="lazy" decoding="async" src="/images/advunit.jpg" alt="" width="400" height="299" /></p>
<p>Though this paper highlights the individual full immersive education experience, a complete, stable, and discrete student learning structure includes teacher, parent/advisor, and group immersive experience nodes. All nodes are interconnected with mobile, home or class communication media.</p>
<p>We will evolve into a more productive society through Adventure Learning. Picture an Adventure Learning structure in which the student is surrounded by four essential education nodes, each of which is connected to the others and to the student. The four surrounding nodes are teacher, parent/advisor, group immersive education experience, and individual full immersive education experience. Mobile, home, or classroom communication media link the nodes to establish a unit of stable and discrete education.</p>
<p>Communication media will become the glue that holds together a more customized and far less simplistic education system. The technologies that will vastly change information and communication are autostereoscopic display systems, 3-D sound, augmented reality, virtual reality, and portable or wearable ubiquitous information machines. Through any or all of these technologies, a teacher can guide or a student can present projects within the node or beyond. And all have access to &#8220;cybraries&#8221; and other information databases.</p>
<p>The group immersive education experience can be an actual or virtual laboratory where students can work together to solve problems and achieve common goals. Telepresence with student avatars would allow students to work closely with other colleagues around the world or beyond. The group immersive experience could also be a variation of the individual immersive experience, placing the group in a larger display dome or providing retinal scanning or other virtual reality eyewear in combination with full-body force-feedback systems.</p>
<p>This discussion will highlight the individual immersive experience portion of the overall system because of its tremendous potential impact on customization, experience and creativity.</p>
<h4>Customize the Learning Process</h4>
<p><img loading="lazy" decoding="async" src="/images/tangitrek.jpg" alt="" width="400" height="401" /></p>
<p>Iona enters the Tangitrek. With a force feedback exoskeleton, motion base, gimbaled harness and autostereoscopic display, a student can go anywhere and do anything.</p>
<p>Fast-forward to the 2020s. A 12-year-old student walks up a curving entry platform and enters a Tangitrek, which biometrically identifies her as Iona Sole. A 10-foot sphere envelops her after the entry platform rotates out of the way. Iona, who has two years of Adventure Learning experience, makes some selections. The system will accommodate and keep track of her preferences and skill levels in different areas. Like many students, Iona&#8217;s focus used to be more on getting good grades than on learning. Now that she&#8217;s developed a strong desire to learn, she is already several years ahead of turn-of-the-millennium educational standards for her age.</p>
<h4>Utilize the Senses and Experience More</h4>
<p>As we continue to increase our knowledge through an accelerating number of virtual techniques, we rapidly lose touch with the tangible world. By further advancing our simulation capabilities we will actually bring real world challenges back into the virtual experience.</p>
<p>The sphere Iona is now inside is the inner of a three-ring gimbal system that allows her complete freedom of rotation. The outer ring is attached to a three-degree-of-freedom motion base that enhances her sense of acceleration by moving up and down, side to side, and forward and back. Iona sits on a small seat and is gently enveloped by a body harness whose sensors make minute size adjustments and lock into place. The body harness is an intelligent force feedback exoskeleton.</p>
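<p>The reason three nested rings give &#8220;complete freedom of rotation&#8221; is that any 3-D orientation can be composed from three successive single-axis rotations, one per ring. A toy sketch (the axis order and angles are illustrative only):</p>
<pre>
import numpy as np

def rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Outer, middle and inner ring angles (radians, invented values). Any
# orientation is reachable, though some alignments momentarily lose a
# degree of freedom (gimbal lock), which the ring design must manage.
orientation = rz(0.4) @ ry(1.1) @ rx(-0.3)
print(np.round(orientation, 3))
</pre>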
<p>Twenty years ago, this haptic, or touch, technology was used for digital sculpting and surgical simulation. Now Iona can feel objects, step on surfaces, and fly through spaces.</p>
<p>Like all students, Iona eagerly begins her adventure with her ongoing individual exploration project. From there she will traverse research branches that help to solve problems for her personal project or enlighten her to related subjects. Iona, an enthusiast of ancient mythology, has been studying the theoretical genetics of creating a winged horse.</p>
<p><img loading="lazy" decoding="async" src="/images/protein.jpg" alt="" width="400" height="295" /></p>
<p>On her virtual mobile scaffold, Iona surfs through a giant chemical model whose surfaces and atomic bonds she can feel.</p>
<p>In her visualization class, where students are coached in picturing imagery in their heads, Iona worked out a DNA protein configuration that her teacher was able to analyze from her brain scan. These days teachers expect a lot of the work to be done in the student&#8217;s head in an effort to exercise more of her brain and to avoid over-immersion.</p>
<p>Iona begins where she left off by selecting and boarding a virtual mobile scaffold—not unlike a surfboard—on which she navigates through a giant model of DNA. She deftly arranges and rearranges the chemical model whose surfaces and atomic bonds she can actually feel.</p>
<p>The photo-realistic autostereoscopic computer-generated imagery surrounding Iona appears to jump right off the inner surface of the plasma display sphere. Though the spherical display surface is never more than ten feet away from Iona, she perceives apparently distant images reaching to infinity. This adventure system has access to all the imagery and information in the entire US Public Cybrary; a student can access information without the commercial intrusions of general cyberspace.</p>
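<p>The geometry behind images &#8220;reaching to infinity&#8221; from a screen only ten feet away is simple similar triangles: with interocular distance e, screen distance d, and uncrossed on-screen disparity s, the two eye rays converge at depth z = e*d/(e - s), which runs off to infinity as s approaches e. A worked sketch with illustrative numbers:</p>
<pre>
def perceived_depth(e, d, s):
    """Depth where the two eye rays converge, by similar triangles.
    e: interocular distance, d: viewer-to-screen distance,
    s: uncrossed on-screen disparity (all in the same units)."""
    return e * d / (e - s)

e, d = 0.065, 3.0                    # 65 mm eyes, screen 3 m away
for s in (0.0, 0.03, 0.06, 0.064):
    print(round(perceived_depth(e, d, s), 1), "m")
# 3.0 m, 5.6 m, 39.0 m, 195.0 m: as disparity nears e, the image
# recedes toward infinity even though the screen never moves.
</pre>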
<p><img loading="lazy" decoding="async" src="/images/dino.jpg" alt="" width="400" height="335" /></p>
<p>Iona closely observes a prehistoric youngster cracking its way out of the egg. She is able to see, hear, smell, and touch simulated objects and environments.</p>
<p>When Iona reaches a juncture relating to bird origins that requires further research, she quickly traverses the Theropoda branches of dinosaur classification and selects Caudipteryx &gt; Hatchlings &gt; Reality Simulation.</p>
<p>Suddenly, Iona is standing in a Caudipteryx nesting grounds in an Early Cretaceous landscape. This world even smells different, as a function of the flora, fauna, and climate. The nanoaromatic system delivers and deletes aromas based on the biochemistry of objects in scenes. Aroma levels can be dialed up or down by the Tangitrek user.</p>
<p>All around Iona, the small flightless theropods forage among the ferns. She picks up two eggs, feels their shape and weight, and vocalizes a few observation notes for future reference. Iona hears the slightest of cracking sounds that seem to come from exactly inside one of the eggs. A binaural 3-D acoustics system accurately positions simulated or recorded sounds with respect to objects that she sees in the spherical display system. Iona closely observes a prehistoric youngster cracking its way out of the egg.</p>
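<p>A crude sketch of the two dominant cues such a system manipulates, interaural time difference and interaural level difference, follows. This is a toy far-field model for illustration, not the system described above:</p>
<pre>
import numpy as np

def binaural(mono, azimuth_deg, sr=44100, head_radius=0.0875):
    """Pan a mono signal using approximate interaural time and level
    differences (Woodworth-style far-field toy model)."""
    az = np.radians(azimuth_deg)                    # 0 = front, 90 = hard right
    itd = head_radius / 343.0 * (abs(az) + abs(np.sin(az)))   # seconds
    delay = int(itd * sr)                           # whole-sample delay
    far_gain = 10 ** (-6.0 * abs(np.sin(az)) / 20)  # up to -6 dB at the side
    near = mono
    far = np.concatenate([np.zeros(delay), mono * far_gain])[: len(mono)]
    if azimuth_deg >= 0:
        return far, near                            # (left, right) channels
    return near, far

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)                  # one-second test tone
left, right = binaural(tone, 60)                    # place it 60 degrees right
</pre>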
<h4>Foster a Heightened Sense of Curiosity</h4>
<p>The industrial revolution, with its inventions and factories, by necessity bred a society in which following sequences of instructions was imperative. This approach has lingered through 20th century education even though we&#8217;ve been transitioning into a society where humans become more creative and machines perform manual or repetitive tasks. Adventure Learning allows students to navigate, investigate and determine their own solutions. Endless interdisciplinary combinations discovered in immersive environments will lead to new kinds of specialists with strong diverse backgrounds.</p>
<p>Iona studies horse anatomy and works on a concept design for the mythical flying horse. Before allowing her to set out on a flying trek to ancient Greece, the system prompts her to solve problems involving English words with Greek origins. Reacting to Iona&#8217;s inadvertent avoidance of language, the system ensures a balanced set of disciplines is reached in each adventure.</p>
<p><img loading="lazy" decoding="async" class="alignleft" src="/images/acropolis.jpg" alt="" width="400" height="300" /> Iona flies over the Acropolis as it may have looked in ancient times. She can converse with synthetic townspeople about their era.</p>
<p>Flying over the Acropolis, Iona can see the structures as they originally looked. Simple hand gestures provide navigation as she observes Synthespians in ancient dress going about their business. She lands on the hill to browse the culture a bit. She makes herself visible and speaks with the simulated Greek townspeople.</p>
<p>Recognizable highlighting allows her to see that there are a few other Tangitrek users in the area. In any cultural recreation, local Synthespians see Iona as one of them and are eager to discuss ideas relating to their era. Iona experiences the original language and can turn translation on as needed. She is directed to a temple full of sculptures of interest. She touches the sculptures to enhance her visual study of them. As she moves her hand over the neck of a marble Pegasus, the Tangitrek informs her to prepare to disembark. Though it&#8217;s difficult to leave the simulator, she smiles with satisfaction, knowing information from today&#8217;s adventure will be accessible from her personal media system.</p>
<h4>Rewinding to the Present</h4>
<p>We should note that though the above-illustrated system demonstrates the exploration of a twelve-year-old girl, the concept has use in education or training for people of all ages. It&#8217;s especially important to awaken the senses of the very young to the underlying structure of nature through geometry, color, and sound. The basics of reading, writing and arithmetic will come quickly and easily after a rich, explorative foundation. Kindergarten as we know it today is a diluted version of the original 1830s invention of Friedrich Froebel, which was intended to let children use their natural ability to discover, reason, and create through the universal language of geometric form. Adventure Learning technology can help us reinvent this nearly lost, yet highly innovative teaching system.</p>
<p>We know that with the development of content to drive a combination of technologies—haptic, autostereoscopic display, 3-D computer simulation, data management, harness and structural materials, personal media communication systems—we will be able to realize a profoundly valuable education system. The customized approach improves student self-esteem, intellectual development, and vocational planning. &#8220;Physical&#8221; experience increases retention, balances the psychological with the intellectual, and brings tangibility to ever-increasing virtual worlds.</p>
<p>When we finally develop education content and technology powerful enough to reawaken the natural curiosity of our students, we will see a transformation in the workplace: a person&#8217;s focus will shift from the pursuit of a paycheck to a daily quest for knowledge and creative solutions.</p>
<p>© 2002 Diana Walczak. Computer-generated imagery created by Patrick Finley. Images composited by Io Kleiser and Diana Walczak. Text and images are intended for demonstration use only.</p>
<div class="footnotes">
<h6>BIBLIOGRAPHY</h6>
<ol>
<li>Brosterman, Norman. <em>Inventing Kindergarten</em>. New York: Harry N. Abrams, Inc., 1997.</li>
<li>Barrett, Paul. <em>National Geographic Dinosaur</em>. Washington, D.C., 2001.</li>
</ol>
</div>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/encompassing-education-immersive-interfaces/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Intuitive music</title>
		<link>https://www.thekurzweillibrary.com/intuitive-music-2</link>
		<comments>https://www.thekurzweillibrary.com/intuitive-music-2#respond</comments>
		<pubDate>Tue, 26 Feb 2002 09:09:42 +0000</pubDate>
		
								<media:thumbnail url="https://www.thekurzweillibrary.com/images/bob-moog-19761-140x187.jpg" width="140" height="187" />
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Bob Moog changed musical history 37 years ago with the invention of the first electronic music synthesizer. On February 26, 2002, he received the prestigious Technical GRAMMY Award for his achievements. Here, he looks at the next 37 years.]]></description>
			<content:encoded><![CDATA[<div id="attachment_109948" style="width: 279px" class="wp-caption alignleft"><a href="http://www.thekurzweillibrary.com/images/bob-moog-1976.jpg"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-109948" class="size-full wp-image-109948" title="bob moog 1976" src="http://www.thekurzweillibrary.com/images/bob-moog-1976.jpg" alt="" width="279" height="373" srcset="https://www.thekurzweillibrary.com/images/bob-moog-1976.jpg 279w, https://www.thekurzweillibrary.com/images/bob-moog-1976-140x187.jpg 140w, https://www.thekurzweillibrary.com/images/bob-moog-1976-259x346.jpg 259w" sizes="auto, (max-width: 279px) 100vw, 279px" /></a><p id="caption-attachment-109948" class="wp-caption-text">Bob Moog in 1976, Cheektowaga, NY. (Image: Photo courtesy of Moog Archives http://MoogArchives.com)</p></div>
<p>In 1964, physicist-musician Dr. Robert Moog invented a new technology that would revolutionize music: the electronic music synthesizer. Like the personal computer and the Web, the synthesizer has put the tools of creating music in the hands of everyone.</p>
<p>After creating and running several companies<sup>1</sup> to develop and market his products, Moog joined Kurzweil Music Systems as VP of New Product Development in 1984, just after the Kurzweil 250 synthesizer, the first electronic instrument to successfully emulate a grand piano, came out.<span id="more-80726"></span></p>
<p>Moog worked on a series of home products and the Kurzweil 150 (which used additive synthesis, allowing musicians to invent new sounds) and the Kurzweil 1000 line, the first instrument to use proprietary sound modeling technology on a single chip.</p>
<p>&#8220;He was a sage advisor on our plans and designs,&#8221; said Ray Kurzweil, who was CEO of Kurzweil Music Systems at that time. &#8220;He was particularly sensitive to the needs of the users, and articulated the musician&#8217;s perspective. He was very interested in new ways of controlling music.&#8221;</p>
<p>This interest in empowering musicians goes back to 1954, when Moog developed his first commercial product, the Theremin Model 201, named after the Russian inventor, Leon Theremin. The first purely electronic musical instrument, the theremin is played simply by moving the hands near two antennas on the device to control pitch and loudness.</p>
<p>Moog&#8217;s current company, <a href="http://www.bigbriar.com" target="_new">Big Briar, Inc.</a><sup>2</sup>, has recently brought back this instrument as the Ethervox<sup>R</sup> MIDI theremin. &#8220;It enables musicians to incorporate theremin-type control in a wide range of musical gestures,&#8221; said Moog. &#8220;So far, few musicians have explored this feature. I believe that use of theremin-type control &#8212; the direct application of hand gestures to control musical material &#8212; will continue to develop, but over a long period of time.&#8221;</p>
<p>One direction in which this kind of technology might develop is suggested by MIT professor Tod Machover&#8217;s &#8220;<a href="http://www.media.mit.edu/hyperins/" target="_new">hyperinstruments</a>&#8221; project, which is developing devices that allow non-musicians to shape and create complex and interesting musical pieces by using gestures or word descriptions (such as musical &#8220;adjectives&#8221;).</p>
<div id="attachment_109951" style="width: 200px" class="wp-caption alignleft"><a href="http://www.thekurzweillibrary.com/images/theremin.jpg"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-109951" class="size-full wp-image-109951" title="theremin" src="http://www.thekurzweillibrary.com/images/theremin.jpg" alt="" width="200" height="177" srcset="https://www.thekurzweillibrary.com/images/theremin.jpg 200w, https://www.thekurzweillibrary.com/images/theremin-140x123.jpg 140w" sizes="auto, (max-width: 200px) 100vw, 200px" /></a><p id="caption-attachment-109951" class="wp-caption-text">Moog&#39;s 1954 Theremin Model 201, his first commercial product. (Image: Photo courtesy of Moog Archives http://MoogArchives.com)</p></div>
<p>How about creating music using real-time motion-capture of body movements, similar to Ray Kurzweil&#8217;s Ramona virtual performance <a href="/ramona-debuts-to-music-industry" target="_new">experiments</a>? Multiple musical parameters could be controlled by making specific body motions. This might be especially interesting in generating &#8220;5.1&#8221; surround-sound CDs and DVDs for home theaters.</p>
<p>&#8220;Multidimensional control of musical material is valuable, but it has to be intuitively accessible,&#8221; advised Moog. That&#8217;s one of the appeals of the current &#8220;vintage synth<sup>3</sup>&#8221; movement, he said, which is &#8220;basically a return to the analog technology that shaped the sound of the pop music of the &#8217;70&#8217;s. Musicians now understand that the versatility, accessibility, and relative ease of control of &#8216;vintage analog&#8217; are valuable musical resources, which are on a par with but complementary to the resources of digital sound production.&#8221;</p>
<p>The big advantage of analog technology is that musicians can continuously control the pitch, timbre (quality), envelope (sound buildup and decay for a note) and other parameters and easily experiment with different effects in real time.</p>
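<p>To make those parameters concrete, here is a minimal digital sketch of a note with a continuous pitch glide and an ADSR (attack/decay/sustain/release) envelope. The numbers are illustrative, and a bare sine wave stands in for the rich waveforms and filters of an analog voice:</p>
<pre>
import numpy as np

sr = 44100

def adsr(n, a=0.05, d=0.1, s=0.7, r=0.2):
    """Attack/decay/sustain/release amplitude envelope (times in seconds)."""
    an, dn, rn = int(a * sr), int(d * sr), int(r * sr)
    sn = max(n - an - dn - rn, 0)
    return np.concatenate([
        np.linspace(0, 1, an),      # attack: ramp up
        np.linspace(1, s, dn),      # decay: fall to the sustain level
        np.full(sn, s),             # sustain: hold
        np.linspace(s, 0, rn),      # release: fade out
    ])[:n]

n = sr                               # a one-second note
t = np.arange(n) / sr
pitch = 220 * 2 ** (t * 0.5)         # continuous glide, 220 Hz to about 311 Hz
phase = 2 * np.pi * np.cumsum(pitch) / sr
note = np.sin(phase) * adsr(n)       # a bare sine stands in for timbre; richer
                                     # waveforms and filters give the analog palette
</pre>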
<p>That&#8217;s why Moog is bringing back the world&#8217;s best-selling analog synthesizer, the Minimoog (popular with rock musicians in the 70s) as the Minimoog<sup>R</sup> Voyager<sup>TM</sup>, which is going into production shortly. The new device adds MIDI control so it can interface with computer software, velocity- and pressure-sensitive keys, a 3-D touchpad, and other improvements.</p>
<p>&#8220;Once production is under way, we engineers here will be designing products that build on the Minimoog tradition.&#8221; The products will be determined by the results of ongoing market research, he added.<br />
<img decoding="async" src="/images/angelicamoog03.jpg" alt="" vspace="10" /><br />
<span class="PhotoCredit">Photo courtesy of Big Briar, Inc.</span></p>
<p><span class="Caption">Minimoog Voyager</span></p>
<p>But Moog is also concerned about a downside of modern electronic instruments. &#8220;While the synthesizer has made it easier to create music &#8212; even simulate an entire orchestra on the desktop &#8212; music, especially mainstream pop music, will continue to become more a craft that is practiced by one person at a time &#8216;offline&#8217; in a studio, and less a real-time group activity (i.e., live performance). If that happens, then I believe that we will have lost a valuable cultural resource. I personally am interested in directing music technology toward live performance. We need lots more activities that bring people together, not isolate them.&#8221;</p>
<p>Moog&#8217;s pioneering work was publicly recognized on Feb. 27, 2002, when the National Academy of Recording Arts &amp; Sciences gave him the Technical GRAMMY Award for &#8220;contributions of outstanding technical significance to the recording field.&#8221;</p>
<p>&#8220;This is a recognition richly deserved for your seminal contributions,&#8221; said Kurzweil. &#8220;I greatly value the years we spent working together at Kurzweil Music Systems during the late 1980s, and the times our paths have crossed since. Your consistently thoughtful insights into the art and science of creating music, and the intimate interaction between the musician and her musical instrument, have always deeply impressed me. I have to say that you&#8217;re one of those people whose ideas I always listen to most carefully.&#8221;</p>
<p>See Ray Kurzweil&#8217;s <a href="http://www.thekurzweillibrary.com/articles/images/finalgrammyad_small.jpg" target="_new">congratulatory remarks</a> honoring Bob Moog, to appear in the 2002 GRAMMY Awards program.</p>
<p><a href="/" target="_new">Bob Moog, Interviewed by Electronicmusic.com</a></p>
<p><a href="/" target="_new">Interview: Robert Moog</a></p>
<p>Amara D. Angelica, editor of KurzweilAI.net, is a musician experimenting with both analog and digital synthesizers.</p>
<h4>Footnotes</h4>
<p>1. The <a href="http://MoogArchives.com/" target="_new">Moog Archives</a>, created by Roger Luther, General Manager of Bob Moog&#8217;s various companies until 1993, chronicles these companies and products.</p>
<p>2. &#8220;We have recently reacquired ownership of the registration of the Moog Music<sup>R</sup> and Minimoog<sup>R</sup> trademarks and are now in the process of changing the name of our company from Big Briar to Moog Music, Inc.&#8221;&#8211;Bob Moog.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/intuitive-music-2/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Decoupling art and affluence</title>
		<link>https://www.thekurzweillibrary.com/decoupling-art-and-affluence</link>
		<comments>https://www.thekurzweillibrary.com/decoupling-art-and-affluence#respond</comments>
		<pubDate>Wed, 20 Feb 2002 08:58:08 +0000</pubDate>
		
		
				<category><![CDATA[Future Visions]]></category>
		<category><![CDATA[resources]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Harold Cohen's AARON has grown immensely as an artist in its own right. In this talk presented at the Thirteenth Innovative Applications of Artificial Intelligence Conference (IAAI-2001), Harold Cohen explores AARON's remarkable journey as a cyberartist.]]></description>
			<content:encoded><![CDATA[<p><em>Originally presented as a talk August 2001.</em></p>
<p>I met my first computer in 1968, when I came from London to take up a one-year visiting professorship at UC San Diego. Four years later I spent a couple of years at the AI Lab at Stanford, and while I was there I started work on a program designed to generate original art. I called it AARON because I assumed I&#8217;d be writing a series of programs and it seemed like a good idea to start with a name from the beginning of the alphabet.<span id="more-80722"></span></p>
<p>There never was a series of programs. I&#8217;m still in San Diego and I&#8217;m still working on AARON today.</p>
<p>Here&#8217;s another way to begin the story.</p>
<p>I started in on a painting career in 1952, and in the succeeding years I showed my work in one of London&#8217;s leading galleries, where a six-week exhibition would be seen by two or three hundred people, and in various museums and international shows, where the audiences may have numbered a couple of thousand. That was still the case when I met my first computer in 1968. But it had changed by 1985, when I represented the US in the World Fair in Tsukuba (fig. 1). A million people went through the US Pavilion in six months and saw AARON making drawings on one of my drawing machines.</p>
<p>In 1995 I did an exhibition at the Computer Museum in Boston, this time using a painting machine (fig. 2). The size of the walk-in audience was conventionally modest, but the PR people reported that through television programs, syndicated newspaper articles, magazine articles&#8211;the entire gamut of media coverage&#8211;my work had reached about thirteen million people.<br />
<img decoding="async" src="/images/coheniaai01.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 1. World Fair, 1985.</span></p>
<p><img decoding="async" src="/images/coheniaai01.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 2. Computer Museum, 1995.</span></p>
<p>AARON entered a new public phase early in 2001. In the preceding months, Ray Kurzweil&#8217;s people had done an excellent job of preparing AARON for a new life on the web, and AARON&#8211;the program itself, not its output&#8211;became available as shareware (fig. 3). Within a few weeks there were tens of thousands of downloads and the number was increasing by two or three hundred a day. That&#8217;s the same number that I said would have seen a conventional gallery exhibition thirty years ago.<br />
<img decoding="async" src="/images/coheniaai03.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 3. Screen image, 2002.</span></p>
<p>These numbers aren&#8217;t going to surprise anyone in an industry where exponential growth has been the norm for the past decade. Yet I&#8217;m only talking here about the growth in the <em>size</em> of a <em>potential</em> audience, not what it takes to <em>reach</em> that audience. The difference is important: art has a cultural role that can&#8217;t be served unless it <em>reaches</em> its audience. For that reason, issues of dissemination have never been far behind issues of production for the artist; the two are not as independent of each other as one might imagine.</p>
<p>For artists today using conventional means of production&#8211;painting, sculpting&#8211;the current technology-driven changes in the <em>available</em> means of dissemination and the resultant growth of <em>potential</em> audience means very little. There is less chance of someone stumbling across one of fifty million web-sites than there is of someone walking into a commercial art gallery by mistake. But artists using the web as a medium rather than a new mode of dissemination may face a similar predicament. What does it mean to use a public medium without a public?</p>
<p>I made a decision at the outset affecting both production and dissemination: on the one hand to produce art by writing a program that produced art and, on the other hand, to exhibit the program producing art rather than exhibiting only the things it produced. Had I not made that decision then in all likelihood I&#8217;d still be disseminating my work to two or three hundred people at a time in conventional commercial galleries.</p>
<p>For reasons that I hope to make clear, I&#8217;m going to be talking as much about the reasons we do things as about the things we do, so I have to ask why I made that decision. Not because I knew what would happen thirty years later, obviously. It was simply that, in a world where few people knew anything about computers and most of them had it wrong, I didn&#8217;t want to present myself as a magician with a magic box. I wanted to clarify, not to mystify. I was acting out of a persistent belief that the health of art is strongly determined by its relationship to the culture it serves and that supports it, and I was groping for an appropriate form for that relationship.</p>
<p>But notice that nothing in that decision had any direct bearing on what I wanted the program to <em>do</em>. We&#8217;re a long way here from aesthetic considerations.</p>
<p>That decision, made thirty years ago, has left me with a number of unanswered questions, one of which I&#8217;m going to try to answer today. As you will see, both the nature of art and the nature of the potential audience figure in my attempted answer.</p>
<p>To introduce the question: I had an email correspondence a while back with someone who said he thought I should make AARON&#8217;s source code available on the web. I responded that I&#8217;d never hesitated to tell people anything they wanted to know about how the program works so that they could write their own programs, but that AARON itself was my intellectual property and I couldn&#8217;t see any reason to give it away. He responded&#8211;echoes of Linux&#8211;that if AARON could be picked up by programmers all over the world we might have generations of AARONs producing original art into the distant future. If I kept it to myself, he said, AARON would come to an end when I stopped working on it.</p>
<p>I replied that he had a point&#8211;which he certainly did!&#8211;but that I didn&#8217;t care for his conclusion. My preference would be to have AARON modify its own performance over time the way a human artist modifies his own performance over time, so that AARON <em>itself</em> could be generating original art into the distant future.</p>
<p>Where&#8217;s the problem? In its current state AARON could go on generating original images forever. Yet both the question and my answer presuppose not just the possibility, but the desirability of a different state; one that could be achieved, as it is by human artists, only through continuous modification.</p>
<p>Well, how about re-writing the program so that it randomly changes the values assigned to some of its variables every so often? With hundreds of variables to change, that should cover the search space pretty thoroughly and come up with some interesting possibilities.</p>
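<p>A minimal sketch of what that naive scheme would amount to, assuming a hypothetical handful of style variables (none of these names are AARON&#8217;s own):</p>
<pre>
import random

# Hypothetical style variables standing in for the hundreds a real
# program would carry; the names are illustrative, not AARON's.
style = {"line_weight": 3.0, "figure_scale": 1.0, "crowding": 0.5}

def mutate(style, amount=0.2):
    """Pick one variable at random and perturb it by up to +/-20%."""
    key = random.choice(list(style))
    style[key] *= 1.0 + random.uniform(-amount, amount)
    return style
</pre>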
<p>That is certainly not what I mean by modifying its own performance over time the way a human artist modifies his own performance over time. The human artist certainly does not wander blindly and randomly through a search space of possibilities, stubbing his toes from time to time on interesting things to do. If you think of &#8220;interestingness&#8221; as a context-free property&#8211;which I do not&#8211;then there are always many &#8220;interesting&#8221; things to do, almost all of which are ignored by the artist just as almost all of the legal moves in a chess game are ignored by the chess master. They fall outside the steadily narrowing window through which the artist surveys the increasingly constrained game to be played out in the future.</p>
<p>The work of any major artist will give you the feeling that he always knew exactly where he was going, but then, any path will look like a path once it has been trodden. Try <em>predicting</em> today what he will do tomorrow and you&#8217;ll get a very different understanding of what &#8220;moving forward&#8221; means. The artist has a notion of direction, not of destination, and he himself can never predict, other than what he certainly will <em>not</em> do, what tomorrow will bring.</p>
<p>As to whether what he does tomorrow will have the cultural role he wants for it: that must depend to a large extent upon the enormously varied demands and expectations of the culture. The only thing I know about that for sure, after fifty years in the field, is that the aesthetic value of a work constitutes only a part of what the culture wants from the artist, and the size of that part appears to be inversely proportional to what it costs.</p>
<p>There&#8217;s an old art world story about a man who has a Picasso original hanging in his living room until one day a friend tells him that it isn&#8217;t an original, it&#8217;s a print; and he takes it out of the living room and hangs it in the toilet. Less apocryphally and at the other extreme of exclusivity, a Warhol print&#8211;a print, not a painting&#8211;sold at Sotheby&#8217;s last year for four and a half million dollars, bringing to the buyer the same <em>aesthetic</em> value it had when it was first sold&#8211;when the artist was still alive and able to make more&#8211;probably for a couple of thousand. What was the buyer buying for the other four million plus?</p>
<p>Exclusivity, fashion, whatever&#8230; mostly, what the buyers of art are buying is the artist&#8217;s reputation, his status. And while status may be awarded temporarily on the basis of a single work or a single exhibition&#8211;as Andy Warhol famously said, everyone can be famous for fifteen minutes&#8211;it is withdrawn quite soon if the artist is not seen to be moving forward, not developing; if, like AARON in its current state, he merely produces an endless succession of original images without ever changing the terms of reference.</p>
<p>So: not simply change over time; <em>purposeful, self-directed</em> change over time.</p>
<p>Conventionally, we would probably say that purposeful change must necessarily require a program&#8217;s assessment of whether or not its output had satisfied some criteria. My own contention, at which I have already hinted, is that the aesthetic criteria that relate to the appearance of individual works are only the final link in a chain of criteria leading far back into the reasons the individual does what he does and believes what he believes. The central question for me, then, is not whether a program can self-modify in order to satisfy internal criteria; it is whether enough of that chain of criteria can ever be internal to a program for it to manifest the self-directed development we expect of human artists.</p>
<p>I don&#8217;t think I have an answer to that question. What I do have, however, is the thirty-year history of a continuously developing program and some insight into what was involved in directing it. By way of illustration, I&#8217;d like to review a few key points in the program&#8217;s history where I can locate particularly important shifts in direction.</p>
<p>Most of this will concern recent developments, but I need to begin by looking at the basis of AARON&#8217;s drawing strategy, which was established very early on, because there&#8217;s a lot about recent developments that won&#8217;t make much sense unless I do.</p>
<p>Briefly, then; the form AARON took initially followed from what seemed rather obvious; that making images for human consumption meant addressing the cognitive processes that enable human beings to attach meaning to marks on flat surfaces. So I concentrated initially (fig. 4) on giving the program the ability to differentiate between closed forms and open forms, figure and ground and so on; the standard things I could get from books. After a few years I sensed that the books weren&#8217;t telling me everything I needed to know and I started to examine what actually happens in young children in the first year or two of drawing. They start with a simple motor activity&#8211;scribbling; backward and forward, round and round&#8211;until one day one of the round and round scribbles migrates outwards to become an enclosing form (fig. 5). Some time later, the scribble disappears, leaving the closed form (fig. 6) as an empty but infinitely malleable container for whatever the individual wants to represent.<br />
<img decoding="async" src="/images/coheniaai04.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 4. Computer generated drawing, ink on paper, 20 x 30, 1975.</span></p>
<p><img decoding="async" src="/images/coheniaai05.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 5. Child&#8217;s drawing.</span></p>
<p><img decoding="async" src="/images/coheniaai06.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 6. Child&#8217;s drawing.</span></p>
<p>I never succeeded in simulating the child&#8217;s performance very well, but simply in terms of the diversity of forms I could see being generated by this two-step strategy&#8211;build a core figure, then draw a line around it&#8211;the results were astonishing (fig. 7). Compared to any other way I could think to generate forms of that complexity, it was practically a computational free lunch and, even had there not been other reasons for adopting it, I suspect that it would have become AARON&#8217;s dominant mode.<br />
<img decoding="async" src="/images/coheniaai07.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 7. Tapestry, 40 x 60, 1983.</span></p>
<p>I think it&#8217;s important to point out that the strategy has nothing in common with current computer graphics methods, which, like the photography they simulate, deal with the light reflected off the surfaces of things. AARON knows nothing of surfaces. The method for generating its enclosing forms is general enough that it can handle any core figure the program can construct from its data and it has undergone only a single modification in twenty years. That means that as the program moved on to figurative imagery, most attention has been required by the design of the core figures&#8211;again, nothing to do with surfaces&#8211;that replaced the scribbles in the earlier work.</p>
<p>By the early &#8216;nineties (fig. 8) I had given AARON some rudimentary knowledge of a few things in the world&#8211;figures and plants and very little else&#8211;and it was making recognizable images: initially in a sort of two-and-a-half-dimensional mode, in which it could make plausible drawings of the body from a single viewing position but couldn&#8217;t do genuinely three-dimensional things like drawing an arm crossing in front of the body; and then, later (fig. 9), with a much richer, fully three-dimensional description of the human figure, comprising data representing the parts as well as procedural knowledge about how the parts articulate, how much the proportions can plausibly vary and something about posture.<br />
<img decoding="async" src="/images/coheniaai08.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 8. Oil on canvas, 54 x 77, 1988.</span></p>
<p><img decoding="async" src="/images/coheniaai09.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 9. Oil on canvas, 54 x 78, 1997.</span></p>
<p>The physical data represented key points <em>within</em> the body&#8211;articulation points, muscular attachments and so on&#8211;and the lines joining them then constituted a set of internal core figures equivalent to the child&#8217;s scribble. Each part of the body&#8211;upper arm, lower arm, torso, thigh and so on&#8211;was rendered as a closed form which resulted from drawing an encompassing line around each core.</p>
<p>That was fine as far as it went, but of course the human viewer doesn&#8217;t think of the body as a jointed doll, a collection of discrete parts; visually, the shoulder is an area of transition between the arm and the body, not a boundary, and while we may think of a nose as a conceptual unit it doesn&#8217;t offer much in the way of boundaries. In the two-and-a-half dimensional mode (fig. 8) noses were merely markings on the closed form of the head used to indicate the direction the head was facing. But the three-dimensional version was able to give a convincing account of the position of the head simply by the configuration of the outline, and the nose was drawing attention to itself by not being there.</p>
<p>In fact, it was the need to deal with noses that finally triggered the one modification of the underlying drawing mode, which was to have the program leave out parts of the outline&#8211;for any part, not just for noses&#8211;appropriate to the posture and the angle of view (fig. 10). That sounds simple; but since the functions that generated the outline were quite independent of the data from which the core figure was constructed, the modification required a total overhaul of how the data was represented in the program and a set of new functions to interpret the new form of the data.<br />
<img decoding="async" src="/images/coheniaai10.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 10. &#8220;Theo,&#8221; oil on canvas, 24 x 34, 1992.</span></p>
<p>But hold on, you may say, what&#8217;s wrong with drawing the outline of a nose the way you draw the outline of the head? Good question. The answer is that there&#8217;s nothing wrong with it; 20<sup>th</sup> Century art invented a number of drawing conventions that stretched the visual basis of representation a lot further than that. But it&#8217;s an important question; because if the program had no reason to think there was anything wrong either with a nose-less head or with a nose as a discrete unit stuck onto the front of the face, then there would be no reason why it should want to do anything about it. Once again, the relevant criteria rested on my own sense of stylistic consistency. Unless I could have given the program a rather sophisticated understanding of stylistic consistency&#8211;and there are few words in the vocabulary of art more difficult to understand than the word &#8220;style&#8221;&#8211;AARON could not have found anything wrong with what it had been doing, much less decide what to do about it.</p>
<p>I&#8217;ll come back to the issue of drawing, but now I want to turn my attention to color, which has occupied me for most of the past decade. For all of the images I&#8217;ve shown you so far I did the coloring myself, either directly on one of AARON&#8217;s drawings or by transferring one of its drawings onto canvas and coloring it there. I did it simply because I felt that the images needed color, yet from the very beginning there was a mild but persistent sense of absurdity about the fact that I seemed to be working for the program when I thought the program should be working for me.</p>
<p>So the major change that was initiated in the early &#8217;90&#8217;s, leading to AARON&#8217;s current role as an independent and capable colorist, actually had a long gestation period in which the pressure was building, not by virtue of something the program was doing wrong, but by virtue of something the program wasn&#8217;t doing at all. Why hadn&#8217;t I tried to deal with the problem earlier? Quite simply, I didn&#8217;t know how. Now I do know how and I also know <em>why</em> I hadn&#8217;t known how.</p>
<p>Before that point, I&#8217;d thought I was just another expert unable to tell the long-suffering knowledge-engineer about his expertise, though in this case the expert and the knowledge engineer were the same person. It turned out not to be that at all; it was that I had failed to bring into focus something that I now consider so fundamental as to be almost a law of nature. It is that behavior is always shaped&#8211;both enabled and constrained&#8211;by our physical and our mental resources. Even if I could have described how I did coloring myself&#8211;and I had tried long enough to know it was quite unlikely that I could&#8211;that description would necessarily have been in terms of my own resources, and they were resources the program didn&#8217;t have.</p>
<p>I mean that the human painter has a sophisticated visual system that is capable of registering very small variations in color; and one that permits continuous feedback, reporting the way every new brush stroke modifies the overall color signature of a painting. The result of working within the <em>capabilities</em> provided by that resource is that, almost without exception, human colorists proceed by what I&#8217;d call expert trial and error, continuously modifying what has been put down as each new element is added. In fact, almost nobody has ever made a painting by making all the coloring decisions in his head and then writing down the instructions for someone else to execute. Our resources do not include any marked ability to build a reliable, stable internal representation of a color scheme that has never been actualized for the eye to see. In fact, I doubt that there&#8217;s much correspondence between the way we imagine color and the way we see it; which in itself would constitute a major reason for the colorist&#8217;s trial and error strategy.</p>
<p>AARON doesn&#8217;t have a visual system and it isn&#8217;t clear that a visual system would help much with respect to coloring for the human viewer unless its color responses closely matched those of the human system. What the program does have is precisely the resource that the human colorist lacks, which is the ability to build a stable internal representation; the ability to design a color scheme &#8220;in imagination&#8221; as it were, without ever seeing anything on the screen or on the canvas.</p>
<p>The realization that AARON&#8217;s resources were different from, but not necessarily inferior to, the human colorist&#8217;s resources proved to be the key that unlocked a door for me. Not the solution, obviously, because I had still to devise a rule-based strategy robust enough and flexible enough to provide good performance in an unpredictable range of compositions. But at least I was able to stop asking the entirely irrelevant question about how human colorists use color, and concentrate upon two pertinent questions: firstly, what do human colorists use color <em>for?</em>: and, secondly, what are the <em>properties</em> of color that can be identified and manipulated to any desired end?</p>
<p>The first question&#8211;what do we use color for?&#8211;has almost as many answers as there are inventive colorists; which isn&#8217;t as many as you might think. But it seemed to me that the first and perhaps the only mandatory obligation on the use of color, binding upon all colorists, is that it has to clarify the physical reading of the image. All the rest&#8211;emotional content, decorative value, whatever&#8211;have to fall into place behind that first obligation. In terms of representational art, that means that color has to help determine the difference between objects and spaces, help to establish boundaries and say where things are in the implied three-dimensional world.</p>
<p>The second question has a simpler answer: there are only three primary properties of color that can be manipulated. One is the hue&#8211;where the sample falls on the spectrum. One is its purity&#8211;how much energy falls within how narrow a part of the spectrum. And one is its brightness&#8211;regardless of its hue and its purity, where the sample would fall on a scale from black to white. Since the eye functions primarily as a brightness discriminator, the most important property of color is not its hue, not its purity, but simply how light or dark it is.</p>
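<p>Those three properties map closely onto familiar color-space decompositions. A sketch, using standard luminance weights as a stand-in for the black-to-white brightness scale described above:</p>
<pre>
import colorsys

def color_profile(r, g, b):
    """Decompose an RGB sample (components in 0..1) into the three
    manipulable properties: hue, purity and brightness."""
    hue, purity, _ = colorsys.rgb_to_hsv(r, g, b)
    # Brightness: where the sample falls on a black-to-white scale.
    # The unequal weights reflect the eye's unequal sensitivities.
    brightness = 0.299 * r + 0.587 * g + 0.114 * b
    return {"hue": hue, "purity": purity, "brightness": brightness}

# Two strongly different hues at similar brightness separate poorly
# at a boundary; a brightness gap does most of the perceptual work.
print(color_profile(0.9, 0.1, 0.1))  # saturated red
print(color_profile(0.1, 0.5, 0.1))  # mid green
</pre>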
<p>Armed with some clear idea of what the program should be using color for and what it had to work with, I was able to formulate a reasonably robust rule-set for determining the profiles of colors to be used for the individual areas of a painting to give adequate visual separation at the boundaries. And by &#8217;93 (fig. 11) I was using not only AARON&#8217;s drawings, which hadn&#8217;t changed at all during this phase, but also its color schemes, as photographed off the monitor, to make paintings (fig. 12, 13).<br />
<img decoding="async" src="/images/coheniaai11.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 11. &#8220;Clarissa,&#8221; oil on canvas, 42 x 54, 1992.</span></p>
<p><img decoding="async" src="/images/coheniaai12.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 12. Screen image, 1993.</span></p>
<p><img decoding="async" src="/images/coheniaai13.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 13. Oil on canvas, 54 x 78, 1993.</span></p>
<p>The last part of this story is a bit more complicated.</p>
<p>As I emphasized at the beginning, no artist is very far from the problems of dissemination, exhibiting his work. And it didn&#8217;t take much, all the time I&#8217;d been working on color on the monitor, to recognize that a single monitor in an art gallery doesn&#8217;t make much of an exhibition. Given that I&#8217;d solved the problem with respect to drawing by building several generations of drawing machines, the answer now&#8211;the much too obvious answer&#8211;was to build a painting machine.</p>
<p>I won&#8217;t go into detail about the hardware, a large xy device that carried a small robot arm on its beam, complete with a simple hand. It could mix its colors from a palette of water-based dyes by having the hand open the taps on the dispensers for measured times. And it could pick up and use brushes of different sizes.</p>
<p>After making a drawing with waterproof black ink the machine would move a cup from one dispenser to another to mix the required color, deposit the cup where the hand could reach it as it moved around the table, pick up a brush and start filling in the shapes. When it had finished with a color it would empty the cup and wash it out ready for the next color, then wash out the brush and start over again. It took about six hours to complete a painting about four feet by six and in a show at the Computer Museum in Boston it did one painting a day for seven weeks.</p>
<p>On the software side, the first problem was to characterize the dye mixtures, about eleven hundred of them, in terms of the three fundamental properties. But characterizing them was one thing; mapping them onto the RGB specifications generated for the screen was another thing entirely. The guns that activate the screen are effectively light sources, so that mixing is additive, whereas physical materials act as filters and remove light: mixing dyes is subtractive (fig. 14). You get yellow on the screen by adding red and green, while if you mix red and green pigments you get dirty brown.<br />
<img decoding="async" src="/images/coheniaai14.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 14. Additive mixing and subtractive mixing.</span></p>
<p>There was another difference, too, that proved to be much more troublesome.</p>
<p>As I&#8217;ve said, the most important property of color is its brightness, which is manipulated on the screen by increasing or decreasing the output of the three guns. Assuming the dyes are being applied over white paper, you increase brightness by dilution, while you can&#8217;t decrease brightness at all except by adding a darker dye and thus changing the color.</p>
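<p>Modeled crudely, dilution over white paper pulls the dye toward paper white, so brightness can only rise; a sketch under that assumption (the real dilution behavior had to be measured):</p>
<pre>
def dilute(dye, dilution):
    """Interpolate a dye toward white paper; dilution in 0..1.
    A crude stand-in for measured dilution data."""
    return tuple(c + dilution * (1.0 - c) for c in dye)

def brightness(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

dye = (0.2, 0.3, 0.6)
for d in (0.0, 0.5, 0.9):
    print(d, round(brightness(dilute(dye, d)), 3))
# Brightness rises monotonically with dilution; there is no inverse
# short of adding a darker dye, which changes the color itself.
</pre>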
<p>By the time I&#8217;d made and measured the samples of the dye mixtures themselves, I had neither the resources nor the inclination to make enough dilution samples for each of those eleven hundred mixtures to be useful. I made some not very sound assumptions about how the dyes could be grouped and provided dilution data for the groups rather than for the individual mixtures, and the output during the show (fig. 15) varied from very good to clearly incompetent. I realized later that there was a simpler way of establishing dilution, but by then the program was already changing direction.<br />
<img decoding="async" src="/images/coheniaai15.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 15. Machine painting, ink and dyes on paper, 53 x 70, 1995.</span></p>
<p>There were three painting machines in all; the first served only to tell me how little I knew about engineering; the second was the one used in the Computer Museum; and the third, the much improved model I&#8217;ve been showing you, was filmed for several television programs but never left my studio until it went to its final home in the Museum of Computing History late last year. As part of a questionable balance-sheet of energy expended versus advantage gained, I had to recognize that the better the machine got, the more it risked directing attention away from the program instead of toward it. That risk was already evident in the Computer Museum show, where people were interested in the painting but delighted by the machine washing out its own cups&#8211;housework being the high point for robotics, obviously&#8211;and it was becoming increasingly apparent as I found more and more people referring to AARON as a robot artist, and as I got more and more requests for pictures <em>of</em> AARON rather than pictures <em>by</em> AARON.</p>
<p>This is not to say that the painting machine represented a dead end and had no significant role in AARON&#8217;s development. For one thing, I hadn&#8217;t gone to the trouble of building the machine to have it do simple-minded raster-scan filling and the algorithm I devised for generating the brush movements has become AARON&#8217;s standard way of filling in color.</p>
<p>And, to come back to drawing&#8211;or, rather, to the relationship between drawing and coloring&#8211;the most important gain was the realization that, given black lines to hold the colors in place, the choice of color wasn&#8217;t nearly as critical as I had thought it should be. On the machine the lines served a critical physical function; it simply wasn&#8217;t possible to have edges established by the colors themselves, since the water-based dyes would run into each other and confuse more than they clarified. Once I was no longer using the machine I could, in principle, do anything I wanted to do. What I wanted&#8211;what I had wanted as a painter for several decades&#8211;was to have color play a critical role, not merely a supporting role, in the structure of images. The black lines of AARON&#8217;s drawing were preventing that happening.</p>
<p>The current version of the program does not draw lines (fig. 16) and has gone some way toward that goal. But it was not enough simply to leave out the lines; once they were eliminated, the edges of forms had to be established by the colored brush marks themselves. However, my painting algorithm controlled the <em>path</em> of the brush, not the <em>edge</em> of the brush, and whichever brush the program chose there would always be some concavities in the outlines too small for the brush to get into; these had been left blank in the earlier versions and on the painting machine. The problem wasn&#8217;t intractable, but it did involve substantial modifications to the underlying algorithm.<br />
<img decoding="async" src="/images/coheniaai16.jpg" alt="" vspace="10" /></p>
<p><span class="Caption">Figure 16. Screen image, 2002.</span></p>
<p>As to the coloring part of the change: as I indicated earlier, constraining areas of color within black boundary lines minimizes the degree to which the reading depends upon the colors. When you have two patches of color in contact with each other, on the other hand, there are optical effects generated by the color contrast at the edges that were not there before. And in this case the difference was extreme enough that I had to make substantial changes, not to the values assigned to the colors, but to the rules that assigned those values.</p>
<p>That brings us up to date with respect to AARON&#8217;s history and now I want to make some general observations.</p>
<p>Firstly: each phase of AARON&#8217;s development has involved modifying, but never abandoning, what has gone before. So, for example, AARON still structures its forms the way it has always done, though the delineation no longer shows in the output.</p>
<p>Secondly&#8211;and this may be simply another way of saying the same thing&#8211;there have been no false starts in AARON&#8217;s development. That doesn&#8217;t mean that I&#8217;ve been clever enough always to pick the right thing to do, but simply that this is a game that has to be played with the cards I&#8217;ve dealt myself. Given that human artists don&#8217;t <em>suddenly</em> change their style from one day to the next or one decade to the next, we might assume it to be a characteristic of directed development for the human. For the program, I would assume it to be a rather difficult characteristic to emulate and, whatever other difficulties might arise, it certainly couldn&#8217;t emulate it unless it had appropriate access to previous states of the game.</p>
<p>That rules out AARON in its present form, if not programs in general, because AARON doesn&#8217;t have any archival record of its work. It doesn&#8217;t have one for the simple reason that I&#8217;ve never known what should go into one. As I suggested earlier, if the program is to move forward by assessing its efforts against various criteria, then it needs a representation of those aspects of the work that bear upon the criteria, including the decision history that resulted in the particular example.</p>
<p>Lots of problems here, the principal one being that AARON&#8217;s work is intended for human use and its criteria must consequently reflect what the human viewer responds to in an image. That is not the result of a single decision; it rests upon the interaction of thousands of decisions. Even if I were to accept the conventional view that the program would need to be trained by a human expert, it would have no way of knowing in detail what the human expert is responding to when he declares that one image is better than another, because the human expert doesn&#8217;t know.</p>
<p>As you can imagine, I&#8217;ve frequently been told that that&#8217;s exactly the kind of thing neural nets are supposed to do. That leads to the third general observation, which is that all of the changes of direction I&#8217;ve described, and all the others I haven&#8217;t described, have involved sizeable modification to the program itself, often down to the invention of completely new algorithms. I&#8217;ve never done anything that affected the program&#8217;s development significantly by adjusting the values of variables.</p>
<p>The fourth observation I want to make, and perhaps the most crucial with respect to my contention, is that no change of direction was ever made because there was anything wrong with the program&#8217;s output. There was nothing wrong with the output of AARON&#8217;s earliest phase and there is nothing wrong with the colored drawings that are now being generated on tens of thousands of machines around the world. Well, I don&#8217;t mean there was <em>nothing</em> wrong exactly; I mean there was nothing wrong aesthetically with the individual images; no failure to satisfy aesthetic criteria internal to the program that might have prompted it to make changes. To restate my original contention; the criteria that have been most in evidence in requiring changes of direction are at the upper end of the chain and external to the program itself.</p>
<p>I&#8217;d like to finish up this survey, then, not by looking at individual changes, but by trying to say a little about why AARON exists at all and why it took the path it did.</p>
<p>In all the museum shows I did with AARON using drawing machines, I offered its drawings for sale: $25 for a signed original; and hundreds were sold in places dedicated to showing things that ordinary people could not dream of owning. I&#8217;ve discussed the reason for building the drawing machines: I wanted to show what was happening, not just a few examples of the things that fell out in the process. But why on earth did I want to sell drawings at two orders of magnitude less than I could sell my own drawings for in the conventional art world?</p>
<p>You may be inclined to suppose it was an experiment in marketing, but clearly that couldn&#8217;t have been the case. I had no reason to suppose that I could sell a hundred of AARON&#8217;s drawings for every one of my own. With no precedent to look to, there were no reasons to suppose anything at all, except perhaps that making an anti-elitist move like this one might very well endanger my standing in the conventional art world. Which, of course, it has. After all, I could hardly expect approval for saying that a machine could do what we normally assume only talented human artists can do.</p>
<p>Why would an artist who had spent twenty years of his life making luxury objects for wealthy people feel compelled to make a socio-political move of this sort? I never had anything against wealthy people buying luxury objects&#8211;including the ones I made&#8211;and I still don&#8217;t. On the contrary, and regardless of why they do it, wealthy people are actually fulfilling a rather important role on behalf of the culture; we wouldn&#8217;t have museums full of great art unless someone had commissioned it and bought it. I do have to confess to having been quite fed up with an official art establishment that existed, as all establishments do, to maintain what has been established: which, in London at the end of the &#8216;sixties, included me. But being fed up doesn&#8217;t necessarily lead to computing, especially for someone with nothing of the sort in his background and at a time when computing meant sitting up all night punching IBM cards.</p>
<p>I&#8217;ll skip any amateur psychoanalytical speculation. Whatever the reasons for AARON&#8217;s existence and its history, it&#8217;s surely obvious that they are not to be found in the individual works produced by the program, but in my own background, my own history. What, then, would be involved for a program to have its own history of self-directed development?</p>
<p>Now I&#8217;m not much of a programmer and writing a program so that it can rewrite itself is way beyond my capacities; most of the time I don&#8217;t even know how to rewrite things myself without much stumbling and a great deal of debugging. If my more capable colleagues want to assure me that one can indeed write programs that can figure out new algorithms for themselves, rewrite themselves, debug themselves and assess the results by themselves, I&#8217;m quite prepared to believe them.</p>
<p>I am not yet prepared to believe, however, that a program can have the reasons for doing so: that it can embody enough of some equivalent to the chain of criteria that determines what human artists do that it could be capable of the kind of self-directed development I&#8217;ve been describing.</p>
<p>On the other hand&#8230;</p>
<p>My confidence in the logic behind this conclusion needs to be qualified, for it&#8217;s quite possible that I&#8217;m not thinking about this in the right way at all. So I think I must end by acknowledging that there may be other ways of thinking about things; and, in the process, clear up what may seem to have been an oversight. I mean that AARON is one of the very few programs in existence with any claim to creativity and I haven&#8217;t used the word once. Isn&#8217;t creativity, after all, precisely at the core of this discussion?</p>
<p>Creativity is a word I&#8217;ve never used if I could avoid it and I&#8217;ve certainly never made any such claim on AARON&#8217;s behalf. On the contrary, while friends and well-wishers like Margaret Boden and Bruce Buchanan have continued to cite AARON as a prime example of programmed creativity, I&#8217;ve maintained steadfastly that it&#8217;s nothing of the sort, citing precisely the fact that the program is incapable of the kind of development we would require of a major human artist. Of course, that could itself be regarded as a definition of sorts; one that not too many human artists could satisfy, by the way, and one that may not map easily onto other domains, which spread a long way down from the Einsteins and Michelangelos once we leave the more refined strata of academic research.</p>
<p>If I had to place AARON somewhere on a spectrum with creative napkin folding at one end and general relativity at the other, I would certainly place it well above creative napkin folding, so perhaps it&#8217;s time to soften my hard-nosed position, that AARON is not creative at all, and concede that it&#8217;s creative with a small &#8216;c&#8217; but not Creative with a big &#8216;C&#8217;. That little dance step brings a nagging question into focus, however, and I don&#8217;t think we know how to answer it. Is there, in fact, a continuous spectrum from creative napkin folding to general relativity? Is creativity with a small c a small example of Creativity with a big C, or are the two entirely different? If they are entirely different, then we haven&#8217;t even begun on the search for programmed Creativity&#8211;with a big C&#8211;and I certainly don&#8217;t have any clue as to how to begin.</p>
<p>If there is a continuum, then that&#8217;s another story. I&#8217;m reminded of a conversation a long while ago with my good friend Ed Feigenbaum. I&#8217;d written something for an exhibition catalog text along the lines that machines couldn&#8217;t think, but they were capable of making extremely complex decisions. Ed was quite upset and insisted that machines could think. I objected that he was voicing an article of faith rather than a demonstrable fact and he replied that when machines make much more complex decisions than they were then able to make, we would all say they were thinking and we wouldn&#8217;t give the matter another thought.</p>
<p>And so it has come to pass. And it may well be that as AARON becomes more complex, as the range of its abilities, its knowledge of the things in the world and the determinants of what it should do about them become much more complex than is now the case, and without regard to the nature of the complexity, perhaps even I will be prepared to say that it&#8217;s Creative&#8211;with a big C&#8211;and not give the matter another thought.</p>
<p>I have every reason to hope so, given that I don&#8217;t know how else to proceed.</p>
<p>On the other hand&#8230; who knows what tomorrow will bring&#8230;?</p>
]]></content:encoded>
			<wfw:commentRss>https://www.thekurzweillibrary.com/decoupling-art-and-affluence/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
