Chapter Four: Achieving the Software of Human Intelligence

IT’S BEEN SO LONG, but I have not given up! I had a dream that I finished The Singularity is Near and it was such a tremendous feeling.

“Software of human intelligence”…now I am NOT one for comparing the brain to a computer, and I greatly appreciate that Kurzweil, too, doesn’t treat the comparison as a simple one. So the previous chapter focused on the computational capacity of the human brain, or the “hardware”. The “software” is a new level that incorporates the subtle and artistic intelligence that the brain is capable of. Yes, you can have circuits that produce effects A then B then C, but Kurzweil (and I) argue that this is not enough to aptly describe the brain. In order to reverse-engineer the brain, we must also understand the subtleties that affect and organize our intelligence and our conscious experience.

It seems like a daunting task to try to replicate every neural connection in our brain. How could we ever simulate the capability of a brain when we don’t even fully understand it? A brilliant way to take the stress out of perfecting every detail is to use nature’s self-organizing paradigm to our advantage. The brain is a tool to hold information, designed by nature. It took billions of years to develop the brain, and we’re only a decade or so from COMPLETELY understanding it (even though modern science is only a couple hundred years old). In order to advance our technology, we must embrace biology’s self-organizing paradigm.

Imagine computers that are capable of evolving over time! This is not an outrageous idea (though it does require a good deal of optimism). Nature does not consciously direct evolution; it sets up a paradigm where systems intrinsically evolve! Soon our technology will catch up to this natural logic, at least according to Ray Kurzweil.

How the Human Brain is Different from a Conventional Computer

  • The brain’s circuits are very slow, so imagine the improvements available with future technology.
  • Massively parallel circuitry
  • The brain combines analog and digital phenomena, which is unique. For example, axon firing can be represented digitally, but it is in actuality an analog process.
  • The brain rewires itself
  • Most of the details in the brain are random, and more efforts are being made to understand neural function through complexity theory and chaos theory.
  • The brain uses emergent properties: “Intelligent behavior is an emergent property of the brain’s chaotic and complex activity. Consider the analogy to the apparently intelligent design of termite and ant colonies…Despite their clever and intricate design, ant and termite hills have no master architects; the architecture emerges from the unpredictable interactions of all the colony members, each following relatively simple rules.”
  • The brain is imperfect
  • We contradict ourselves, and our brain allows us to do so. Our evolution of intelligence and technology depends on the internal diversity of our conscious experience. The fact that our brain can hold contradictory views does not necessarily mean failure, because it serves as a pathway to superior outcomes. Thus, conscious evolution.
  • The brain uses evolution
  • The patterns are important
  • The brain is holographic
  • The brain is deeply connected
  • (I just realized what a terribly strange word “brain” is)
  • The brain does have an architecture of regions
  • The design of a brain region is simpler than the design of a neuron
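That bullet about emergent properties deserves a toy demo. Here’s a little sketch of my own (not from the book, and purely for illustration): Langton’s ant, a famous two-rule automaton. A single “ant” flips cells and turns; there is no master architect anywhere, yet from those local rules a global pattern eventually emerges, which is exactly the ant-colony point Kurzweil is making.

```python
def langtons_ant(steps):
    # Two local rules, no global plan. After roughly 10,000 steps the ant
    # famously settles into building an orderly, repeating "highway".
    black = set()                              # cells currently flipped black
    x, y = 0, 0                                # ant position
    dirs = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # up, right, down, left
    d = 0                                      # start facing up
    for _ in range(steps):
        if (x, y) in black:                    # black cell: turn left, flip white
            d = (d - 1) % 4
            black.remove((x, y))
        else:                                  # white cell: turn right, flip black
            d = (d + 1) % 4
            black.add((x, y))
        dx, dy = dirs[d]
        x, y = x + dx, y + dy
    return (x, y), black

pos, black = langtons_ant(4)
print(pos, len(black))  # after 4 steps the ant is back at the origin
```

Run it long enough and the chaos resolves into structure all on its own, with nobody in charge.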

Peering into the Brain

The resolution of brain-imaging is constantly being pushed further. When this book was published back in 2005, physicists were already pursuing intracellular surgery, where a single mitochondrion can be destroyed without affecting any other part of the cell. Scientists have successfully severed interneuronal connections without damaging surrounding tissue or function. This technology is very young, but it’s important to take pause and understand the state of the field, because it’s moving right along.

We are heading toward technology that enables us to literally observe neuronal function. As a young neuroscientist, this is obviously incredibly exciting. Right now, brain-imaging falls plenty short of what we need. Most techniques are either poor in temporal resolution and strong in spatial resolution, or vice versa. Overall, there is room for improvement, and I am absolutely thrilled to hear Kurzweil’s perspective on the movement of this technology.

The BBB

Kurzweil predicts that by the 2020s we will be able to scan the brain from the inside using nanobots, which will be the size of human blood cells or smaller. The nanobots would use a network of high-speed communication, allowing nanobots and computers to all communicate over a “wireless local area network”. One of the bigger challenges in the application of nanobots to brain scanning is interfacing with the blood-brain barrier (BBB). The BBB is a semipermeable barrier formed by capillary walls joined by tight junctions, which keep unwanted fluids and molecules circulating in the blood from entering brain tissue.

Here are some methods that Kurzweil suggests in order to cross the BBB with nanobots, some already in development and some that may be developed over the next quarter century:

  • Make nanobots small enough, which is not very likely considering the nanobot would have to be twenty nanometers or less in diameter (the size of 100 carbon atoms).
  • Keep nanobots in bloodstream with robotic arm that extends through BBB
  • Keep nanobots in capillaries and use noninvasive scanning
  • Noninvasively emitting focused signals
  • Creating nanobots that can break through BBB by making a hole, and then immediately repairing the damage, yikes.
  • Triggering/simulating natural processes by which the BBB opens up; this technique is of great interest in cancer-treatment studies.
  • Bypass bloodstream and BBB by injecting nanobots directly into brain

Nanotechnology could/will greatly improve brain-imaging resolution, in ways that I have a hard time even imagining. And we won’t even need to observe every neural connection to fully reverse-engineer the brain. We just need to sufficiently understand the basic patterns of each region, which according to Kurzweil will be possible by the 2020s. Time is ticking.

I know a lot of my peers are hesitant to embrace the idea of nanobots in our brains. I hear it, I get it, I respect it. There is a lot at stake as we move forward through the era of biotechnology and into nanotechnology.

Here’s my question: is it better to spend time fearing and half-heartedly preventing the progress of nanobots in our brain, or is it better to dedicate our efforts to ending diseases such as Alzheimer’s or cancer? We have the power and the opportunity to say GOODBYE to neurological disease. Is this not the goal of mankind, to forever improve the human experience?

We have years, maybe even decades to decide what the moral and logistical limits are to improving human well-being. I gotta say though, advancements are moving fast, and before you know it, I think we will have accepted the natural gift that is technology, BECAUSE WE ALWAYS DO.

Will robots inherit the earth? Yes, but they will be our children.
– Marvin Minsky, 1995

Chapter Three: Achieving the Computational Capacity of the Human Brain

BACK TO THE SINGULARITY IS NEAR. Kind of a dry chapter, but I still want to touch on a few basic ideas.

DID YOU KNOW ABOUT NANOTUBES?

So I thought nanotechnology was just “really small machines,” but little did I know. Nanotubes are made by assembling molecules into three-dimensional structures that can perform logic tasks. MOLECULES.

Granted, there are many skeptics of nanotechnology, but research efforts have progressed greatly over time. By the early 2000s, IBM had presented a functional integrated circuit built from one thousand nanotube-based transistors.

All you need to understand about logic tasks for now is there is an input, or a “question”, and then there is an output or “answer”.
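To make that concrete, here’s a tiny sketch (my own toy example, not from the book): a NAND gate as a single question-in, answer-out logic task, plus an XOR built entirely out of NANDs to show how small logic tasks compose into bigger ones.

```python
def nand(a, b):
    # One "question in, answer out" logic task. NAND alone is universal:
    # every other logic gate can be built from combinations of it.
    return 0 if (a and b) else 1

def xor(a, b):
    # XOR assembled purely from NAND gates (the standard 4-gate construction).
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

print(xor(0, 1))  # -> 1
```

Same idea whether the gate is etched in silicon, built from a nanotube, or (as below) run on molecules.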

Currently (or at least as of the book’s publication in 2005) the hope is to create electronic/mechanical systems capable of self-replication. This idea is of course inspired by biology. After all, DNA is biological “technology”.

Computing with DNA

DNA is nature’s own nanoengineered computer, and its ability to store information and conduct logical manipulations at the molecular level has already been exploited in specialized “DNA computers.”

I had NO IDEA about this one. DNA computers? For real?

Basically, a strand of DNA will be replicated trillions of times using PCR (the polymerase chain reaction), and then all of those strands will be placed together in a test tube where they naturally link together. DNA sequences on the strands can represent “different symbols, each of them a possible solution to the problem.”

THEN, SOMEHOW each strand is tested at the exact same time using “specially designed enzymes that destroy strands that do not meet certain criteria.” This may be an abstract idea, but don’t get too stuck on it. The point is, scientists are able to manipulate biology to perform desired tasks. There are incredible limitations to using DNA computers, but I’m mostly just in awe of the possibility, even if I barely understand how it works. When you break down systems in terms of an input and an output, DNA absolutely can be a manipulated technology in the future. What would it mean to control functionality (to an extent) on a genetic level? OH the implications.
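Here’s a purely in-silico caricature of the idea (my own sketch; real DNA computing, like Adleman’s experiments, is far more involved): “replicate” every candidate strand, then let a filtering step play the role of the enzymes that destroy strands failing the criteria.

```python
from itertools import product

def dna_style_search(n_bits, criteria):
    # Generate every possible "strand" (here, a bit-string), then keep only
    # the ones that survive the criteria -- the filter stands in for the
    # enzymes that destroy non-matching strands, all "tested" in parallel.
    strands = product([0, 1], repeat=n_bits)
    return [s for s in strands if criteria(s)]

# Toy criterion (a stand-in for a real constraint): exactly two 1-bits.
survivors = dna_style_search(4, lambda s: sum(s) == 2)
print(len(survivors))  # -> 6, i.e. C(4,2) strands survive
```

The trick of DNA computing is that the test tube evaluates all the strands simultaneously, where this sketch loops through them one at a time.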

Computing with Spin

In addition to their negative electrical charge, electrons have another property that can be exploited for memory and computation: spin.

Again, this computation is possible using logic tasks. An electron, in theory, can spin “up” or “down” (you’ll have to excuse the spatial terminology, chemists and physicists).

An important role for spintronics in the future of computer memory is clear, and it is likely to contribute to logic systems as well. This spin of an electron is a quantum property (subject to the laws of quantum mechanics), so perhaps the most important application of spintronics will be in quantum computing systems, using the spin of quantum-entangled electrons to represent qubits…

[[[ qubit = quantum bit ]]]
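To give the up/down idea a little flesh, here’s a minimal sketch of my own (using the standard textbook convention of a qubit as two amplitudes; no actual spintronics involved): unlike a classical bit, a qubit can sit in a superposition of both states at once.

```python
import math

# A single qubit as two amplitudes: spin "up" = [1, 0], spin "down" = [0, 1].
up = [1.0, 0.0]
down = [0.0, 1.0]

# Equal superposition of up and down; measurement probabilities come from
# squaring the amplitudes (the Born rule).
s = 1 / math.sqrt(2)
superposition = [s * up[0] + s * down[0], s * up[1] + s * down[1]]

prob_up = abs(superposition[0]) ** 2
prob_down = abs(superposition[1]) ** 2
print(round(prob_up, 3), round(prob_down, 3))  # -> 0.5 0.5
```

A classical bit is always exactly 0 or 1; this in-between state is what quantum computers exploit.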

HERE’S A KURZWEILIAN PREDICTION, and I really do believe that Kurzweilian should be an adopted term.

Based on the [analyses], it is reasonable to expect the hardware that can emulate human-brain functionality to be available for approximately one thousand dollars by around 2020…By 2050, one thousand dollars of computing will exceed the processing power of all human brains on Earth. Of course, this figure includes those brains still using only biological neurons.

THIS GUY IS SO BOLD. Here I will mention that the only true education is one that questions every claim and every idea. As a scientist, skepticism is a tool. That being said, I’m curious to see what will happen once 2020 comes around, not to mention 2050. We live in exciting times.

Setting a Date for the Singularity

So in case you’re just tuning in, the general definition of Kurzweil’s Singularity is when technology transcends biology. It is essentially the basis of his theory, the meat and potatoes.

The Singularity is predicted to occur in the year 2045.

I set the date for the Singularity – representing a profound and disruptive transformation in human capability – as 2045. The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.

Unfathomable is one way to describe this prediction, but considering that we are at the “knee of the curve” in terms of technology, I’d say within ten years there will be MUCH more generally accepted evidence (not necessarily proof) for this dramatic shift in human civilization. Kurzweil predicts that by 2020 computing will not be limited to “rectangular devices”, but will be integrated into most of our environment. “Computing will be everywhere: in the walls, in our furniture, in our clothing, and in our bodies and brains.” This is already happening, on fridges, in automobiles, etc., but imagine how much room there is to expand.
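Just to feel the scale of these claims, here’s some back-of-the-envelope arithmetic of my own (the doubling period is an assumed illustration, not Kurzweil’s exact model):

```python
def growth_factor(years, doubling_period_years):
    # Exponential growth: the total multiplier after some number of doublings.
    return 2 ** (years / doubling_period_years)

# If price-performance doubled every year, then 2005 -> 2045 would mean:
print(f"{growth_factor(40, 1):.2e}")  # -> about 1.10e+12, a trillion-fold
```

That’s the whole “knee of the curve” point: each doubling looks modest, but forty of them stacked together is a number our intuition simply refuses to process.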

Here’s a great plug to get you to check back in for Chapter 6, where Kurzweil will “examine the computational potential of an Earth-size planet and computers on the scale of solar systems, of galaxies, and of the entire known universe.” He *warns* that these paradigms are arriving a lot sooner than most people imagine. EEEEEEEEKK!!!!!!!

AGAIN, I CANNOT OVERSTATE HOW INCREDIBLY MIND-BLOWING THESE PREDICTIONS ARE, DON’T YOU AGREE? Yes yes, they aren’t foolproof, but PLEASE can you just IMAGINE? These ideas, they make me feel limitless, endless…absolutely eternal. I believe in the possibility that we will live hundreds, if not thousands of years. I believe in the possibility that we will have to redefine the idea of “living”. I believe in the possibility that communication will cease to be restricted by language. Every day, I continue to fantasize about the endless possibilities. I am wholly and almost neurotically excited for the future. Aw man, it’s just gonna smack everybody in the face like WHOA. Whether or not Kurzweil’s predictions are exactly spot on, I’m gonna go ahead and predict we will not be living ANY SORT OF LIFE that our grandparents lived. Certainly not…

The future is so vast.

The Universally Integrated System

Minority Report

Minority Report concepts not so crazy, eh?

Recently I have noticed that the concept of a universally integrated system (UIS) is not generally understood, or simply thought of by the average person. It has become abundantly clear to me that eventually all technology will be integrated. This is already the popular trend, whether noticed or not.

What I mean is that one day….ONE DAY, we will not have to mess around with so many goddamn crazy dumb cords.

But that’s only one tiny feature of the awesomeness that a UIS would give! The further we travel into the era of technology, the more individual pieces of technology are invented, and the more those individual systems diverge from one another.

The leaders of our technological future know this well. In fact, we do see more systems becoming more compatible with each other all the time. But when you actually see the whole scope of possibilities that a UIS would promote, the advancements made so far appear absolutely minute. Doesn’t mean efforts aren’t in the right direction, all I mean to say is OMG HAVE YOU THOUGHT OF ALL THE POSSIBILITIES?!?!?! We might not even have enough existing language to describe all of the possibilities. Think about THAT.

For the sake of a contemporary example, let’s just focus on publicly common systems, such as Apple’s. What have been the most convenient advances to your Apple phone or computer? Well, with every system update it becomes significantly easier to connect and share information between Apple devices. But imagine! What if brands and device categories no longer bordered these systems? IMAGINE *gasp* it didn’t matter that you left your computer adapter at home. Imagine it didn’t matter that your document was saved in a “certain format”; imagine instead your document just was.

These ideas are absolute baby steps compared to what is possible. So here’s my homework for you: open your mind to scrutinizing technology. I feel that there is a stale air under which some people assume technology advances by order of the corporation. Technology does not have to be this way; we can make incredible things possible as long as the general public THINKS about them. So next time you can’t remember what your goddamn password is for THAT ONE SITE, allow yourself to imagine what could be in place of a password. Next time you wish you could tap your debit card on the subway to pay your fare, wonder how a system would be able to actually support that action.

Brainstorm, people. There is work to be done. The future is ours to create. OR AT LEAST JUST TO THINK ABOUT. 

Why we hate Hipsterlectuals.

What’s a hipsterlectual?

Quite frankly it’s a poor title for a seemingly title-less group of people! But I will use it in this post for the sake of consistency.

This post begins with a conversation I started with my friend Evan the other night. We got on the topic of his experience in San Francisco, and the genre of people he met there. Which led to a discussion about the genre of people that live on the West coast, particularly in the Pacific Northwest.

Here’s a list of words that most accurately describe this group of people (who, btw, live all across the country): progressive, natural, liberal, sustainable, vegan, counterculture, organic, co-ops, outdoorsy, crunchy…catch my drift?

So what’s your instant impression of these “hipsterlectuals”?

If you’re a waiter, I’m sure you’re pretty annoyed when one of them asks you where the chicken on the menu was raised.

Which brings me to my next point: the hipsterlectuals are INCREDIBLY easy to mock. They are widely considered an obnoxious and snobby group of new-age idealists.

For some reason, when discussing this group with my friend, for the first time I asked myself WHY these people are so easy to mock. I personally know many people who may be categorized in this genre, and I DEEPLY respect and admire their values and ambitions. I mean, there is NOTHING I find obnoxious about the way these individuals live. Granted, any one person can be any varied degree of obnoxious, no matter whose values they share, so there’s some truth to the annoyance many people feel about certain individual hipsterlectuals. Overall though, I could only dream of having standards as high and convictions as strong as these people do.

And yet, why are these people who promote progressive ideas and values condemned to comedic intolerance? Their overall unifying characteristics are that they care about the environment, contribute to a healthier, more balanced future, and limit personal indulgences to socially and environmentally responsible decisions.

What the hell is so goddamn funny about that!!!????

And yes! It is funny to mock them! The show Portlandia hits the nail right on the head. If you haven’t seen the show, WATCH IT. It’s absolutely brilliant. The best characteristic of the show is how precisely the characters portray the hipsterlectual stereotypes, in every variety they exist in. [[It's clear that in order for Fred and Carrie to so aptly mock these stereotypes, they are well acquainted with the existing subculture.]]

AND SO I FIGURED OUT WHAT MAKES THEM SO FUNNY/OBNOXIOUS/STEREOTYPABLE!!

The lifestyle is impractical, BUT ONLY because of the way our culture has forced us to live.

A waiter who is asked where the chicken on the menu was raised finds the customer obnoxious BECAUSE the waiter works at an establishment where he cannot provide for the needs of the customer. If the restaurant’s mission was to inform customers where the food they put in their mouth comes from, the waiter WOULD NOT feel annoyed.

If we lived in a culture that valued sustainability, we would not feel the need to MOCK those who devote endless hours to sustainable lifestyles; we would not be annoyed by the vegan whom we can never properly cook for.

We simply would not hate them so.

Instead we live in a culture where we are not allotted time to care about these core issues. We do not give up our car because our job is miles away across busy roads and intersections. We do not care to eat quality food because we don’t have enough money to afford it, or we don’t have the time or informational resources to make responsible choices. We do not live sustainably because we wouldn’t know where to start, because we are not educated about it from a young age.

We do not ask where our meat comes from; we cross our fingers and hope it comes from somewhere good. We hope that our lives are not manufactured by an unjust and dishonest world. We hope someone is taking care of us the way we would take care of our own children.

But we are not being taken care of. THAT is what the “hipsterlectual” movement stands for. That easily mockable group is dedicated to fueling the struggling advancement of humanity. They stand for the inconvenient truths and ask the annoying questions that every human being should inherently want to know the answers to anyway.

I’m not suggesting they should not be laughed at ever (because that would mean Portlandia isn’t one of the greatest shows of all time)… I merely suggest we should constantly be asking the why’s and the how’s. There’s so much to learn about our human nature, and our education begins with our own questions. Imagine that.

For instance, where the hell DID this chicken come from???!!!

VEVO CEO Tries To Explain Their Hypocritical Act Of Piracy At Sundance

annaeneumann:

Check it.

Originally posted on TechCrunch:

Yesterday I reported on a bizarre incident I witnessed at Sundance last month: VEVO, the music portal owned by some of the biggest record labels in the US, had a pirated NFL playoff game playing on screens throughout its ‘PowerStation’ venue.

The incident was immensely hypocritical, given that VEVO is owned in part by Universal Music Group and Sony Music Entertainment (with EMI licensing its content to the service) — the same music labels that have made a habit of attacking consumers over alleged acts of piracy.

Now VEVO CEO Rio Caraeff has written a post to the VEVO blog, where he tries to explain what happened. In it, he writes that the game was accessed and streamed by a guest of the event without VEVO’s knowledge.

A guest of our lounge asked for an NFL game to be aired. We said no. There was a laptop hooked up to VEVO.com…


The Dance of the College Student

I would like to preface this post with this: my ego is huge. Now, over the years I have learned to balance out my ego with greater respect for humility, rationale and empathy. It’s a work in progress.

What I mean to say is, I am terribly annoyed by college students (even though I am one).

I do realize that “college students” is a broad term, and of course I am not talking about every college student (and I do mean every, Doogs). It is my ego that is annoyed by college students, not my better half.

As I may have mentioned before, the philosophy course I am taking (studying ethics and morality) is discussion-centric. The professor merely chooses a raised hand, and answers questions or debates statements given. It is an EXCELLENT design for a course; in fact it may be the most effective one. You must participate, you are held accountable for the information, and you discuss the material for several hours every week in discussion, not lecture.

So yeah! Great class topic, design, professor! How are the students?

The real question is what do the students do…and I tell you, they dance.

A week or so ago, someone brought Santa Claus up. We were talking about “truth” and “lying”, and someone brought up Santa, not knowing that every other Dick and Jane in the room wanted to debate the moral implications of Santa Claus as well. I don’t know if you can tell already, but I did not care to debate the morality of Santa.

Kindly and patiently, Doogs would reply in different formats, explaining how Santa acts as a tool for moral growth. Santa teaches kids to behave, in order to receive a reward. The hope is that by the time a kid finds out Santa isn’t real, they no longer need the tool to be a good person.

Doogs calls Santa “a part of the dance”. Everyone does it, and no dance is quite the same.

The dance describes the universal process of growing up. Furthermore, the dance describes the games, rituals and methods people use in a culture or as an individual to grow consciously. Beautiful idea, I definitely think about this term more and more now that there’s a quick title for it (thanks Doogs).

And then some kid raised his hand, because he needed to profess that he did not understand WHY people need the “Santa dance”. Why can’t they just see the whole picture like I can? I’m an engimaneer!!!!

[{My head screamed ddduh duddh duh dudddh DURRRRRRPPP}]

Now this is the part when my ego got me in a stranglehold:

I kindly raised my hand to reply, and said something very much like “So basically what you’re saying is that you don’t understand why you are smarter than everyone else, or why people aren’t as quick to learn as you are?”

(Not proud, but I did my “best” to balance out that verbal slap with more positive remarks about something else he said that I pretended to respect. I am constantly working to monitor and restrict these loaded, bold statements. Like I said, work in progress. I have to admit, I am a winner when it comes to being a verbal bitch.)

But as SOOOOON as I was done talking, I realized the absolute irony in what had just happened. After stewing intensely about how my peers didn’t understand the need for the dance, I completely ruined someone else’s dance! This poor engineer, who was definitely not the idiot I made him out to be, is learning how to learn, or should I say “dancing to learn”, and all I did was tell him that I was smarter and quicker than he is. DO YOU SEE THE CIRCULARITY IN THIS SITUATION? He poo-pooed the dance, and then I poo-pooed his very dance to dance! Absolutely despicable.

What did I learn from this whole event?

“Anna, you shut your mouth when you know you’re right.”

College students have the right to ask dumb questions, and they have the right to take their time to learn, as long as they are actively trying to learn. It doesn’t prevent me from being apeshit annoyed with them, but I’d love to reach a stage in my life where I never get frustrated over other people’s frustrations.

The freedom of the dance is required for the quest of truth. To guide a dance is to be the dance partner who neither leads nor follows, but shares. It is too easy to ruin the dance by stepping on people’s toes.

Snot-nosed little punks.

annaeneumann:

As a new/future researcher, this is an incredibly exciting post. I have wondered and hypothesized about this advancement before, and I’m happy to read other scientists discussing the implications.

Originally posted on TechCrunch:

This guest post was written by Richard Price, founder and CEO of Academia.edu — a site that serves as a platform for academics to share their research papers and to interact with each other.

Instant distribution


Many academics are excited about the future of instant distribution of research. Right now the time lag between finishing a paper, and the relevant worldwide research community seeing it, is between 6 months and 2 years. This is because during that time, the paper is being peer reviewed, and peer review takes an incredibly long time. 2 years is roughly how long it used to take to send a letter abroad 300 years ago.

Many platforms are springing up which enable research distribution to be instant, so that the time lag between finishing a paper, and everyone in the relevant research community worldwide seeing it, is measured in hours and days, rather than months…

