Achieving Artificial General Intelligence

The Technological Singularity, by Vernor Vinge

I had hoped that this discussion of IA would yield some clearly safer approaches to the Singularity. Alas, about all I am sure of is that these proposals should be considered, that they may give us more options. But as for safety — some of the suggestions are a little scary on their face. We humans have millions of years of evolutionary baggage that makes us regard competition in a deadly light. Much of that deadliness may not be necessary in today’s world, one where losers take on the winners’ tricks and are co-opted into the winners’ enterprises.

Our ability to achieve this understanding, via either the AI or the neuroscience approaches, is itself a human cognitive act, arising from the unpredictable nature of human ingenuity and discovery. Progress here is deeply affected by the ways in which our brains absorb and process new information, and by the creativity of researchers in dreaming up new theories. It is also governed by the ways that we socially organize research work in these fields, and disseminate the knowledge that results. At Vulcan and at the Allen Institute for Brain Science, we are working on advanced tools to help researchers deal with this daunting complexity, and speed them in their research.

In physics, a singularity lies at the center of a black hole: a point of effectively infinite density where matter is compressed into an infinitely small volume. To provide a bit of background, John von Neumann — a Hungarian American mathematician, computer scientist, engineer, physicist and polymath — first discussed the concept of a technological singularity in the mid-20th century. Since then, many authors have either echoed this viewpoint or adapted it in their writing. More recently, entrepreneurs and public figures such as Elon Musk have expressed concerns that advances in AI could lead to human extinction, and proponents of the singularity argue that innovations in genetics, nanotechnology and robotics will lay its foundation during the first half of the 21st century.

Today’s machines lack the ability to make decisions outside of their programming or to use intuition. Without self-awareness and the ability to extrapolate from available information, machines remain tools. By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045. I. J. Good captured the essence of this runaway process, but he did not pursue its most disturbing consequences.
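The flavor of a Kurzweil-style calculation can be sketched as a simple exponential extrapolation: assume compute per dollar doubles on a fixed schedule and find when it crosses some brain-equivalent threshold. The starting figures below are illustrative assumptions for the sketch, not Kurzweil's published numbers.

```python
def crossover_year(start_year, start_flops, brain_flops, doubling_years):
    """Return the first year at which exponentially growing compute
    (doubling every `doubling_years`) reaches `brain_flops`."""
    year, flops = start_year, start_flops
    while flops < brain_flops:
        year += doubling_years
        flops *= 2
    return year

# Assumed for illustration: 1e12 FLOPS per $1000 in 2005, a brain-equivalent
# threshold of 1e16 FLOPS, and a two-year doubling period.
print(crossover_year(2005, 1e12, 1e16, 2))  # -> 2033
```

Shifting any of the assumed inputs shifts the answer by decades, which is exactly why published singularity dates vary so widely.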

I. J. Good’s “intelligence explosion” model predicts that a future superintelligence will trigger a singularity. Moore’s law, based on the observation that the number of transistors in a dense integrated circuit doubles about every two years, implies that the cost of computing halves roughly every two years as well. However, most experts believe that Moore’s law is coming to an end during this decade. Though there are ongoing efforts to keep improving application performance, it will be challenging to sustain the same rates of growth. While machines can seem dumb right now, they could grow quite smart, quite soon.
