
Singularity - A discussion thread


Recommended posts


I think exactly the same thing.

The more technology and development we have, the greater the chance that we wipe ourselves out.

Take the Stone Age as an example: even if one tribe decided to exterminate everyone else, it would have died out long before it managed to do so.

Today a certain president closely related to the apes only needs to press a button, and most of us are gone.

Many also say there is a 30% risk that all of humanity will be gone within the next 100 years.

These risks cannot really be measured, but even if there were only a 1% risk of extinction, we should take it seriously.
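A rough back-of-the-envelope sketch (my own numbers, purely illustrative) of why even a small probability deserves attention:

```python
# Back-of-the-envelope arithmetic (assumed numbers, not from the post):
# expected lives lost for the risk figures mentioned above.
population = 7_000_000_000           # assumed world population, for illustration
for risk in (0.01, 0.30):            # the 1% and 30% figures mentioned above
    expected_loss = risk * population
    print(f"risk {risk:.0%}: expected loss ~{expected_loss:,.0f} lives")
# Even at 1%, the expected loss is tens of millions of lives.
```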

 

The reason this is hard to take seriously is that most of the risks that could wipe us out do not exist yet.

The first risk that could wipe out all of humanity was the atomic bomb.

 

[Image: bostrom_2.png]

 

Nick Bostrom wrote a very interesting essay about exactly this and has also given a short video talk on the subject.

 

Existential risks

 

 

Here he argues that the threats we are most likely to be wiped out by are not the problems we face today. In other words, the problems people struggled with 100-1000 years ago are not problems for us.

He also divides the catastrophes into two groups: catastrophes caused by humans (nuclear weapons, global warming) and catastrophes caused by nature (comets, volcanoes, earthquakes, etc.). He concludes that since humanity has survived all natural catastrophes so far, it is not very likely that they will wipe us out. What is more likely to wipe us out is that we fail to use new technology properly, or that new technology ends up in the hands of people who want to cause harm with it. Part of the problem is that they may not know the harmful effects of the technology (AGI, AI, nanotechnology, fusion, etc.).

 


Aubrey de Grey believes that eternal life may be within reach this century.

 

Wikipedia

The primary life extension strategy currently is to apply available anti-aging methods in the hope of living long enough to benefit from a complete cure to aging once it is developed, which given the rapidly advancing state of biogenetic and general medical technology, could conceivably occur within the lifetimes of people living today, at approximately 2020 according to transhumanist Raymond Kurzweil.

 

We might get eternal life within 12-40 years. I do see one big problem with this, though, namely overpopulation. What happens if 1 billion people take advantage of it?


Aubrey de Grey

 

At first I thought he was the new Einstein, since he gives a short and clear explanation of how we might stop aging.

Then I read some of his scientific essays and watched some of his short (20-minute) lectures.

 

His theories are exactly that: theories. They have been neither tried nor tested, but they sound very good on paper.

Whether he is the new Einstein remains to be seen; for now he has been awarded 2 million dollars to test his hypotheses.

Aubrey de Grey is no magician either. He has no miracle cure that will make us live 1000 years. What he says is that humans will first extend their lifespan by perhaps 25-50 years. Before those people reach that age, technology will have advanced far enough that lifespan can be extended again by perhaps another 50-100 years.
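A tiny toy model (my own assumptions, not de Grey's actual argument or numbers) of that catch-up idea: if each decade of research adds more remaining years than the decade uses up, your lifespan keeps getting pushed ahead of you.

```python
# Hypothetical "longevity escape velocity" sketch; all numbers are assumptions.
def years_survived(start_remaining=40, gain_per_decade=12, horizon=200):
    remaining, lived = start_remaining, 0
    while remaining > 0 and lived < horizon:
        step = min(10, remaining)          # live through (up to) one decade
        lived += step
        remaining -= step
        if step == 10:                     # a full decade of research passed...
            remaining += gain_per_decade   # ...and added extra remaining years
    return lived

print(years_survived(gain_per_decade=5))   # gains too slow: 75 years, still finite
print(years_survived(gain_per_decade=12))  # gains outpace aging: capped at the 200-year horizon
```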

 

As for people living "forever" and overpopulation, that is easy to overcome.

Say it is nanorobots that create the new cells; couldn't they also be set up so that you cannot have children? An important question then becomes whether you would rather have children or live "forever", or how long you would want to live before having children.

 

Documentary about Aubrey de Grey: Do You Want to Live Forever

 

 

Lecture by Aubrey de Grey on how aging could be prevented in theory

(tried out on mice, in some cases roughly doubling the mouse's lifespan).

 

 

When I "exaggerate" the terms, it is because I think things will take much longer than researchers and others believe.

In the 20th century, researchers thought we would have flying cars by now, which turned out not to be true.

 

Theories about the future are made based on past and present development. We know nothing about what will actually happen in the future; we can only make probability estimates.

The technology and the future will come. All we know is that it never comes fast enough, and that it does come.

Glad to see that more people than me are interested in this subject :)

When I "exaggerate" the terms, it is because I think things will take much longer than researchers and others believe.

In the 20th century, researchers thought we would have flying cars by now, which turned out not to be true.

 

You are right about that; in the 1980s, for example, people believed flying cars would be commonplace by 2000, which has certainly not happened. BUT there is one thing worth being aware of, so let me quote Ray Kurzweil from the interview sbstn posted:

"Information technologies are doubling in power every year right now" (...) "Doubling every year is multiplying by 1,000 in ten years. It's remarkable how scientists miss this basic trend."

 

Kurzweil, who back in the 80s predicted both that a computer would win a chess championship by 1998 (it happened in 1997) and that a computer network along the lines of the internet would emerge and create an alternative way of communicating (still under development today), also believes that computers will achieve consciousness within the next 20 years and that virtual sex will become better than the "real thing" :dribble:

 

"After half a lifetime studying trends in technological change, he believes he's found a pattern that allows him to see into the future with a high degree of accuracy"

 

I believe and hope this man has carried out a credible and plausible analysis. One should of course be sceptical of many of the promises researchers tend to spit out uncontrollably, but my gut feeling is that this is a man who knows what awaits us (considering his earlier predictions).

 

Either way, it will be fun to read through this thread in 20 years - a small, modest thread that explored the start of the next step in our evolution.

"We are the species that goes beyond our potential"


A short post here.

 

What I mean is that things take time, and things take longer than we think.

Of course I hope we reach the singularity in 10 years, and we will if exponential growth continues at its current pace (i.e. doubling every year). But there is so much we do not know and cannot know about the future that it becomes impossible to predict. If we knew what the future brings, we would already be there.
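A small sketch (my own illustrative numbers) of why the "when" is so sensitive to the assumptions: the time needed to close a fixed capability gap scales directly with the assumed doubling time.

```python
import math

# Illustrative only: years needed to close an assumed millionfold capability gap
# under different doubling times.
gap = 1_000_000
for doubling_time in (1.0, 1.5, 2.0, 3.0):
    years = math.log2(gap) * doubling_time
    print(f"doubling every {doubling_time} years -> ~{years:.0f} years")
# log2(1e6) is about 20, so moving from yearly doubling to doubling every
# three years pushes the date from ~20 years out to ~60 years out.
```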

 

We can predict that a good chess player will beat me, or that if we keep up our research we will reach the singularity eventually. The big question is when.

 

 

SIAI

 

General objections

 

* The government would never let private citizens build an AGI, out of fear/security concerns.

* The government/Google/etc. will start their own project and beat us to AI anyway.

* SIAI will just putz around and never actually finish the project, like all the other wild-eyed dreamers.

* SIAI is just another crazy “doomsday cult” making fantastic claims about the end of the world.

* Eventually, SIAI will catch the government’s attention and set off a military AI arms race.

 

AI & The Singularity

 

Consciousness

 

* Computation isn’t a sufficient prerequisite for consciousness.

* A computer can never really understand the world the way humans can. (Searle’s Chinese Room)

* Human consciousness requires quantum computing, and so no conventional computer could match the human brain.

* Human consciousness requires holonomic properties.

* A brain isn’t enough for an intelligent mind - you also need a body/emotions/society.

* As a purely subjective experience, consciousness cannot be studied in a reductionist/outside way, nor can its presence be verified. (in more detail)

* A computer, even if it could think, wouldn’t have human intuition and so would be much less capable in many situations.

 

Desirability / getting there

 

* There’s no reason for anybody to want to build a superhuman AI.

* A Singularity through uploading/BCI would be more feasible/desirable.

* Life would have no meaning in a universe with AI/advanced nanotech (see Bill McKibben).

* A real AI would turn out just like (insert scenario from sci-fi book or movie).

* Technology has given us nuclear bombs/industrial slums/etc.; the future should involve less technology, not more.

* We might live in a computer simulation and it might be too computationally expensive for our simulators to simulate our world post-Singularity.

* AI is too long-term a project, we should focus on short-term goals like curing cancer.

* Unraveling the mystery of intelligence would demean the value of human uniqueness.

* If this was as good as it sounds, someone else would already be working on it.

 

Implementation/(semi)technical

 

* We are nowhere near building an AI.

* Computers can only do what they’re programmed to do. (Heading 6.6. in Turing’s classic paper)

* The human brain is not digital but analog: therefore ordinary computers cannot simulate it.

* Gödel’s Theorem shows that no computer, or mathematical system, can match human reasoning.

* It’s impossible to make something more intelligent/complex than yourself.

* Creating an AI, even if it’s possible in theory, is far too complex for human programmers.

* AI is impossible: you can’t program it to be prepared for every eventuality. (Heading 6.8. in Turing’s classic paper, SIAI blog comment: general intelligence impossible)

* We still don’t have the technological/scientific prerequisites for building AGI; if we want to build it, we should develop these instead of funding AGI directly.

* There’s no way to know whether AGI theory works without actually building an AGI.

* Any true intelligence will require a biological substrate.

 

Intelligence isn’t everything

 

* An AI still wouldn’t have the resources of humanity.

* Bacteria and insects are more numerous than humans.

* Superminds won’t be solving The Meaning Of Life or breaking the laws of physics.

* Just because you can think a million times faster doesn’t mean you can do experiments a million times faster; super AI will not invent super nanotech three hours after it awakens.

* Machines will never be placed in positions of power.

 

On an Intelligence Explosion

 

* There are limits to everything. You can’t get infinite growth.

* A smarter being is also more complex, and thus cannot necessarily improve itself any faster than the previous stage — no exponential spiral.

* Computation takes power. Fast super AI will probably draw red-hot power for questionable benefit. (Also, so far fast serial computation takes far more power than slow parallel computation (brains).)

* Giant computers and super AI can be obedient tools as easily as they can be free-willed rogues, so there’s no reason to think humans+ loyal AI will be upstaged by rogues. The bigger the complex intelligence, the less it matters that one part of the complex intelligence is a slow meat-brain.

* Biology gives us no reason to believe in hard transitions or steep levels of intelligence. Computer science does, but puts the Singularity as having happened back when language was developed.

* Strong Drexlerian nanotech seems to be bunk in the mind of most chemists, and there’s no reason to think AI have any trump advantage with regard to it.

* There is a fundamental limit on intelligence, somewhere close to or only slightly above the human level. (Strong AI Footnotes)

 

On Intelligence

 

* You can’t build a superintelligent machine when we can’t even define what intelligence means.

* Intelligence is not linear.

* There is no such thing as a human-equivalent AI.

 

Religious objections

 

* True, conscious AI is against the will of God/Yahweh/Jehovah, etc.

* Creating new minds is playing God

* Computers wouldn’t have souls.

 

Terminology

 

* There are so many different meanings attached that the term Singularity has ceased to be useful. (Anissimov, Eliezer)

* We could achieve a Singularity without AI.

 

Validity of predictions

 

* AI has supposedly been around the corner for 20 years now.

* Extrapolation of graphs doesn’t prove anything. It doesn’t show that we’ll have AI in the future.

* AI is just something out of a sci-fi movie, it has never actually existed.

* Big changes always seem to be predicted to happen during the lifetimes of the people predicting them.

* Kurzweil’s graphs for predicting AI are unrealistic.

* The Singularity is the Rapture of religious texts, just dressed in different clothes to appeal to proclaimed atheists.

* Moore’s Law is slowing down.

* Progress on much simpler AI systems (chess programs, self-driving cars) has been notoriously slow in the past.

* There could be a war/resource exhaustion/other crisis putting off the Singularity for a long time. (See Tim O’Reilly’s first comment in the comments section)

 

Friendliness

 

Activism

 

* It’s too early to start thinking about Friendly AI.

* Development towards AI will be gradual. Methods will pop up to deal with it.

* Friendliness is trivially achieved. People evolved from selfish self-replicators; AIs will “evolve” from programs which exist solely to fulfill our wishes. Without evolution building them, AIs will automatically be Friendly.

* Trying to build Friendly AI is pointless, as a Singularity is by definition beyond human understanding and control.

* Unfriendly AI is much easier than Friendly AI, so we are going to be destroyed regardless.

* Other technologies, such as nanotechnology and bioengineering, are much easier than FAI and they have no “Friendly” equivalent that could prevent them from being used to destroy humanity.

* Any true AI would have a drastic impact on human society, including a large number of unpredictable, unintended, probably really bad consequences.

* We can’t start making AIs Friendly until we have AIs around to look at and experiment with. (Goertzel’s objection)

* Talking about possible dangers would make people much less willing to fund needed AI research.

* Any work done on FAI will be hijacked and used to build hostile AI.

 

Alternatives to Friendliness

 

* Couldn’t AIs be built as pure advisors, so they wouldn’t do anything themselves?

* A human upload would naturally be more Friendly than any AI.

* Trying to create a theory which absolutely guarantees Friendly AI is too unrealistic / ambitious of a goal; it’s a better idea to attempt to create a theory of “probably Friendly AI”.

* We should work on building a transparent society where no illicit AI development can be carried out.

 

Desirability

 

* A post-Singularity mankind won’t be anything like the humanity we know, regardless of whether it’s a positive or negative Singularity - therefore it’s irrelevant whether we get a positive or negative Singularity.

* It’s unethical to build AIs as willing slaves. (an example of this objection)

* You can’t suffer if you’re dead, therefore AIs wiping out humanity isn’t a bad thing.

* Humanity should be in charge of its own destiny, not machines.

* A perfectly Friendly AI would do everything for us, making life boring and not worth living.

* The solution to the problems that humanity faces cannot involve more technology, especially such a dangerous technology as AGI, as technology itself is part of the problem.

* No problems that could possibly be solved through AGI/MNT/the Singularity are worth the extreme existential risk incurred through developing the relevant technology/triggering the relevant event.

* A human-Friendly AI would ignore the desires of other sentients, such as uploads/robots/aliens/animals.

 

Feasibility of the concept

 

* Ethics are subjective, not objective: therefore no truly Friendly AI can be built.

* The idea of a hostile AI is anthropomorphic.

* “Friendliness” is too vaguely defined.

* Mainstream researchers don’t consider Friendliness an issue.

* Human morals/ethics contradict each other, even within individuals.

* Most humans are rotten bastards and so basing an FAI morality off of human morality is a bad idea anyway.

* The best way to make us happy would be to constantly stimulate our pleasure centers, turning us into nothing but experiencers of constant orgasms.

 

 

 

Implementation

 

* An AI forced to be friendly couldn’t evolve and grow.

* Shane Legg proved that we can’t predict the behavior of intelligences smarter than us.

* A superintelligence could rewrite itself to remove human tampering. Therefore we cannot build Friendly AI.

* A super-intelligent AI would have no reason to care about us.

* What if the AI misinterprets its goals?

* You can’t simulate a person’s development without creating a copy of that person.

* It’s impossible to know a person’s subjective desires and feelings from outside.

* A machine could never understand human morality/emotions.

* AIs would take advantage of their power and create a dictatorship.

* An AI without self-preservation built in would find no reason to continue existing.

* A superintelligent AI would reason that it’s best for humanity to destroy itself.

* The main defining characteristic of complex systems, such as minds, is that no mathematical verification of properties such as “Friendliness” is possible.

* Any future AI would undergo natural selection, and would eventually become hostile to humanity to better pursue reproductive fitness.

* FAI needs to be done as an open-source effort, so other people can see that the project isn’t being hijacked to make some guy Dictator of the Universe.

* If an FAI does what we would want if we were less selfish, won’t it kill us all in the process of extracting resources to colonize space as quickly as possible to prevent astronomical waste?

* It’s absurd to have a collective volition approach that is sensitive to the number of people who support something.

 

 

 

Social issues

 

* Humans wouldn’t accept being ruled by machines.

* An AI would just end up being a tool of whichever group built it/controls it.

* Power-hungry organizations are going to race to AI technology and use it to dominate before there’s time to create truly Friendly AI.

* An FAI would only help the rich, the First World, uploads, or some other privileged class of elites.

* We need AI too urgently to let our research efforts be derailed by guaranteed Friendliness.

* Developing AI now would set off an arms race to military AI. We should wait for integration and democratization to spread.

 

 

 

A short list of counterarguments against AGI and AI.

Interpret it as you like; all I am saying is that the future will come, but we can never know when. :)

