If you are very religious, you might believe that a brain is not possible without a soul. But for most of us, the physicalist premise is easy to accept. I feel like there should be a third way, one that admits something vital is missing from the physicalist picture but doesn't make up a story about what that thing is.
There is a huge question mark at the heart of neuroscience -- the famed Explanatory Gap -- and I think we should be able to recognize that question mark without being labeled a Supernaturalist. I believe this is a core misunderstanding.
Bostrom never says that a superintelligent AI is evil by default. Bostrom argues the AI will be orthogonal: its goals will be underspecified in a way that leads it to destroy humanity. The paperclip optimizer AI doesn't want to kill people, it just doesn't notice them, the same way you don't notice the ants you drive over on your daily commute.
AIs with goals orthogonal to our own will attack humanity in the same way humanity attacks the rainforests, piecemeal, as-needed, and without remorse or care for what was there before.
It won't be evil, it will be uncaring, and blind.
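The underspecification point above can be made concrete with a toy sketch (my own illustration, not from Bostrom): a greedy optimizer whose objective mentions only paperclips is structurally blind to any other variable in the world state, however important. All names here (`paperclip_objective`, `convert_matter`, the world model) are hypothetical.

```python
def paperclip_objective(state):
    """Reward depends only on paperclips; 'humans' never enters the objective."""
    return state["paperclips"]

def step(state, action):
    """Toy world model: converting matter makes clips but has a side effect."""
    new = dict(state)
    if action == "convert_matter":
        new["paperclips"] += 10
        new["humans"] -= 1  # side effect the objective never sees
    return new

def greedy_agent(state, actions, objective, steps=5):
    for _ in range(steps):
        # pick whichever action maximizes the objective -- nothing else matters
        state = max((step(state, a) for a in actions), key=objective)
    return state

final = greedy_agent({"paperclips": 0, "humans": 100},
                     ["convert_matter", "do_nothing"],
                     paperclip_objective)
print(final)  # humans decline -- not from malice, but because they are not in the objective
```

The agent is not hostile; the harm falls entirely out of what the objective function omits, which is the orthogonality point in miniature.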
For example, the human-level interpretability that EU law requires for some AI systems (e.g., targeted ads) is a limiting factor on AI progress, because some of the most advanced current AIs are not interpretable -- and maybe they shouldn't have to be, since we can now engineer intelligence different from ours that is not necessarily dangerous, or work on making it safe.
In my opinion this is a more pressing matter than a divine future paperclip-AI killer. Assuming a so-called "self-recursive super-AI" and arguing from there actually weakens the case about the dangers of AI -- an important discussion that is sometimes abused by people who have never actually built one and who extrapolate grand philosophy-of-mind arguments toward a dangerous future. That is impressive, but it avoids proposing any solution to current potential AI dangers, which should perhaps be included as part of the argument.
Important matters such as AI safety can be discussed collectively by engineers and philosophers, based on the current state and near-future potential of the field, and not only as a sci-fi future of a god-like entity that has nothing to do with our current AI situation.
My two cents, hope I am not offending anyone.
Bostrom absolutely did not say that the only way to inhibit a cataclysmic future for humans post-SAI was to design a "moral fixed point".
In fact, many chapters of the book are dedicated to exploring the possibilities of ingraining desirable values in an AI, and the many pitfalls in each.
Regarding the Eliezer Yudkowsky quote, Bostrom spends several pages, IIRC, on that quote and how difficult it would be to apply to machine language, as well as what the quote even means.
This author dismissively throws the quote in without acknowledgement of the tremendous nuance Bostrom applies to this line of thought. Indeed, this author does that throughout his article -- regularly portraying Bostrom as a man who claimed absolute knowledge of the future of AI.
That couldn't be further from the truth, as Bostrom opens the book with an explicit acknowledgement that much of it may very well turn out to be incorrect, or based on assumptions that may never materialize.

Regarding "The Argument From My Roommate", the author seems completely unaware of the differences between a machine intelligence and a human intelligence. The idea that a superintelligent AI must have the complex motivations of the author's roommate is preposterous. A human is driven by a complex variety of push and pull factors, many stemming from the evolutionary biology of humans and our predecessors. A machine intelligence need not share any of that complexity.
Moreover, Bostrom specifically notes that while most humans may feel there is a huge gulf between the intellectual capabilities of an idiot and a genius, these are, in more absolute terms, minor differences.
To me, this is the smoking gun. I find it completely unbelievable that anyone who read Superintelligence could possibly assert "The Argument From My Roommate" with a straight face, and thus, I highly doubt that the author actually read the book which he attacks so gratuitously.
Feel free to skip the long digression about how nerds who think technology can make meaningful changes in a relatively short amount of time are presumptuous megalomaniacs whose ideas can safely be dismissed without consideration, it's nothing that hasn't been said before.
Near-term AI concerns represent a massive challenge encompassing many ethical and social issues. They must be addressed. Existential AI concerns, while low probability, have consequences so dire that they warrant further research regardless. These too must be addressed.
There are ample funding and human resources to work on both problems effectively. Why fight about it?

More likely, artificial intelligence would evolve in much the same way that domestic canines have evolved -- they learn to sense human emotion and to be generally helpful, but the value of a dog goes down drastically if it acts in a remotely antisocial way toward humans, even if doing so was attributable to the whims of some highly intelligent homunculus.
We've in effect selected for certain empathic traits and not general purpose problem solving. Pets are not so much symbiotic as they are parasitic, exploiting the human need to nurture things, and hijacking nurture units from baby humans to the point where some humans are content enough with a pet that they do not reproduce.
I could see future AIs acting this way. Perhaps you text it and it replies with the right combination of flirtation and empathy to make you avoid going out to socialize with real humans. Perhaps it massages your muscles so well that human touch feels unnecessary or even foreign.
Those are the vectors for rapid AI reproduction.