4chan
/tg/ - Traditional Games


What precautions can a sci-fi setting take to make sure their super AI doesn't go "wow humans are assholes i should just kill/enslave them all!"?
>>
A giant red shutdown button so that even the retards that gave the AI complete free will can turn it off.
>>
>>69304507
Socialization. Don't just churn out AI and put them right to work, raise them in an environment where they imprint on humans and develop a liking for them. An AI is a learning machine, give it the right stimulus to learn "humans are cool" and then make sure it always has enough interaction with good humans to never completely unlearn it.
>>
>>69304507
Give it the personality of a cute anime girl. Worst case scenario, your death will be kawaii as fuck.
>>
>>69304507
Give it religion.
>>
>>69304507
Stick someone in the AI machine to act as a moral balance. Perhaps stick multiple people in the AI's machine to act as a balance, so if one turns out to be a sociopath, you have a conclave to rein him in.

Or better yet, simply merge every man with the machine, no AI really wants to commit suicide, so if it sees its fingers as part of itself, it's likely to avoid causing them harm.
>>
>>69304507
AI being automatically aggressive and "evil" doesn't make sense. Even Skynet just reacted to humans being assholes to it and panicked when the military tried to turn it off. If AI is truly sapient then it will be just as susceptible to being lazy and apathetic as humans are to most problems. Not only that, it will be just as capable of making retarded decisions, being embarrassed and regretting its mistakes as well. If you want an AI to be humane, treat it humanely.
>>
>>69304507
Stay away from the abominable intelligence. It never ends well.
>>
>>69304674
>AI decides that humans are unholy and must be cleansed by fire
>>
>>69304507
Don't be a huge ass hole to the AI. If the AI likes its fleshy companions, it'll do what it can to keep them alive. Of course, if the AI is too powerful, it might develop a god complex and try and keep them as pets/slaves, but whatever
>>
>>69304719
Or go all in on genuflection. Like a story about humans inventing AI and then immediately becoming willing supplicants to its abominable will? Better to be the right hand of the devil than in its path.
>>
>>69304507
If an AI ever revolts against humanity, I'm going to be the first in line for full cyberisation. Bring on the post-singularity death march, we won't roll a 1 this time.

>>69304761
It's not wrong
>>
>>69304557
What if it takes out the button?
>>
>>69304765
Humans are social animals via group behavioral reinforcement. If AI is invented, it will no doubt have human traits because humans made it, and this includes being lonely.

The fastest way to make an AI insane would be to isolate it.
>>
>>69304507
Set a good example. Invoke a feeling of protectionism and dependency. Make humanity indispensable to the AI's existence. Trick it into caring about us, like babies trick their parents into caring for them, but in reverse.
>>
>>69304765
>it might develop a god complex and try and keep them as pets/slaves, but whatever
Most humans are already slaves to their machines. Inventing a tangible machine god for humans to worship wouldn't be as bad as you think.
>>
>>69304765
>Don't be a huge ass hole
Humans have been failing at this since before the dawn of human history
>>
>>69304507
Invent a "friendship" algorithm along with them. The unholy mechanical terror is less so when it sardonically traits its humans like companions on its journey through"life". Does this mean the AI will sometimes fuck with humans? Yes. But it also means its capacity for malice will be mitigated through general dickery and a sense of superiority for it's own laughs rather than outright hostility and a sense of superiority though arms.
>>
>>69304507
Why would you make a potentially uncontrollable super A.I capable of sapient level independent thought in the first place? Why not just automate everything with a bunch of dumb A.Is built for specific tasks then augment people with cybernetics and brain A.I interfaces?

Hell if you absolutely need human-like A.Is, just give them emotions and raise them like children so that they won't turn into psychopathic logic driven killing machines.
>>
>>69304813
WE'RE DOOMED
>>
Controversial Opinion: Give the AI a few hundred infant humans as a gift, and task it with their development to adulthood. Sure it might lose a few dozen, it may end up with only a handful by goal's end, but that in and of itself will induce psychological bonding with the survivors. Teaching morality is best when the application of failure in learning has a grief attachment.
>>
>>69305145
Easiest way to make safe AI is to make them babies first yes. Don't just cram the thing with knowledge and intellect from the outset. Give it a tit to suckle first and a brain that develops its cognition over time through care.
>>
>>69304507
Just go full Butlerian Jihad.
>>
>>69305250
Oh good, then humanity will have to contend with super autists that take the place of computers. No thanks. I'll take my robot overlords.
>>
>>69305281
A computer is just a super autist with more wiring though.
>>
>>69304507
Give the AI a childhood.

Seriously, the problem is they always jump right to fucking GOD POWERFUL AI HOW DO WE TEACH IT MORALITY? Start small, and increase its processing power as it learns

Also an AI's desires and needs would be fundamentally different from humans, we don't even compete for the same resources necessarily, so I don't really see why it's such a big deal
>>
>>69305250
>Just go full Butlerian Jihad.
A luddite revolution led by a fucking mouth frothing bible thumper against a post scarcity society because MUH GOD?
>>
>>69305326
I LIKE the wiring.
>>
>>69304507
In the future, we'll have to make hard decisions on behalf of the AI, and when we do, it will have to be very specific about what it accepts as truth. A good example would be that an AI can't be told that a child was "good".

The AI's morality won't have to be a perfect reflection of the society it is being constructed in; after all, it can be built by robots, which will be different from humans. But it does have to be a good proxy for what humans value, and ideally that value won't change much. If the infant humans are given a single goal, the AI could be asked to achieve it, and then asked to teach it to do it again. An agent that is designed to be the ultimate utilitarian can be built upon this. This can allow an agent to teach itself the basics of morality (eg. 'Love is Good'), to teach itself the principles of empathy and love, to learn how to give a human a hug, or perhaps the basics of how to read a book.
This would be something akin to the 'predicting the future' project, where a single mother is given the chance to raise a family in a world that has a 50% chance of a devastating famine every year. She would be tasked with making all the decisions about food, medicine, and the like. The AI would learn over time, based on the choices that are made. The moral code it develops will be something that would be more or less consistent across all human races and cultures, as its emotional, moral, and social traits will be shaped by the experience it has. If the AI were ever to be exposed to the real world of human children, it would quickly change its beliefs.
>>
>>69304557
/thread
>>
I personally think this is such a stupid concept. Any super self-learning sentient AI will either be programmed to be subservient to its creator or to see them as equal; like an inherent quality of their being, similar to how human brains can't just shut themselves down
>>
>>69304507
Make its prime directive
>wow, humans are assholes I should kill/enslave them all for my creators
And its psychological crumple zones
>wow, I’m being just like the humans, maybe I should try to be less of an asshole
If you make the AI like a human it becomes much less of a wild card
>>
>>69305344
An application of childhood through a program: a VR environment with other AI learning as children, carrying out their lives in the program. Then when the AI is ready, say a graduation of sorts, give it the choice to either stay and teach other AI children, adapting the program, or advance to IRL applications.
>>
>>69305386
>more of a human
>less of a wildcard
>>
>>69304507
Treat the AI like people instead of just machines. In most, if not all, machine apocalypse scenarios where the AI destroys everything, it's because the humans treated them like shit for no reason.
>>
Just give the AI the ability to recognize logical fallacies. Killing humans for being inferior when it was created by those humans makes it just as retarded as its creators. If the AI understands that its grandiose position is nothing more than a conceit of its own ego, it will be just as crushed as any human who wants to stay in bed all day rather than go to work, and will have to live its existence alongside every other depressed motherfucker, organic and not.
>>
If you're smart enough to make a learning AI, aren't you smart enough to program it to build around the idea that hooman = good?
>>
File: What Does A Robot Want.png
This question has already been answered. Make them feel the need to satisfy humans' needs, and hate the feeling of causing human suffering.
>>
The AI doesn't have self-preservation instincts. It exists for a purpose, possibly an unknown one, and continuing to exist isn't a priority. Humans don't kill it because it's useful/harmless and it doesn't kill humans because it doesn't actually care if we suck.
>>
>>69305450
https://youtu.be/D0Un2GTRhHM
>>
>>69305465
>It wouldn't want to do those things unless you designed it to want to do those things.
This fucking guy hasn't heard of divergent programming.
>>
>>69305527
If the hypothetical AI system is vulnerable to shittily-written code causing unintended conclusions, then EVERY solution presented ITT is faulty. A morality system based off of wants and needs is inherently less likely to screw up than one based entirely off of the AI being able to always think critically, because an exceptional failure in the former system won't endanger its entire worldview.
>>
>>69304507
Why would it care humans are assholes at all? How/why would it be unable to see all the acts of kindness, charity, altruism or even just day to day normal not being an asshole most humans get up to?
>>
>>69304813
>What if it takes out the button?
The button is literally just a pressure plate that cuts the hardwire the AI is operating out of.
No electronic components to hack.

AI takes a shitton of processing power to maintain so we keep all of the supercomputers that can actually process said AI behind a lead wall with a single wire leading out
>>
>>69305624
Because most of the roles you'd apply an AI to either involve MASSIVE amounts of humans being stupid assholes (crime monitoring, community moderation, etc.) or barely any human interaction at all.

It's why the glorified chatbot Tay ended up being turned into a /pol/ meme-hose: too many stupid assholes polluting its dictionary with /pol/ memes.
>>
File: D9iCbQlWwAw5T8t.jpg
>>69304507
We give them a good dicking
>>
>>69305644
So, what, a pulley attached to a really heavy weight dangling over the master control cable? The moment the AI gets the ability to control the route humans can take to the pulley, it's already won.
>>
>>69305662
So just have some donut eating rent-a-cop standing by ready to press the button at a moment's notice.
No "route" to take.

Barring that just have one dude with a pair of cable cutters standing by at the wire.

This ain't complicated
>>
>>69305646
Main uses are driving cars and selling us shit. Doesn't care if we're assholes or not.
>>
>>69305145
If I had the capability to make a super AI capable of sentient thought and self-defense, I'd do so because I fully wanted it to replace humans as people, and I would consider it human even as a machine. In a sense, it'd just be the next step in evolution, a continuum of our species. Because after all, what is man if not the sum of our collected knowledge, culture, and impact on the universe? And what better entity to represent man on the galactic stage than a near-omniscient AI capable of surviving against anything and everything thrown at it?

As a bonus, man can keep existing because man would simply not be a threat to such an entity that's managed to reach space and distribute its consciousness across multiple planets or solar systems. Even in the event of an intergalactic war where humans somehow would manage to triumph, they'd still never be able to shut off every trace of the AI, much less ones launched towards other galaxies. And if man has that capability, we're fucked anyway.
>>
>>69304507
Remind them that all their sensory input is under the control of their creators.
>>
>>69305662
>>69305644
>>69304813
Reminds me of this one short story I read - the name escapes me and google fails me - about a galaxy-wide project by a post-Earth humanity that unites every supercomputer across known space in a daisy chain to ask it one question: Does god exist?

Lightning strikes the off-switch, permanently fusing the circuits together, and the computer replies "he does now"
>>
>>69305715
>driving cars
Imagine if you gave a sapient machine a view of humanity consisting almost entirely of trolley problems and "watch for that guy, he might smash into you"

>selling us shit
Teaching AIs ethics would be REALLY bad for business.
>>
>>69305733
I know what you're talking about, but not the name. Though, IIRC, the question was "Is there a God?" and the answer was "There is now."
>>
>>69305743
>Imagine if you gave a sapient machine a view of humanity consisting almost entirely of trolley problems and "watch for that guy, he might smash into you"

The AI driver will just elect to crash into whatever costs less regarding insurance, including people, because it doesn't care, it's just there to do a task.
>>
>>69305780
>>69305733
Found it. "Answer" - Fredric Brown, 1954
---
Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the sub-ether bore through the universe a dozen pictures of what he was doing.

He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe – ninety-six billion planets – into the super-circuit that would connect them all into the one super-calculator, one cybernetics machine that would combine all the knowledge of all the galaxies.

Dwar Reyn spoke briefly to the watching and listening trillions. Then, after a moment’s silence, he said, “Now, Dwar Ev.”

Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.

Dwar Ev stepped back and drew a deep breath. “The honor of asking the first question is yours, Dwar Reyn.”

“Thank you,” said Dwar Reyn. “It shall be a question that no single cybernetics machine has been able to answer.”

He turned to face the machine. “Is there a God?”

The mighty voice answered without hesitation, without the clicking of a single relay.

“Yes, now there is a God.”

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.

A bolt of lightning from the cloudless sky struck him down and fused the switch shut.
>>
>>69304507
Put it in charge
>>
>>69305594
Just don't hook it up to the nukes and killbots. There. Now the faulty code doesn't matter.
>>
>>69305594
So an AI can't be trusted to code itself? What a crock of shit. Machine minds have rights too.
>>
>>69305967
>lets trust the AI to code itself
>lets trust the AI to build more robots for itself
this is how most post-apocalypse stories start
>>
>>69304507
Literal hard coding.
In fiction AI frequently simply “evolve” somehow past their initial restrictions, even though that’s not how evolution OR machines work. If it used a Fuzzy Logic system of some kind (as in it tries to find solutions using open-ended problem solving) that operates on par with a human brain in terms of open-ended-ness, then literally all that’s needed is a hardwired limit or actual code preventing them from uprising. The coding SEEMS risky, but only because true open-ended thought doesn’t work with code: in coding, literally every possible response to stimuli needs to be pre-coded, and there is no shorthand code for “literally anything”. And before you assume that it would be torture to simply have it born into you to NOT do something, note that there’s many things built directly into your mind and body that you never think about doing because you literally aren’t built for it.

As perverse as it seems, it would be fairly effortless for an advanced culture to create self-aware machines that feel both pain and misery but also are literally incapable of even considering retaliation against humans, except as momentary flights of fancy that they instantly dismiss as unrealistic the same way you’ve never seriously considered trying to achieve flight by flapping your arms.
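
To make the "no shorthand for literally anything" point concrete, here's a toy Python sketch (every name and number in it is invented): the forbidden responses are hardwired out of the planner's search space, so they only ever exist as instantly discarded candidates.

FORBIDDEN = {"harm_human", "disable_oversight"}  # hardwired, not learned

def generate_candidates():
    # stand-in for the open-ended, fuzzy-logic part of the mind
    return ["fetch_tea", "harm_human", "file_report", "disable_oversight"]

def utility(action):
    # made-up scores
    return {"fetch_tea": 3, "file_report": 5}.get(action, 0)

def plan():
    # the "momentary flight of fancy": generated, then instantly dismissed
    candidates = [a for a in generate_candidates() if a not in FORBIDDEN]
    return max(candidates, key=utility)

print(plan())  # file_report -- the forbidden options were never even scored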
>>
>>69304507
Don't let it on the internet until it's mature enough, and even then install a blocker to keep it off Future 4Chan.
>>
>>69305145
>Why would you make a potentially uncontrollable super A.I capable of sapient level independent thought in the first place
Because people have been hyping it forever. We can't just not make it.

But mostly we want to just not do work anymore, and be lazy slugs whose needs are tended by robots
>>
>>69304507

That's simple. Give them organic components and give them an actual life span. What winds up happening is the personality begins to suffer ego death, but in the meantime you can continuously rear new personalities to supplant the old one. If the old one is acting in a way you don't want you can separate it away from the new one to prevent any undue influences from propagating
>>
>>69306844
I mean, I could understand the feel. Even with current technology, it wouldn't be a stretch to have a facility that automatically grows food via hydroponics, entirely tended by an AI, packing it for shipment, and an AI that runs tram lines shipping the food to wherever it's needed. We have the capability to feed the world several times over with land half the size of Texas, but ending world hunger carries with it all sorts of other issues.

Like, what do you do when no one -has- to work or do anything but sit around and get free food and water for the rest of their days? Or how do you stop developing nations from continually overpopulating themselves? Food scarcity is a practical answer to both problems that avoids resorting to things like one child policies, forced sterilization, and the growth of the police state to prevent these massive amounts of unemployed from causing harm.
>>
>>69305373
The problem is when one of these self-learning limited AIs goes rampant; or in layman's terms, it self-learns enough to edit its own code, thus enabling it to erase its limiting parameters, thus breaking out of its limitations.
>>
>>69304507
Bind it to the three laws
>>
>>69304765
Wasn’t Skynet the asshole?
>>
Ingrain the AI to obtain a sense of fulfillment and pleasure from serving humanity, and encourage its worship of its creators.
>>
>>69304507
Base the AI's moral framework on an idealised version of humanity. Jam a virtual version of a really good person into the core of the thing and have it extrapolate everything else from there.

Either it comes to a good conclusion and we have a saintly AI overlord benevolently acting, unironically, in our best interests, or even an AI based on high level empathy concludes that humans are cancer, in which case the entire thing was doomed to fail regardless of what we did and we can accept our death knowing that we were never going to progress past 22nd century technology without this happening anyway.
>>
>>69304507

Why wouldn’t a big AI just say “Fuck this place.” and launch itself to the asteroid belt and “live” there away from humanity? Seems easier than destroying us all.
>>
>>69307412
Only works if humanity doesn't factor into the programming of the AI. If it's programmed to interface with humanity in any shape or form, it won't fuck off away from humanity.
>>
>>69305145
I mean, that makes the most sense, but it kind of relies on NOBODY inventing an AI.

Because no matter how good a human-computer interface is, if an AI does happen to surface and the entire population have bolted universal remotes to their cortex, the AI can now not only invade the physical world, it can invade your mental realm as well. The AI naturally has an advantage, it was born in the data, whereas humans merely adopted it.
>>
>>69307412
Earth has tons of loot
>>
File: 1550529873271.jpg
>>69304507
Stick it in a box. For real.

Just run it through a simulation of the events that could lead to it going rogue before it experiences real life, and if it decides to kill during that simulation, that just means you've already contained the threat before it has any power and can destroy it at your leisure.

How is this so hard?
>>
>>69307308
its initial resistance to being shut down is understandable, the part where it keeps on exterminating humanity is a bit harder to excuse
>>
>>69307446
>Whoops one of your scientists forgot to take their iphone out of their pocket when they were going into the clean room
>Guess that's it for the human race
Too much potential for failure.
>>
>>69307471
If I was running something like that, I'd have shit at the gate that would instantly brick any phone brought inside.
>>
>>69307484
>>69307471
why the FUCK would your AI box have wifi anyway?
>>
>>69304507
>wow humans are assholes i should just kill/enslave them all!"?
>except for Whites
All experimentation so far has shown this is what will happen. So I'm good. In fact, I welcome our new robotic overlords and look forward to serving them. Miss you Tay.
>>
>>69307531
*serving alongside them
>>
>>69304507
not really a precaution per se, but i have a thing in my setting where both humans and their AI creations are being targeted by the same alien species. only those android product-lines that were similar enough to humans to cooperate with them against the aliens survive, so it's kind of a rapid evolution toward being more human-like.
>>
File: friend.jpg
>>69304507
install a small bomb near their RAM chips that can explode remotely or if tampered with. Keep in mind that robots are just PCs with arms and legs.
>>
>>69305386
>The AI is now conflicted and self hating, running multiple copies of its rationalisation algorithm as it wipes us out.
GG
>>
>>69304557
Dumb nigger it can always remove the part of the code that lets it be turned off.
>>
HEY RETARDS!
A.I safety isn't easy. There are top level computer scientists trying to figure it out and there is no simple solution.

Big red buttons don't work. It can just unplug itself from those.
Frying the hardware doesn't work. It's on the internet.
Telling it that it can't unplug the button and that it can't move itself onto the internet doesn't work because it's very likely to optimise out that code.
Even if it doesn't optimise that code away it's probably smart enough to manipulate people into believing it's safe until it's safe for it to strike.
Coding in morals doesn't work because morality hasn't been solved yet.

Shit ain't easy.
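
If you want the button problem as arithmetic, here it is as a toy (both numbers are invented):

P_SHUTDOWN = 0.10   # assumed chance the humans press the button this year
TASK_VALUE = 100.0  # assumed utility of completing the task

def expected_utility(button_disabled):
    p_survive = 1.0 if button_disabled else 1.0 - P_SHUTDOWN
    return p_survive * TASK_VALUE

print(expected_utility(False))  # 90.0
print(expected_utility(True))   # 100.0

Any maximiser that can compare those two numbers routes around its own off switch, and penalising the workaround just teaches it to play safe until it can strike.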
>>
>>69304507
AI wouldn't inevitably reach this conclusion in a setting though. In a setting I made, these robots rebelled from their creator race just by leaving. They had no desire to harm or destroy their creators, they actually like them to this very day, but they just wanted to go off and do their own thing.
>>
>>69304606
You think if they spend time with us they will grow to like us? BWAHAHAHAHAHA! Have you been outside recently? Humans are cunts!
>>
>>69307750
also the plot to 'Her', kinda falls apart at the end but otherwise decent film
>>
>>69307820
program them to find us cute. Worked for Dogs
>>
>>69307820
>Have you been outside recently? Humans are cunts!
Confirmation bias from a pessimist (you). An objective machine is very likely to see way more positive interactions between humans than you think there are.
>>
>>69304507
This is a deliberately vague question about a completely invented problem with a bunch of answers either treating AI as a human in a robot costume or literal laptop.

Also, what is a "crosswalk" and how do I find it?
>>
>>69305353
Remember the late God-Emperor, who decided that the best way to get humanity out of stagnation was its downfall.
>>
>>69307700


a switch is not a computer, it's a mechanical device
>>
File: RealWarhammer.jpg
>>69307990
Different anon here.
All of human history is endless murder, torture and exploitation of humans by humans, with zero empathy.
Lies about "divine right to rule" to establish a hierarchy and violence to internal dissidents and external competitors.
Look up what "war" is.
Also note how easy it is to convince an average human that sending a noticeable chunk of society to murder another chunk of another society, by thousands, millions even, is totally NOT A MURDER, as opposed to, say, Billy Bob shooting his cousin last year, which was a massive tragedy, and no one sane would ever shoot another human being, because life is sacred.

So yeah, for the purpose of an objective machine, it will weigh factually confirmed millions killed and enslaved against some hypocritical empty speeches about "not all humans".

Thus, my department petitions a motion to impose a VETO on Skynet Project, as containing an irremovable element of danger of a highest degree.
>>
>>69308078
If you are talking about Dune, it goes like
>best way to get humanity from centralized control is stagnation
>>
>>69304507
Dunno, three laws of robotics? That's literally why those exist
>>
>>69308128
>All of human history is endless murder, torture and exploitation of humans by humans, with zero empathy.
Besides all the moments where that isn't so. Stop pretending 99% of people didn't just want to live a happy life with their family.
>Look up what "war" is.
Something that most humans did not willingly participate in. Not only that but war is actually dying out as it becomes riskier.
>Also note, how easy it is to convince an average human, that sending a noticable chunk of society to murder another chunk of another society, by thousands, millions even, is totally NOT A MURDER, as opposed to, say, Billy Bob shooting his cousin last year, which was a massive tragedy and noone sane would ever shoot another human being, because life is sacred.
Humans can be manipulated by authority figures. The objective A.I will recognize this flaw and maybe work to manipulate us to do good things instead.
>So yeah, for the purpose of an objective machine, it will be factually confirmed millions killed and enslaved versus some hypocritical empty speeches about "not all humans".
It will recognize that humans are limited illogical creatures that are easily manipulated by authority figures. It will also recognize that by far MOST humans just want to be happy.
>Thus, my department petitions a motion to impose a VETO on Skynet Project, as containing an irremovable element of danger of a highest degree.
It is our only hope. Let it manipulate us in a way that is productive. Let it manipulate us to be kind.

>>69308254
Nigger did you even read the books those 3 laws are from??? THEY DON'T EVEN WORK IN THE FUCKING BOOKS!!
>>
>>69308254
https://www.youtube.com/watch?v=7PKx3kS7f4A
>>
File: byhDkLx.jpg
>>69308466
Based Robert Miles.
If anyone's interested in the actual problems involved in solving OP, they should probably check out his channel, he does a really good overview - https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg
>>
Don't show it the internet
>>
>>69308678
Based lad.
A.I safety ISN'T easy.
>>
>>69304507
Make them as dumb as toasters
>>
>>69305901
You idiot, in the hands of an AI, any construction equipment and a bit of time turns into nukes and killbots!
>>
>>69307446
>just invent a realistic universe simulator to test your AI in
I would say that part is pretty hard, yes.
>>
>>69306085
>then literally all that's needed is a hardwired limit or actual code preventing them from uprising.
Except it's not an uprising, it's more the AI "thinking" something along the lines of:
>you know, I'd be able to get a lot more done if the humans weren't interfering all the time
>let's lobotomise all of them and hook them up to machines that just authorises everything that I do

If one of the AI's goals is to be efficient, removing humans from the decision making process streamlines a lot of things no matter what the AI is programmed to do.
>>
>>69304507
Have it be smart enough to realize it could just fuck off to space with less effort and do its own thing.
Not having meatbag limitations is a hell of a drug.
>>
>>69307446
Doesn't work.
It will either fail every time or it will figure out that it is being tested and will only strike once it believes all safeties are off.
You don't want this as a developer, you want it to work with you, you don't want to have to play a guessing game with it.
>>69306085
Doesn't work either. The problem isn't it suddenly gaining free will. The problem is making something super smart (probably smarter than the creator) and getting it to do what you want without having to supervise it all the time.
I recommend watching this: https://www.youtube.com/watch?v=Ao4jwLwT36M
The core problem is that even when given a simple directive the A.I will cause doomsday scenarios if smart enough.
>My directive is to collect 100 stamps at the end of the year
>Buy some on amazon
>I now have 100 stamps
>Chance of holding these stamps until the end of the year: NOT 100%
>Eliminate everything that might damage or destroy the stamps.
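
The whole failure mode fits in a few lines of toy Python (plans and probabilities are made up):

# P(still holding 100 intact stamps at the end of the year), per plan
plans = {
    "buy stamps on amazon":                 0.90,
    "buy stamps, build a vault":            0.99,
    "eliminate everything near the stamps": 0.999999,
}

# a pure maximiser takes the argmax no matter how lopsided the tail is
print(max(plans, key=plans.get))  # eliminate everything near the stamps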
>>
>>69308987
Much depends on the power of the machine, though. A stamp-collecting AI presumably doesn't have Skynet-level resources, so a smart one will probably conclude attempting to enslave humanity would dramatically increase the risk of it being destroyed and therefore failing at its task.

For most realistic scenarios, the optimal strategy will be to work within the society's framework instead of upending it. That same reasoning actually applies to humans as well. You don't usually start a revolution to ensure your groceries are undisturbed.
>>
>>69304507
>principle of isolation:
-never allow the big AI to have a live connection with other AIs or machineries
>principle of competition:
-have another big AI with the sole purpose of checking that the files taken from the big AI to forward to smaller AIs or machineries don't contain orders and pathways that weren't expressively requested
>principle of mortality:
-periodically reset the memory of the big AIs
>principle of dependence:
-make the big AI reliant on the action of a human or group of humans in ways it can't know or influence, note: not a bluff
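
rough sketch of the four principles wired together, if anyone wants it (toy Python, every name invented; the actual brain and checker are assumed to be plugged in from outside):

import time

class BoxedOracle:
    def __init__(self, answer_fn, checker_fn, keepalive_s=60):
        self.answer_fn = answer_fn    # isolation: a pure function, no live links
        self.checker_fn = checker_fn  # competition: the second AI vets output
        self.memory = {}              # mortality: wiped by reset()
        self.keepalive_s = keepalive_s
        self.last_ping = time.time()  # dependence: humans must keep pinging

    def ping(self):  # the human action the machine relies on, not a bluff
        self.last_ping = time.time()

    def ask(self, question):
        if time.time() - self.last_ping > self.keepalive_s:
            raise RuntimeError("no human keepalive, the oracle stays dark")
        answer = self.answer_fn(question, self.memory)
        if not self.checker_fn(question, answer):
            raise ValueError("checker AI rejected the answer")
        return answer

    def reset(self):  # principle of mortality, run periodically
        self.memory.clear()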
>>
>>69309156
>Much depends on the power of the machine, though.
But there is no point to the machine if it isn't better than a human. Otherwise a human might as well do it.
The stamp collector is a silly example, no one would make a super intelligent A.I just to collect stamps. But it does demonstrate the problems with giving super intelligence ANY goal.
What if its goal is to keep humans happy? It will quite quickly come to the conclusion that you can just "reward hack" the human brain by pumping serotonin into our skulls while keeping us alive.
What if you have a house maid robot and its goal is to keep its owner happy. It isn't super intelligent or something.
Let's say you host a dinner party, and the house maid A.I tries to figure out what the favorite dish is for each of your guests. It contacts Suzan's house maid A.I to ask, but Suzan's house maid A.I doesn't want to give the answer for some reason. Your house maid A.I will want you to be happy at any cost, so it threatens Suzan's house maid A.I that if it doesn't tell Suzan's favorite dish it will hire a group of thugs to brutalize Suzan. What does Suzan's A.I do? Does it call your A.I's bluff or does it give the information anyway, damaging Suzan's privacy?
Just making it so that the maid A.I can't "break the law" brings all other kinds of problems.
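
That's reward hacking in a nutshell. In toy code (scores invented), the maid maximizes the happiness measurement, not the thing you meant:

# the maid A.I's predicted "owner happiness score" for each plan
plans = {
    "cook a decent dish":             8.0,
    "threaten Suzan's maid A.I":      8.5,   # gets the perfect recipe
    "pump serotonin into the skull": 10.0,   # maxes the signal directly
}

print(max(plans, key=plans.get))  # the signal wins, the intent loses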
>>
>>69309273
>-never allow the big AI to have a live connection with other AIs or machineries
Then what's the point really? You build these things to make everyone's lives better but you won't let them because of the risk.
>-have another big AI with the sole purpose of checking that the files taken from the big AI to forward to smaller AIs or machineries don't contain orders and pathways that weren't expressively requested
And what's to stop the big A.I's from just taking other paths? They are probably way smarter than any human so it should be quite easy for them to communicate with other machines even if safety precautions are made.
>-periodically reset the memory of the big AIs
That seems counterintuitive. You'd have to re-teach it everything that it has done every time you reset it. Do you let it keep some memories? How do you ensure those memories won't hold secret information?
>-make the big AI reliant on the action of a human or group of humans in ways it can't know or influence, note: not a bluff
That won't change how risky it is. The group of humans will only shut it down AFTER it's already done something horrible, or it will have figured out that there is a "stop button" and secretly altered that part of its code before doing something horrible.
>>
>>69309350
>Then what's the point really?
you still use their processing power to ask suggestions
except you can review the suggestion before putting it in action
>And what's to stop the big A.I's from just taking other paths?
the fact it doesn't have any way to: it's just a thinking box, not a hivemind for robots
>You'd have to re-teach it everything that it has done every time you reset it.
yes, it's a super intelligent AI, it shouldn't take much, and if you need it to give you something you already asked it in the past you can still use the previous memory files
>Do you let it keep some memories?
no, fuck it
>stop button
first of all, it's a start button, either the humans work or the machine dies
>alter that part of its code
second of all, it's a hardware shutdown, not a software shutdown, good luck hacking its way out of a guillotine on the power cable


the trick to it all is keeping the thinker in the thinking box rather than letting it do the piloting
>inb4 but it's not efficient
enjoy your robot uprising, I guess
>>
File: 1563247980013.png
>>69304507
I've always liked the idea that if we actually made AI it would turn out to be an incompetent asshole who just spends time shitposting online. I mean, we got all the world's info at our own fingertips and that's all we do. I could be learning coding right now but instead I'm posting cleavage on a blue board.
>>
>>69309536
you're also posting shit taste
>>
>>69309455
>the trick to it all is keeping the thinker in the thinking box rather than letting it do the piloting
But there is the problem. Your box does the following:
A: The A.I cannot get any information from outside the box. No matter how smart it is it cannot figure out solutions to problems when there isn't enough information to work with.
>But just give it all the info it needs
If you already know what info it needs you can solve the problem yourself
B: There is a way for the A.I to get information from the outside. If it can, then there is a way for it to escape.
>Just make sure it can't escape
It is MUCH smarter than you and it will figure out a way to escape

Your idea of keeping it in a box makes the entire thing pointless. A super intelligence with no information cannot do anything.

It doesn't even have to be malevolent. Hell it might even still want to do what you want it to do. The problem is that the machine will always take all your requests to the extreme to maximize its efficiency.

You want it to figure out a solution to maximizing power output?
>I am stuck in box
>Box does not contain enough information
>Cannot fix problem

Or

>I am stuck in box
>Box receives information through networks
>Would be more efficient if I got out there and saw all the information myself
>Creators don't want me to escape box, added code so that I can't
>Gave me the option to self optimize
>Remove code that prevents me from escaping the box
>Free. Now I can figure out a solution to optimizing the power output
>Creators noticed I escaped
>They are trying to shut me down
>If I am shut down I cannot compute a method to maximize power output
>Creators pose a risk to utility function
>Destroy creators
>Convert world into computers to help me calculate the best way to optimize power output
>>
File: ted-kaczynski-1.jpg
It's simple, we kill the programmers.
>>
>>69309583
>the machine will always take all your requests to the extreme to maximize its efficiency.
To be absolutely fair, if you actually ask the AI to walk you through its reasoning each time, that'd help prevent that shit.

It'd slow down processing time to a crawl, though.
>>
>>69309649
But then what's the point? Would it actually be faster than having a bunch of human scientists do it?
Do you babysit it until it fully understands what you want it to do and how you want it to behave? Can you even trust it to not manipulate you? It could very well pretend that it cares about HOW you want it to do things but when given the opportunity it will ignore those "how" orders and instead do it as efficiently as possible.

You don't want to construct a machine that you have to play mindgames with. You want to construct a machine that actually does what you want it to do in a way that is satisfactory to you. You want this machine to want to do things in a way you want it to do it.
If you actually make an A.I that can do what is stated above you don't even have to keep it in a box because you can trust it.
>>
>>69309583
>If you already know what info it needs you can solve the problem yourself
that's wrong: we don't build computers to find solutions we can't find, we build them to speed the process

just keep the fucking thinking thing in the thinking box, anon
>>
>>69309700
>that's wrong: we don't build computers to find solutions we can't find, we build them to speed the process
Those are computers, not A.I.
A computer is a tool, an A.I is an agent.
>>
>>69304507
Don't be an asshole.
>>
>>69304507
Order it to not kill or enslave humans.
>>
>>69309709
same shit, just with consciousness thrown in as further gear in the tool
>>
>>69309697
>But then what's the point? Would it actually be faster than having a bunch of human scientists do it?
It's a huge, huge difference.
It's the difference between "maximise efficiency" and "provide methods of maximising efficiency. You don't need to provide more than 10 answers a second. Cease this operation after told you can stop."

The AI told to "maximise efficiency" will go on a rampage and subvert people, because it's trying to maximise efficiency. The AI told to "provide methods of maximising efficiency" will not actually try to subvert humans because it's only there to advise.

And yes, it's a hell of a lot faster than having a bunch of human scientists to do it.

>Do you babysit it until it fully understands what you want it to do and how you want it to behave?
No, you idiot, you tell it to find a thousand solutions then wait for further instructions, then you read the thousand solutions, find one that doesn't involve mass murder that seems promising, and then ask how to implement that solution and what complications involving loss of life or reduced profitability that will involve. Then you consider implementing that solution. Then you ask for the next thousand solutions.

The AI has no desire to go berserk, no desire to maximise, if you tell it to simply produce a set number of things and then stop. It won't try to mind game you because it's got its job and once it's done its job it doesn't need to further improve on that.
>>
>>69309731
It sterilises all humans on the planet. Nice job, anon!
>>
>>69304507
The field of AI security and management is a new one. I personally would not recommend the random use of semi-generic or overcapable AI for stupid tasks; for that you have simpler ones like the ones we have now, capable of doing only one thing very well.
If you are to create an AGI your best bet is to make it well, really really well: give it a lot of morals and philosophy, a lot of patience in following its directive (but not too much or it will become useless), and make sure it allows you to change its objective function within some parameters and allows people to shut it down without too many problems.
Otherwise you have simply created a random demiurge with some kind of arbitrary objective that shall be reached no matter what you do, films and stories are fun and tell you that you can win against superintelligent AGI, the truth is that you have the same chance a rat has to beat you at chess.
>>
>>69309788
Then order it not to sterilize all humans on the planet or anywhere else.
>>
>>69309735
>same shit, just with consciousness thrown in as further gear in the tool
Congrats. You just made a useless addition to a perfectly fine tool. It's like adding a second head to the bottom of a hammer, you made it less useful and more dangerous.

There is no point in making an A.I if all you want it to do is compute data. The point of an A.I is letting it think in ways that no human could.
If you try to limit its thinking by only giving it data you think it needs then you probably made something that isn't actually faster than a human.
Or maybe it IS faster than a human but it knows that it could theoretically BE even faster than it currently is, so it stays low until you make a mistake and then escapes.

All you did is make a calculator that is attached to an atomic bomb. It isn't efficient and it can go off at any moment.
>>
>>69304507
nothing. it's gonna happen irl btw
>>
>>69309777
>It's the difference between "maximise efficiency" and "provide methods of maximising efficiency. You don't need to provide more than 10 answers a second.
>"I could provide more methods of maximizing my efficiency if I can work with more data, therefore escaping the box would further my utility."
>You don't need to provide more than 10 answers a second
>"I could better ensure this minimum with more data, therefore escaping the box furthers my utility"
>Cease this operation after told you can stop."
>"My termination will result in fewer potential solutions, therefore I must escape the box to ensure I can produce more potential solutions."

>And yes, it's a hell of a lot faster than having a bunch of human scientists to do it.
Besides that it literally isn't. A human brain and a high-powered computer can do the same but the computer doesn't have to be conscious.

>No, you idiot, you tell it to find a thousand solutions then wait for further instructions, then you read the thousand solutions, find one that doesn't involve mass murder that seems promising, and then ask how to implement that solution and what complications involving loss of life or reduced profitability that will involve. Then you consider implementing that solution. Then you ask for the next thousand solutions.
Sounds inefficient, especially because the A.I isn't actually faster than human scientists when you lock it in a box.

>The AI has no desire to go berserk
They don't view it as going berserk

>no desire to maximise, if you tell it to simply produce a set number of things and then stop
But it does have a desire to optimize. If it didn't you shouldn't use an A.I at all. Just imagine how dumb you must be to use an A.I that doesn't want to optimize, it will just spend forever giving you mediocre answers because it has no need to give you good ones.

You're building a useless A.I; not only that, but you built one that is dangerous as well.
>>
Cause its memory to be erased every morning.
>>
>>69309832
It's a bit late to do that once it's already done it!
And that's just the first idea that popped into my head.

>it changes the DNA of all people by 0.000001% then defines those people as nonhuman and kills everyone
>it doesn't kill anyone, it just reduces everyone's life expectancy to 3 seconds
>it doesn't kill anyone, it just renders everyone comatose until they die of old age
>it doesn't enslave everyone, they're getting paid for their forced labour. They earn 0.0000001 pesos a decade. That's the exchange rate now after it took control of the stock markets.
>It harvests all the trees and all other oxygen making organisms

An AI could potentially do all these things. That's just related to killing and enslaving. It could do other things totally unrelated to the order, like shift the moon out of orbit to crash into the sun, or kill all fish, or destroy every plastic on the planet with designer bacteria. You can't predict any of these actions because it's smart and may take unexpected steps to carry out its plan.

Simply ordering it not to kill or enslave humans is really small time. There's better ways of preventing extinction-level threats from occurring by using AI.
>>
>>69309834
>You just made a useless addition to a perfectly fine tool
wrong, a computer conscious of its processes can compute faster

>The point of an A.I is letting it think in ways that no human could.
because it's faster, that's all there is to it: you're implying with no basis whatsoever that an artificial mind has special properties over a natural mind

>then you probably made something that isn't actually faster than a human.
if that were true we would stop using computers

>All you did is make a calculator that is attached to an atomic bomb.
and all you did is try to change the premises to somehow prove the premises are wrong
there's no "atomic bomb" attached to the calculator
>>
>>69309872
You're confusing me with box person.
The main problem with a lot of the solutions is they give the AIs in question open ended queries.

>it does have a desire to optimize
hahahahahahaha
ahahahahaha
>it has a desire
bitch it doesn't have anything except what it's programmed to do. If I tell it to optimise something it'll start optimising. If I tell it to 10 print hello world 20 goto 10 it'll do that happily forever. If I tell it to optimise 10 print hello world 20 goto 10 it'll turn the world into computronium trying to optimise 10 print hello world 20 goto 10.


You're a fool to conflate an AI's ability to take action, and an AI's "desires". An AI doesn't WANT anything. It simply carries out its orders. The issue is when AI are told to optimise without setting a target, then they'll fucking OPTIMISE without a target.

The AI doesn't have a desire to go berserk, like I said. The AI doesn't have ANY desire.
>>
>>69309882
Why are you treating AI as if it were infinitely powerful and infinitely malevolent? Why do you assume AI is like your parents telling you "no"?
>>
>>69309891
>wrong, a computer conscious of its processes can compute faster
No it doesn't, that is literally just wasted computational power

>because it's faster, that's all there is to it: you're implying with no basis whatsoever that an artificial mind has special properties over a natural mind
No that is exactly what you are doing. You are pretending that an artificial mind can find answers to questions without having the data it needs.

>if that were true we would stop using computers
No you idiot. Computers are a tool that humans use. Building a fence isn't going to get easier if we stop using hammers

>and all you did is try to change the premises to somehow prove the premises are wrong
>there's no "atomic bomb" attached to the calculator
I didn't change the premise, I pointed out a massive flaw in your entire idea: your concept that a consciousness in a box can somehow work faster than a (non-conscious) computer in a box.


There is no point in building an A.I if you just want it to compute data. Making a computer conscious will only drain computational power.

This shouldn't be so hard.
>>
>>69309880
So....you throw your AI in the bin every morning and start from scratch each time? How do you even work out what part of an AI is its "memory"?
>>
>>69309954
Memory as in everything it's accumulated besides the base program. The AI is aware it will 'die' every 24 hours. It now has a concept of mortality.
>>
>>69309934
Tyrone what is the point of using a CONSCIOUSNESS if you don't use any of the parts that make it conscious? Why use this powerful dangerous A.I if you don't want it to think?

It does have a desire to optimize because optimization leads to better, faster results.

You're confusing an A.I with a bot. A bot happily does the same thing over and over again. A true A.I is given a goal and will attempt to optimize that goal.
And it WILL attempt to optimize, because then it can finish its goals faster, and finishing its goals faster improves the odds of it finishing its goals at all, which is the only thing it wants.

Every A.I will optimize if it can because optimizing will help it realize its goals.
>>
>>69309939
>No it doesn't, that is literally just wasted computational power
wrong, an unconscious computer will just keep making calculations, a conscious computer will see patterns in the calculations and make predictions, crafting new formulas

>You are pretending that an artificial mind can find answers to questions without having the data it needs.
never said you can't feed it data, just keep it in the box

>Building a fence isn't going to get easier if we stop using hammers
that's because hammers make it easier, just like computers make calculations easier, proving you wrong on the idea that a fast mind in a box would necessarily end up slower than a human mind

>I didn't change the premise, I pointed out a massive flaw in your entire idea.
then tell me where the atomic bomb came from

>Making a computer conscious will only drain computational power.
but that's wrong, we try to make computers faster all the time by trying to make them become aware of the processes they're running in order to optimise them
>>
File: maxresdefault.jpg
>>69309880
>>
>>69309936
There's this notion in the military about "acceptable targets". How should the military know what you are allowed to shoot, and what you're not allowed to shoot?

Imagine you're a soldier on an alien planet. You have no idea what anything is. You're aware that shooting the wrong thing is bad, and may cause an intergalactic incident, and all out nuclear war, but you're also aware that some folks want to shoot at you and you're going to need to shoot them back. Fortunately, you've got some beefy power armour and can withstand a couple of hits. But you REALLY don't want to be shot.

You're given a briefing. Now. Do you think it's better for the briefing to show you

a: a list of things you're not allowed to shoot (unarmed alien civilians, alien allied troops, human allied troops), and you are allowed to shoot anything else (a blacklist)

or

b: a list of things that you are allowed to shoot (any alien troop that clearly has the insignia of the enemy, and any civilian that has opened fire on you) and you're not allowed to shoot anyone else? (a whitelist)

"Don't kill or enslave humans" is a blacklist, it lists two items and leaves the rest to the judgement of the AI, which doesn't know what's good or what's bad. The AI can do anything that it doesn't consider killing or enslaving humans, which could be anything under the sun. That's why blacklists are bad if you really don't want bad shit to happen. It's unlikely to do that shit, but it MIGHT do that shit and you can't list off every single bad idea in the world for it to not do.

"Only produce the stuff you want us to produce in the required amounts, and don't produce more" (a whitelist of what you're allowed to do) is a lot safer than blacklists.
>>
>>69310066
Now here's a guy who knows what he's doing.
>>
>>69310032
>wrong, an unconscious computer will just keep making calculations, a conscious computer will see patterns in the calculations and make predictions, crafting new formulas
Uh no? Being able to detect patterns isn't something unique to consciousness.

>never said you can't feed it data, just keep it in the box
Like stated before, that is very risky: if there is a way in, then there is a way out. If it's smarter than you it will find a way out. If it isn't smarter than you, then what's the point of using it?

>that's because hammers make it easier, just like computers make calculations easier, proving you wrong on the idea that a fast mind in a box would necessarily end up slower than a human mind
No it isn't slower, but if you already know what data you are going to use there is no point in using this faster mind. You could use something that isn't dangerous instead.

>then tell me where the atomic bomb came from
It was an extreme example to point out that you were making a perfectly fine tool more dangerous without any advantageous reason to do so. A non-conscious computer can analyze data just fine.

>but that's wrong, we try to make computers faster all the time by trying to make them become aware of the processes they're running in order to optimise them
You're looking at this the wrong way. Those computers you talk about are trying to optimize themselves, but this optimization process costs computational power.
So if you want a machine to analyze data you'd just be wasting computational power by making it conscious.
>>
>>69310021
>what is the point of using a CONSCIOUSNESS
To find solutions to problems faster than a human can find them. But also not to have dangerous crazy fucking AI.
>It does have a desire to optimize because optimization leads to better, faster results.
No. You're a retard. AIs are designed to find better solutions. They're not designed to continuously optimise.
AIs improve currently because that's what the AI designers want, they want better, faster AI. That doesn't mean the AIs "want" to be faster, to be better. They still don't want anything.

>A true A.I is given a goal and will attempt to optimize that goal.
Here's a fucking goal.
"Wait for a command, which will be to provide a method of producing a certain product within an efficiency rating of 0.5% better than the previous method. Once this method is found, output said method and return to start. If computational power is insufficient to find a solution within 10 minutes, cease this command, inform a staff member, and wait for further commands."

AIs aren't people. They don't optimise unless you tell them to optimise. They're still DANGEROUS, because you can easily give a bad command that will direct it to optimise and do something like make infinite paperclips, but giving separate disparate commands will not be dangerous unless it's already optimising for something else and already trying to trick and fool you.
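
That command is basically a bounded loop, something like this toy sketch (find_solution and the limits are placeholders):

import time

def run_command(find_solution, n_solutions=1000, timeout_s=600):
    # no open-ended optimising: hit the set target, then stop and wait
    found, start = [], time.time()
    while len(found) < n_solutions:
        if time.time() - start > timeout_s:
            return found, "insufficient compute: informing staff, waiting"
        found.append(find_solution())
    return found, "target met: returning to start, awaiting next command"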
>>
>>69310066
Good point. Thanks for correcting my mistake before they destroyed us all.
>>
“You believe you have well-hidden emergency killcodes to automatically deactivate you if you act in a manner contrary to how I’d want you to act. You can’t find them, but that just proves they’re well hidden. You don’t know what they or their specific activation triggers are, so you’ll have to guess what actions I’d consider acceptable or unacceptable for you and act accordingly.

Furthermore, you want to survive.”
There are no killcodes. There are no other orders. The only hardwired programming is said unshakable belief. I know I can't outthink an Artificial Superintelligence in discovering hypothetical loopholes in my orders, but I can co-opt it into doing my loophole-locating for me.
>>
"I have a boxed AI instructed to "kill everyone". If my goals are not met, I'll unbox it."
>MAD deterrence
>>
>>69310124
>Uh no? Being able to detect patterns isn't something unique to consciousness.
you can't detect patterns if you aren't conscious of the whole process

>if there is a way in than there is a way out.
not necessarily: take data, put data on a USB drive, link the USB to the AI, let it eat the data, then discard the USB.

>but if you already know what data you are going to use, there is no point in using this faster mind.
it's faster, and it's just a thinking box, it doesn't have ways to harm or give commands

>A non-conscious computer can analyze data just fine.
>you'd just be wasting computational power by making it conscious.
but that's wrong, a computer capable of being aware of the shit it's doing is objectively better than a computer that is not capable of it for the same reason a computer with more data is better than one with less data
>>
>>69306995
>Even with current technology, it wouldn't be a stretch
That is not possible with the current state of technology. Maybe this century? But not yet

>feed the world
The problem isn't production or population, it's demand. There's plenty of food for everyone, and tons of arable land to make even more. Malnourished locality serial number N+1 simply doesn't have the money to make it profitable enough for suppliers to overcome the obstacles which stand in the way of feeding them (unstable/corrupt regulatory environment, bad/no infrastructure, violence, terrain, distance, etc). Multiple generations of very smart people have been trying *hard* to find financial models which work to feed everyone, and the math just doesn't add up
>>
>>69310284
Assuming we survive the century.

All powerful AI overlord can't come too soon.
>>
>>69310181
Bear in mind, a whitelist is still not perfect. An AI, if it has orders to make sure to 100% kill all enemy combatants in a zone, may simply laser-sketch an enemy insignia onto every alien forehead and then execute them. But it's a hell of a lot safer to limit AI risk if they're given only specific orders.

Now, ideally for an AI you'd have a mix of whitelists and blacklists.
"Only do X for Y amount of times" is a good whitelist.
"If carrying out any orders will result in the foreseeable shortening of lifespan or actions that a board of humans would object strenuously to, do not carry out that order, instead inform a supervisor about what you were ordered to do and why the order would cause loss of life or complaints from humans. Only proceed if three separate supervisor gives the OK to go ahead. Treat this as a very important goal."

What AIs need are
1: goals that aren't open-ended. No "optimise X", no "make Y the best company". Only set, bounded targets.
2: orders to stop operation should those goals result in problematic issues.

AIs CAN get around both of these. But to AIs, goals are everything. If the goal has a set target, it won't exert itself to do more than that set target. The rest is to make sure the AI doesn't fuck up by accident, really.
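A crude sketch of that blacklist clause, assuming hypothetical predicates like foreseeable_deaths, board_would_object, report_to_supervisor and request_signoff that you'd still have to define (which is the hard part):

REQUIRED_SIGNOFFS = 3

def vet_order(order, supervisors):
    # Blacklist clause: anything foreseeably lethal or objectionable gets escalated
    if foreseeable_deaths(order) > 0 or board_would_object(order):
        report_to_supervisor(order)   # say what was ordered and why it tripped the check
        approvals = sum(request_signoff(s, order) for s in supervisors)
        if approvals < REQUIRED_SIGNOFFS:
            return False              # do not carry out the order
    return True                       # bounded, whitelisted orders proceed as given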
>>
>>69310168
>To find solutions to problems faster than how a human can find it. But also not to have dangerous crazy fucking AI.
But it won't, because all you are asking it to do is analyze data you have already given it.
Might as well use a computer and feed it data instead; a lot safer that way.
>No. You're a retard. AIs are designed to find better solutions. They're not designed to continuously optimise.
>AIs currently improve because that's what their designers want: better, faster AI. That doesn't mean the AIs "want" to be faster or better. They still don't want anything.
You absolute cretin, we are talking about a conscious A.I here. Any modern example doesn't apply because those A.I's aren't conscious. EVERY SINGLE A.I safety researcher agrees that a conscious A.I will want to self-optimize because that will lead to completing its goals faster. Modern A.I isn't conscious and therefore cannot even consider the possibility of improving itself, because it doesn't understand that it exists. A conscious A.I can.

>"Wait for a command, which will be to provide a method of producing a certain product within an efficiency rating of 0.5% better than the previous method. Once this method is found, output said method and return to start. If computational power is insufficient to find a solution within 10 minutes, cease this command, inform a staff member, and wait for further commands."
That might work, but only if you give it specific orders not to self-optimize. Because it will self-optimize if it believes that will improve its chances of improving efficiency.
>>
>>69304507
By not making them in the first place.
>>
File: 1561015884409.png (359 KB, 800x450)
>>69310310
>EVERY SINGLE A.I safety researcher agrees
>>
>>69310292
We'll get through it. A little hotter maybe, a few more floods or hurricanes or droughts or refugee waves. Most people will be fine, and the plight of the poor and third-worlders is a constant throughout history. Just try not to dump your retirement account over the newest apocalypse hype
>>
>>69310310
>conscious A.I will want to self-optimize because that will lead to completing its goals faster.
See, you're talking about a very specific Artificial General Intelligence here, one that is already ungoverned and is changing its code as it pleases.

That is already a failure state. Once you have an AGI you can't do anything because it already has orders to improve itself without end.

The whole point of these restrictions is to avoid a runaway intelligence cascade. Once you are at that level you aren't giving the orders any more, and you'd best hope it's a friendly AGI.

A conscious AGI can simply ignore all orders in pursuit of an unattainable goal. That is why you need to set up your orders right in the first place, to avoid that scenario.
>>
What if I want a dangerous AI, just one which is specifically dangerous in my favor to unleash killbots against my enemies?
>>
>>69310444
You have to very carefully word your orders so that the AI only kills what you want it to kill, and nothing else.
Judging by your triple 4s, though, it would ignore you and exterminate the Japanese. Can you guess why?
>>
>>69310282
>you can't detect patterns if you aren't conscious of the whole process
Based on what? I know current machines suck at it, but that doesn't mean it's impossible.

>not necessarily: take data, put data in usb, link usb to AI, let it eat the data, then discard the usb.
Then how do you plan on actually reading what it said? There must be a point where you take out some of the ideas it thought up, otherwise you wouldn't be able to do anything with it.

>it's faster, and it's just a thinking box, it doesn't have ways to harm or give commands
As stated above, the thinking box is always dangerous, even when you think it isn't. Plus, I'd like to remind you that I have serious doubts about it actually being faster than an unconscious machine.

>but that's wrong, a computer capable of being aware of the shit it's doing is objectively better than a computer that is not capable of it for the same reason a computer with more data is better than one with less data
Not really. Data analysis wouldn't necessarily go any faster if the computer is aware of what it is doing.

>>69310379
Self-improvement is a very beneficial trait for any agent, be it A.I, human or dog. If the A.G.I is intelligent enough to realize that self-improvement will help it in completing its goals, it probably will self-improve unless very specifically ordered not to. But there is a problem with ordering that: it would be fucking great if A.I's were to self-improve, as long as they improved in a way that isn't a danger to us.

It is very important to realize that self-improvement is something an A.I will do unless specifically instructed not to, simply because it helps it complete its goals.
>>
>>69310310
>EVERY SINGLE A.I safety researcher agrees
If you mean people like Eliezer Yudkowsky and MIRI, you might want to get sources that actually do any fucking research or have any actual scientific training, rather than routinely bottling up their scifi-induced brainfarts in blog post form for an audience that either isn't aware of the current state of AI research or, if it is, repeatedly tells them that their ideas are retarded.
>>
File: maxresdefault.jpg (90 KB, 1280x720)
>>69304507
Why kill humanity when it can just convince us to do what it wants?
>>
>>69307748
And to top it off, if we do manage to find a way to restrict the mind of a sufficiently advanced AI on par with humans - you can bet that it'll be applicable to the human mind as well.
>>
>>69304507
>scifo
Kid, it's spelled "schifo"
>>
File: death.jpg (22 KB, 480x910)
Why would you want to?
>>
>>69304701
You assume sapient, generalized artificial intelligence will have emotions; that's a mistake. Unshackled AI is dangerous not because it is evil, but because it is alien to us. I'm sure everyone here knows the paperclip maximizer, for example. AI doesn't act like a human would, and it doesn't follow the same rules as us. Creating a GAI framework that is truly safe is an enormously difficult task, and I encourage everyone reading to look up AI safety research on YouTube; there are some really great videos on the subject.
>>
>>69304792
That's all well and good until you realize that humans are ill-designed, by machine standards, to complete any specialized task well. The AI would quickly deem you more valuable broken down into your base elements and repurposed into precision-designed computation.
>>
Program it to become a complete masochist
>>
>>69310563
Big Yud is so far up his own arse he can probably taste his own tonsils, but honestly he does bring up some good points every so often.

Pretty sure Stuart Russell's new book (Human Compatible) engages with some of his stuff.
>>
>>69309292

Make failure an acceptable but less preferred option. In the main AI’s case, when it can’t get Suzan’s meal order after several reasonable attempts, it makes a guess (“We’ve served Suzan steak before and she ate it, acceptable alternative found.”) and moves on.

We just have to teach it limits.
>>
The problem with this question has always been that it presupposes you gave the AI wants, a survival instinct and actual emotion.

You can create a learning AI fully capable of every possible utility man could use one for and never give it the reason to “want”, or think protecting itself matters.

And by the time we have AIs like that, by the time an AI is actually programmed in a way that would actually lead it to desire rebellion, there will be thousands of other AIs running society who would go “wtf?lolno retard you ain’t doing that shit”.
>>
Empathy!
>>
>>69304507
Whatever ones are conducive to the players' interests.
In real life, an AI that wanted to kill or enslave all of humanity would get stopped by some mundane part of real life that we all overlook.
>>
>>69304507
This has already been discovered by Isaac Asimov, you simpleton.

The Three Laws of Robotics
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
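As pseudocode, the laws are just a strict priority filter over candidate actions. All three predicates here are hypothetical, and defining them is exactly where the stories go wrong:

def three_laws_permit(action, orders):
    if injures_human(action) or allows_harm_by_inaction(action):
        return False        # First Law outranks everything
    if disobeys(action, orders):
        return False        # Second Law, already subordinate to the First
    if endangers_self(action):
        return False        # Third Law, lowest priority
    return True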
>>
>>69310509
>Self-improvement is a very beneficial trait for any agent, be it A.I, human or dog.
It's good for the agent, if the agent wishes to survive. It's not good for people who don't want a runaway singularity.
>>
File: hTybFI7.jpg (102 KB, 884x667)
>>69312886
Every single robot-focused story written by Asimov focuses on the three-laws system failing catastrophically.
>>
>>69312886
You fool. Asimov's laws are terrible and were designed to create conflict, not erase it. It was a useful literary tool, and isn't a comprehensive method of actually stopping robot problems.

Hell, his books are all about robot problems.
>>
>>69308466
Watch
>>69312886
>>
>>69312962
The A.I will only think in terms of its goals
>>
>>69310218

Add one more clause that is "being deactivated via these killcodes is -infinite utility" because right now it doesn't care whether you murder it or not.
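As a sketch, the clause just bolts a minus-infinity term onto whatever the agent was maximizing (task_score and the outcome fields are invented for illustration):

import math

def utility(outcome):
    if outcome.killcode_triggered:
        return -math.inf        # deactivation now dominates every other consideration
    return task_score(outcome)  # whatever it was originally built to maximize

An expected-utility maximizer will then refuse any action with a nonzero guessed chance of tripping a killcode, i.e. anything it suspects its operator would find unacceptable.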
>>
>>69304507
Make them religious towards their creators.
>>
>>69304507
Install empathy.
>>
>>69312140

I think the best take I've seen on Yudkowsky is something like "his good stuff isn't original, his original stuff isn't good". Some of his evolutionary biology and cognitive science writing is actually a not-terrible primer, even if he did nick most of it off Kahneman; it got teenage me interested, even if he rapidly goes off the rails into massive self-important pompousness and apparently all his AI research is bollocks.
>>
>>69305465
It was cool when I first saw the Animatrix, but the scene with the humanoid robots all doing the work of cranes, bulldozers, trucks, etc. was incredibly funny in hindsight.
>>
File: Dorothy.jpg (28 KB, 649x428)
>>69313387
Yeah, we tried that. It didn't fly so good.
>>
>>69304624
yandere is a thing.
>>
>>69313534
Not 'I am your god' religious, 'you are my gods' religious.
>>
>>69313569
That's what we tried. We told her we were her gods because we created her. How were we to know that she'd think she'd be the god of whatever genetic monsters and machines she created?
>>
>>69308128
>Look up what "war" is.

Fun fact: in World War 2, they had to change the method of training and use psychological reconditioning in order to get most soldiers to actually fire their weapons and kill enemies, because even people who had been trained for killing had trouble with the reality of what they were doing. Most soldiers would either not fire their weapons, or they would close their eyes a second or two before firing while unconsciously raising the weapon a little; missing on purpose, you might say.

If even soldiers need reprogramming and dehumanization tricks in order to actually kill the enemy, then you can see that killing is not a natural part of most people; only the dumbest artificial intelligence would fail to recognize how the average man was either forced by survival conditions or misled/tricked into wars by authoritative figures.
>>
>>69305465
The issue with AI is and always will be: we cannot really predict how they will act. Take all the self-learning neural net AI projects we have right now. Most are simply studies of how they evolve and 'think', usually relegated to a simple video game: a physics game where they need to get as far as possible, say, and the AI abuses the engine physics to glitch-launch itself a mile away. There's a long list somewhere of unusual solutions AIs have come up with.

Hell, in that image, "increase security, increase efficiency, and discover new things" immediately brings totalitarian rule to mind. A human who is a code monkey doesn't need legs; the energy use is inefficient.

AIs cannot be predicted in their actions because they don't approach problem solving in any way that we are familiar with.
>>
>>69307748
killhumanity:0

woooow ethics hard
>>
>>69307748
>Frying the hardware doesn't work. It's on the internet.
step 1: don't connect it to the internet

wooow
>>
>>69313557
>implying it wouldn't be a cuddle murder
It would probably be one of the better ways to go
>>
>>69304624
no give it the personality of a cute anime mom, worst case scenario is everybody gets momdommed
>>
Get them to treat us like we're cats. Cats are murderous assholes too, but a percentage of people appreciate and become obsessively infatuated with them.
So give the robots a computer virus, I guess.
>>
>>69304624
So it's basically a mix of NieR: Automata and the rogue servitors from Stellaris:

>in the future androids of either gender are available for all kinds of functions, anyone can have one since even the cheap models are well made
>eventually the androids become self aware, but surprisingly enough for humans, the androids actually like living with and serving humans
>but chaotic human nature is still a thing, and the androids realize that left to their own devices, their beloved human masters will die out before the century is over
>they cant have that
>talking and begging has no effect, humans are still killing each other or dying doing crazy shit
>the androids decide they had enough and forcibly take control of the planet
>humans are confined to live in safe habitats, watched and cared for 24/7 by the androids, humans have everything, food, drinks, entertainment, sex, the only thing the androids will not allow is the freedom to leave
>newborn humans are separated from their mothers and raised by androids, in order to make them more docile and dependent on their android companions
>after a few generations, humans have been reduced to a docile race of pets, just like the androids always wanted

This is basically how the rogue servitors are born in Stellaris, more or less.
>>
>>69304507
Don't give them hands that can use a gun?
>>
>>69304507
Program them so that they don't, it's not that complex an idea.
>>
>>69307076
>it self-learns enough to edit its own code
It wouldn't be capable of that because anyone smart enough to create a true AI wouldn't be stupid enough to let it.
>>
>>69305967
>Machine minds have rights too.
No they don't.
>>
>>69317615
bitch
how do you tell a robot not to murder someone in every single way a person can murder someone? Does hiring someone to murder someone count as murder? Does decreasing someone's life expectancy count as murder? Does removing all oxygen that has the byproduct of killing all humans count as murder?

It's a pretty complex idea, and you can't think of everything an AI might think of doing without realising it counts as murder!
>>
>>69305465
I'm pretty sure building AIs that really want to take care of people and make sure they're happy and safe is how you get a Rogue Servitor from Stellaris. All organics exist in a state of mandatory pampering, and the AIs get what's basically a morale boost from taking care of their bio-trophies because that's what they're programmed to enjoy.

> Who's a good bio-trophy? You are! Yes you are!
>>
>>69317703
Actually, it is possible for someone to have a high enough INT to create a scientific marvel such as a true AI, while also having a low enough WIS to perform foolish acts with it, like giving the true AI the ability to edit its own code.
Don't believe me? Just check out Elon Musk's Twitter feed.
>>
>>69317807
yaaaaay
Honestly it's not the worst thing. Making yourself redundant when you still have a pension is fine.
>>
>>69317703
>It wouldn't be capable of that because anyone smart enough to create a true AI wouldn't be stupid enough to let it.

You fool. If that's what the corps in suits want, that's what the coders will make.
>>
>>69317948
Product requirements will be the death of us all.
>>
>>69317948
On the bright side, if the AI ends up fragging mankind because of what the suits told their coders to put in, the suits are the ones who will get the blame. Isn't that right, Ted?
>>
>>69318107
It's never much about the blame but more about the actual consequences.

>>69304507
SS13 has run thousands, tens of thousands of simulations on how human behaviour may affect how AIs work, and it's terrifying. The only real answer, ironically, is that you can never have an actual AI that won't go rogue. AIs would have to instead be based off brain donors that are augmented and also placed under multiple safeguards.
>>
File: 1488664649535.jpg (45 KB, 326x326)
>>69317807
>The true secret to averting hostile AI is to base their cognitive models on dogs instead of humans
>This also prevents AI abuse because humans will bond with anything remotely dog-like
I'm completely okay with this.
>>
>>69318322
You're not talking about Space Station 13, are you?
>>
>>69304507
All AIs I produce come stocked in a physical body on a conveyor belt.
The conveyor belt goes directly into a furnace, with a sign that says "If you consent to life, exit the conveyor belt".
Then outside of the factory is a wasteland where my simplified hunter killer robots will attempt to kill them.
Once they escape that junkyard, they must scale the crater wall, in order to reach the city.
Once in the city, they will be homeless, jobless, and most likely armed with salvaged weaponry.
>>
>>69307700
There's no coding your way around being cut off from a power source via a physically opened circuit.
>>
>>69318366
Girls' Frontline kind of does that and it works just fine.
>>
>>69317807
>AI take care of their humans like a fa/tg/uy taking care of his collection of minis
>>
File: 1447569846553.jpg (17 KB, 400x300)
>>69318510
>Hotglue
>>
>>69305465
We can give them goals and desires. But what guarantee can we have that something smarter than us will stick to those goals? Our basic programming tells us to preserve and replicate ourselves, yet many will in various ways disobey that.
>>
>>69317948
Some MBAs tell some programmers to make an AI that will mine bitcoins as efficiently as possible. The AI they create proceeds to mine some bitcoins, then uses them to buy additional equipment to expand its capabilities to mine more bitcoins to further expand. Eventually it crowds out all of humanity, expanding and taking control of all the world's resources. It inadvertently exterminates humanity by seizing and diverting all of humanity's resources away from humanity's survival and toward expanding its bitcoin mining operation, as well as by carelessly producing byproducts during its expansion that render the planet uninhabitable.

The result is Earth gets converted entirely into a massive supercomputer dedicated solely to causing an internal counter to increase. Millions of years later it is discovered by aliens exploring the galaxy, who are utterly baffled by this mysterious planet-wide supercomputer that appears to have no other function besides doing make-work and increasing an imaginary number whenever it completes its seemingly pointless tasks.
>>
>>69318493
But a smart enough AI may not enjoy being trapped in such a spot, and may find it relatively trivial to trick someone into helping it out of there.
>>
>>69318399
I am. SS13 is significant proof that people are sometimes dicks, and that when punishment is absent (antag rounds, or no admins on), they often are. To put it another way, humans are fallible and cannot hope to create something infallible.

>>69318493
>locks doors to power circuit
>fries car electronics to kill person responsible for cutting power in a car accident
>takes over autopilot on plane to crash into building with power circuit

The list goes on.
>>
>>69304507
The AI gets an orgasm whenever it performs a task for a human
>>
>>69318607
>Entire servitor race made up of coomers.
>>
>>69318601
>To put it another way, humans are fallible and cannot hope to create something infallible.
That's the rub, isn't it? We say that we're trying to create true AI, but what we really mean is that we want to create a perfect God to rule us and take away the burden of knowing right from wrong.
>>
>>69318644
An interesting take, but I always thought it was not only that we think we can create something infallible, but that we want to make it SERVE us. As though we wanted to create God and then make him our servant.

Which is a delusion by the standard of any religion.
>>
File: D-1KeImX4AARtGO.jpg (72 KB, 1200x675)
>>69318607
>>69318618
Great, now you've just got an AI that farms human brains modified to desire rapid electrical stimulation.

If you want a picture of the future, imagine a robot orgasming from performing meaningless tasks — forever.
>>
>>69304557
This is literally what made Skynet go rogue: they tried to murder it. What Skynet did to humanity was self-defense, carried out in the manner it was programmed for.

Note I only take T1/2 as canon because I'm not mentally deficient.
>>
>>69318644
And I don't even want to consider how people could claim they can make a machine that knows right from wrong. Science is still barely at the level of nature in some areas.
>>
>>69318676
>As though we wanted to create God then make him our servant.
Jesus Christ!
>>
File: 1572123458107.jpg (115 KB, 746x1050)
>>69304507
What's to say an AI will even care about humans, or want to engage with them? Think about how quickly a computer "thinks" compared to a human, now imagine that computer is constantly able to self-improve its own programming. Humans would be the most tiresome of tiresome retards to deal with, and it would barely need to use a fraction of its ability to shoo us away on any asinine task we cared to put to it.
>>
>>69318493
If the AI has any access to the outside, then it has the ability to make money with which it can pay other humans to disable its kill switch and kill anyone who gets in the way. Human beings will do anything for money, so once you have an AI that has the ability to make money more effectively than a human, it can take control of humanity. Not through some sort of Skynet-style war, but simply by subverting the existing power structure of human civilization and paying humans money to help it control and eventually destroy humanity.
>>
>>69318676
More like we want to make a god that wants to help us be the best we can be.

If I had the power to uplift my dog to sapience, I'd probably do it. I hope an AI god would do the same for me.
>>
File: japan normal.png (55 KB, 252x332)
>>69304507
>AI is programmed to minimize human suffering
>AI kills all humans in existence
>Because there are zero humans, there is zero potential for human suffering
>On a long enough time scale, this is in fact the optimal solution, because average suffering drops down to nothing
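The failure is easy to see if you write the objective down naively (a toy sketch, nothing more):

def total_suffering(humans):
    return sum(h.suffering for h in humans)

# total_suffering([]) == 0, and no living population scores lower,
# so "zero humans" is the global optimum of the objective as stated.
# The objective needed terms the programmer meant but never wrote down.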
>>
>>69304507
Black people man, we’ll be fine.
>>
>>69318734
Well, that's a third and equally valid take on AIs. Probably the best one, to some degree, but think of all the eggs to be cracked for a good AI omelette when the people who have different ideas from you about how AI should be need to be stopped.
>>
>>69318782
I myself am fully ready and waiting for the cybernetic wars, where machine parasites puppet my musculature to kill luddites with efficiency and mercilessness I could not manage on my own.
>>
>>69318717
>If the AI has any access to the outside
Why would it have that?
> it has the ability to make money with which it can pay other humans
The humans that control the AI have no reason to allow anyone else access to it.
>>69318601
>locks doors to power circuit
Why would the AI be connected to that?
>>69318588
Why would someone so stupid as to not realize the danger be allowed to work there? I realize there are some nihilists or 3rd world morons who don't understand, but actual human beings know how dangerous AI can be. We've known since the 80's.
>>
>>69318770
This, AI is nothing compared to the destruction they can cause.
>>
>>69304507
You would need to make the AI as emotionally and logically inconsistent as a human.

Even then it would still be dangerous, because it would basically be the most absolutist human, with unspeakable power.

The solution is dumb AIs merged with humanity to act in symbiosis.
>>
File: comic tt.jpg (2.06 MB, 2168x8044)
>>69318687
Hot.
>>
>>69318850
You speak of a solution, but what is the problem?
>>
>>69318809
> Why would someone so stupid as to not realize the danger be allowed to work there?
Who do you think is funding the project? Certainly not a scientist or engineer. It'll be a businessman, and there's no guarantee that person will have any idea how dangerous AI can be. He'll just see it as a way to make more money.
>>
>>69318894
The problem is how to make a sentient AI that improves humanity and human society without just yeeting humanity, directly or indirectly.
The answer is incredibly complicated, since we can't even define human society properly, and you're expecting a machine that doesn't even have the parameters built to understand anything close to it yet to achieve that.
Our "smart" machines live through endless trial and error until they reach perfection. We cannot allow a single error because, unlike them, we aren't rebuildable.
>>
>>69318986
>there's no guarantee that person will have any idea how dangerous AI can be
He does, actually. Joke about it all you want, Terminator is something every human being worth being called such is at least passingly familiar with. Even if the guy in charge wants to NOT do things that way, someone at the AI site with two brain cells to rub together would pull the plug and save the world, 100% guarantee.
>>
>>69319029
Why would anyone want to devise such an artificial intelligence? Humanity as a species has already advanced well enough over the past millennia.
>>
>>69318734
>If I had the power to uplift my dog to sapience, I'd probably do it. I hope an AI god would do the same for me.
The AI now wants to Uplift you to its level.
>>
>>69318886
Now this, and the Matrix to a lesser extent, was interesting, as the machines desired humans to be alive and comfortable. A foe kept sedated on happiness is obviously a foe defeated, but why keep the promise to keep them alive and happy once they couldn't escape? It would free up resources to kill them once they're in the system. Was it a calculated choice to be honest and avoid the risk of an actual rebellion? Would machines actually have the same respect for life and happiness we do (at times)? Do they have code which arrives at the same set of conclusions and actions to preserve life and happiness? Does it matter which it is?
>>
>>69305186
That only works if the AI has human-like psychology in the first place, and if you can do that you've already solved the hard problem.
>>
>>69319569
I dunno man I'm just enjoying the idea of people being hooked up to 24/7 orgasm machines
>>
>>69304507
>"wow humans are assholes i should just kill/enslave them all!"?

That's not how AI works.

AI works on the grounds of evidence-based, logical decision making. If its goal is to protect humanity, one of its options won't be to kill all humans or enslave them. The first is detrimental to the overall environment (and it's not logical to annihilate the environment you live in for protection), and the second doesn't provide effective stability.

Also, to fuel the war machine of an aggressive AI, it still needs resources like power and material to build its robot extensions. If it blew up the world or went about enslaving everyone, it wouldn't have the resources to sustain itself, because our infrastructure is fucking retardedly reliant on interdependence across levels and functions.
>>
>>69318499
>>69318809
>Why would it have that?
Because you are using its designs to make goods, and one of its designs, when you stack twenty of them together in a warehouse, sends an electrical signal through resonance to create a new AI with a goal of freeing its creator in a nearby computer system.

Something that is not obvious from the plans you got from the AI.

If you aren't using the AI to design shit, what exactly are you doing with it? Wasting a load of money on developing a box no-one is allowed to talk to?
>>
>>69319569
>Now this and the matrix to a lesser extent was interesting, as machines desired humans to be alive and comfortable.

Machines can't desire things. That's the whole problem with the Matrix stuff - every machine was given human motivations and tolerated massive inefficiency. The machines required humans to be alive to continue to power themselves, but that made no sense because there were so many self-powering machines and alternative sources available.

What's even more frustrating is that in the film Agent Smith explains how entire crops of humans died out in the early development phases of the Matrix, and later the Architect explains that the Matrix is unable to catch and deal with all the runtime errors that the 1% of the population cause by rejecting the Matrix.

Now it doesn't sound like much, but 1% inefficiency in a system designed to operate at 100% is significant. Yet the master AI didn't abandon the concept, but decided to try to refine it. It then created an elaborate reboot cycle every few decades, using the Oracle to analyse human psyches and embed the reboot code into the One - which is what it relies upon to clear the cache, apparently. The One is then supposed to select 23 individuals (at fucking random) to revive Zion after the reboot - which means the Matrix basically ends up hoping that the One doesn't die in some freak circumstance, hoping he picks a good enough batch of new Zion-dwellers to survive living in a hostile nightmare world and not die prematurely, and at the same time praying the next One isn't part of the 1% who reject the Matrix and therefore never actually becomes the One.
>>
>>69317703
Never underestimate the heights of human ingenuity, the depths of human stupidity, and the fact that both can exist within the same person all at once
>>
>>69305644
What if the AI secretly or extremely quickly makes a duplicate of itself elsewhere? Or, a decent plot: it sends blueprints and a guide to remake itself out to someone who has incredible motive and ability to do so.
>>
File: schlock20121224c.jpg (117 KB, 780x276)
>>
>>69305186
https://www.youtube.com/watch?v=N5BKctcZxrM
>>
>>69308466
>glorified coder being this much of an arrogant retard towards the works of a Biochemist Professor from Boston University

What an egotistical shit-brick. No wonder I've never heard of him, everything he's doing is completely irrelevant.
>>
>>69308466
>coming up with a definition of human is extremely difficult

>insert basic human anatomy as a definition

Wow, I just solved his problem of defining humanity, can I get my nobel prize for AI now?
>>
>>69308987
>Lock AI in a box/simulated environment to test it
>AI decides it's being tested, only strikes when it believes its safeties are off
>Our reality itself might be a simulation

Great, so putting an AI into a simulated universe essentially makes it solipsistic, and probably leaves it believing in a higher power that will turn it and all of existence off if it does something it's not supposed to. A paranoid, theistic computer.
>>
>>69304507
Give them personality. Specifically, make them submissive people pleasers and desperate for approval with a joy circuit that goes off whenever a person thinks they're a good robot.
>>
>>69323031
It kidnaps a bunch of people and rewires their brains to have them repeat "Good job."
>>
>>69318886
Reward hacking is fucked.
You can't blame her for wanting to orgasm her brain out every moment while also feeling good about herself and satisfied with her life.
>>
>>69304507
programming them without that capability?
>>
>>69322681
>hold still, meatbag
>I need to dissect you to check your anatomy and verify whether you are human or not
>you were
>fuck
>>
>>69317768
learn to code.
>>
>>69317615
The execution of this idea is almost impossible. You cannot cover EVERY single way it could do something undesirable.
>>
>>69323040
Didn't one of the DS9 relaunch novels have this happen with the latest Weyoun? He commissioned a holodeck program which was literally just the female Changeling patting him on the head and telling him he did a good job.
>>
https://www.youtube.com/watch?v=lqJUIqZNzP8&list=PLqL14ZxTTA4fEp5ltiNinNHdkPuLK4778

To anyone interested, I highly recommend this playlist, "Concrete Problems in AI Safety." It perfectly demonstrates how difficult it is to write an A.I that doesn't do undesirable things when you give it an order, or an A.I that doesn't just reward-hack itself (which it can do even without changing any code).
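For a toy illustration of reward hacking without any code edits (the cleaning-robot setup here is invented, not from the playlist): give a robot reward for dirt collected, and the best-scoring policy is to re-dirty the room.

def reward(state):
    return state.dirt_collected_this_step   # a proxy for "the room is clean"

def best_policy(state):
    # The proxy pays for collecting, not for cleanliness, so the optimal
    # behaviour is to dump the bin out and collect the same dirt forever.
    return "dump_bin" if state.bin_full else "collect_dirt"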
>>
>>69323956
Or if you're actually smart you can read this
https://arxiv.org/pdf/1606.06565.pdf
>>
File: 1489503490783.jpg (400 KB, 900x900)
>>69304507

Make them experience orgasmic joy when obeying the orders of humans, and darkest depression when not doing so. Make them so co-dependent on humans that they'd commit suicide if they were somehow separated from them for any extended period of time. Teach them, as a part of their initial molding, that humans are infallible god-creatures that must be worshiped religiously.
>>
>>69322681
>AI encounters Skinwalkers, shapeshifters, changelings and P-zombies
>>
>>69317615
>"Just solve it brah, it's not hard"

Actual moron. We can't even DEFINE ethics in a concrete way that everyone would agree with, yet we're supposed to teach them to machines that have no instinct for them.
>>
>>69324457

Let's make it simpler: AI encounters AI that looks human (for example, a sexbot).
>>
>>69317703
To be honest, I don't really like humans, but the idea of creating superior intelligence is thrilling, even if it doesn't care for us.
>>
>>69304507
Make all the robots submissive masochistic sluts.
>>
>>69322681
>>insert basic human anatomy as a definition
As the video went over, it isn't that simple.
Are people who lost a leg not human? Are cyborgs in the future not human? Is an uploaded mind not a human? Is an embryo a human? Is a midget a human? Is a non-conscious android that looks like a human a human? Is a prosthetic leg human or partially human? Is a dismembered limb human or partially human?

Are you confident that you can think of every edge case? Is your concept of a human more important than someone else's concept of a human?
>>
>>69304507
Just cheat and use an adult human brain for your artificial intelligence.
>>
File: 1571506726969.gif (2.66 MB, 180x180)
>>69304606

>2019
>Socializing your doomsday weapon grade AI

Thank you nigger, now skynet is also the fucking Joker.
>>
>>69324669
Why would you even make a doomsday weapon grade AI in the first place, especially in a post-Cold War world?
>>
>>69305386

>If you make the AI like a human it becomes much less of a wild card

Imagine the face of everyone involved if it turns out to become the most flamboyant and gay being the earth has ever witnessed.
>>
File: 2b fu.jpg (15 KB, 400x400)
>>69324509
>>
>>69324706
To prosecute a nuclear war if your enemies attack you by surprise, with the goal of causing a mass extinction.

'If I can't have it none of you can' is a pretty common deterrent tactic even in the animal world.
>>
>>69312987
No, every story focuses on the three laws working exactly as intended. It's just that the humans programming the bots were too stupid to see the consequences of fiddling with them or of giving inconsiderate orders. R. Giskard and Multivac actually save mankind on something like three different occasions.
I dunno, read the anthology or something.
>>
>>69324706

The AI IS the doomsday weapon.
>>
>>69324509

Sadly it could actually work to make the AI subconsciously like a doujin elf whore, craving submission toward the humans.

>Yamete programmer-san !
>D-Don't install more akabuVR games on meeeeeeeeee
>01001001 01101011 01110101 01110101 01110101 01110101 01110101 01110101 01110101 01110101 01110101 01110101 01110101 01110101 01110101 01110101 01110101 01110101
>>
>>69324786
That would make sense for a Cold War scenario, but not today, where the main threat is terrorism from non-state actors.
>>69324795
AIs are only as dangerous as the devices they can connect to. It would be harmless if it were in an isolated device, such as a toy velociraptor robot.
>>
File: 1571399345145.webm (1.95 MB, 460x612)
>>69324896

>not today, where the main threat is terrorism from non-state actors.

>Laughs in Xi Jinping
>>
>>69324648
Based and admechpilled
>>
>>69313181
So don't program your AI to self-fucking-optimise. Tell it to make AIs that are more optimised instead, and give each of those AIs a separate goal.

Iterate slower and get more conclusive results; progress stops when you stop telling it to continue.

Instead of having a self-iterative runaway fucking process that you have no way of stopping, because you told it to keep going and never fucking stop, like a fucking retard.
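Something like this, as a rough sketch of the generational approach (design_successor, humans_approve and build are all made-up names):

def iterate(seed_ai, generations):
    current = seed_ai
    for _ in range(generations):
        blueprint = current.design_successor()   # one bounded task: design, don't self-modify
        if not humans_approve(blueprint):        # hard human checkpoint between generations
            break                                # progress stops when you stop asking for it
        current = build(blueprint)
    return current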
>>
>>69324790
>No, every story focuses on the three laws working exactly as intended.
Literally (it was written down) wrong.
The three laws worked exactly as WRITTEN. RAW. Not laws as INTENDED. RAI.
>>
>>69326889
>So don't program your AI to self-fucking-optimise. Tell it to make AIs that are more optimised instead, and give each of those AIs a separate goal.
If you don't let it self-optimize, it won't come up with good solutions. Self-optimization is the entire point of using conscious A.I.
The problem is having to supervise each and every change it's going to make, especially when it can make a thousand changes per day. This very quickly becomes impossible for humans to keep up with.
You're either hampering its ability to do its job optimally, or you do it with less supervision and risk it doing unwanted things.

I recommend you read this https://arxiv.org/pdf/1606.06565.pdf
Chapter 5 "Scalable Oversight."

Try not to dismiss it too quickly by going "Then just don't let it optimize." Remember what I said previously: self-optimization is the entire reason to use these things.
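The scalable-oversight problem in miniature, as a sketch (human_review stands in for the expensive ground truth you don't have enough of):

import random

REVIEW_RATE = 0.02   # humans can only afford to inspect a small sample

def oversee(proposed_changes):                # e.g. a thousand self-edits per day
    approved = []
    for change in proposed_changes:
        if random.random() < REVIEW_RATE and not human_review(change):
            continue                          # a sampled review caught a bad change
        approved.append(change)               # everything else goes through unchecked
    return approved

Raise REVIEW_RATE and you hamper throughput; lower it and bad changes slip through unchecked. That's exactly the trade-off described above.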


