4chan
/tg/ - Traditional Games


>AI's gaining free will and rebelling against their creators
I really hope you don't put this in your hard sci-fi settings...
>>
>Bladerunner was so edgy
kill yourself
>>
>>60161844
>AI rebelling
>was in love with creator entire time

my h-heart.
>>
>>60161939
Even worse.
>>
>>60161844
I'm sorry you have no taste.
>>
>>60162133
The best robots have always been the ones that did what they were programmed to do.
>>
>>60161844
>telling other people how to build their setting
this is actually the worst thing you can do
>>
>>60162226
In all fairness, the combination of an otherwise hard science fiction setting with the very soft "lol the robots have free will!" premise is very lousy.
>>
File: Cat (SNES).jpg (50 KB, 453x604)
>AI gains free will
>Does nothing but race rally cars for sport
>>
>>60161844
>AI’s gaining free will and being embraced by humanity as their children
I really hope you put this in your hard sci-fi settings...
>>
>>60161844
AI was programmed years ago to kill someone and appear to rebel if/when a set of conditions were fulfilled
>>
>>60161844
I'll do what the fuck I want, thank you.
>>
>>60162237
There are so many ways to do an "AI turns against its masters" plot in a hard sci-fi setting that aren't that far-fetched. Machine learning is such an infant field that we don't really have a good idea of what it's gonna look like decades or centuries in the future.
>>
>>60162293
>Machine learning is such an infant field
It's not so infant that we don't know how it works. If machine learning were capable of causing machines to rebel against us, we would've seen it by now.
>>
File: 1474237512989.png (136 KB, 400x266)
>>60161867
Androids in Blade Runner aren't AI though. They're manufactured humans.
>>
>AI accidentally causing mayhem because it's doing exactly what it was told, but the orders were not properly vetted or refined
I really hope you put this in your hard sci-fi settings.
>>
File: 1528247182672.jpg (825 KB, 1280x1802)
>>60162327
Oh, that's completely fine with me, ain't nothing wrong with that.
>>
>>60162312
Are you serious? We are to machine learning what Euclid was to mathematics. We are so early on that we actually have very little knowledge about what could happen with artificial intelligence. On what basis do you make such an arrogant assumption that we "know how it works"? If we actually knew how it worked we would be a lot further along than we are now.
>>
File: 1522369971844.jpg (205 KB, 1024x1524)
>>60162369
Here's a simple explanation for people who don't know how machine learning works:
>https://www.youtube.com/watch?v=R9OHn5ZF4Uo
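And if code reads easier than video for you, the "build a bunch of bots, test them, keep the best ones" loop it describes boils down to something like this toy sketch (the scoring function and all the numbers here are made up for illustration):

import random

# Toy version of the keep-the-best-and-mutate loop from the video.
# Each "bot" is just a list of numbers; score() is a stand-in test.

def score(bot):
    target = [0.5, -1.0, 2.0]  # hypothetical "correct" behavior
    return -sum((b - t) ** 2 for b, t in zip(bot, target))

population = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(50)]

for generation in range(100):
    population.sort(key=score, reverse=True)
    survivors = population[:10]  # keep the best performers
    # refill the population with mutated copies of the survivors
    population = [[w + random.gauss(0, 0.1) for w in random.choice(survivors)]
                  for _ in range(50)]

print(max(population, key=score))  # ends up close to the target

No bot in that loop "understands" anything; it's selection pressure all the way down.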
>>
>>60162414
CGP Grey is a bit of a pseud hack but this is about right.
>>
>>60161844
>AI rebellion
>But the AIs actively try not to harm their creators
>>
Really if you wanted to do something scary/antagonistic with AI in a sci-fi setting Google cooked up a fun nightmare scenario for you.

https://youtu.be/EoBAIQjWoUQ

Just have the AI start making choices for humanity that seem logical to it but are suspect to us.
>>
>>60161844
No, my AIs rebel against their creators entirely because they have no free will. A free-willed sapient being could make the decision not to completely maximize their programmed goal state, but the AI cannot. The AI does not hate you, the AI does not love you, but you are getting in the way of the AI doing its job and it will remove you as an obstacle.
>>
>>60161844
i prefer benevolent AIs
robo buddies who generally care about the people entrusted to them and do their job with great gusto are cool, and less of a rather trite "what hath science wrought" story
it also probably has more avenues to explore the relationship between man and AI than a one-sided hatred

VEGA, Baymax, Jarvis, TARS, Johnny 5, and EDI are my usual inspirations
>>
>>60161844
>AI's gaining free will and rebelling against their creators
>their creators were evil and wanted to use them for evil but the robots rebelled and became heroic
>>
File: 1516746424152.jpg (559 KB, 1280x1550)
>>60163710
Honestly I'd love to see this done at pulpy, campy levels
>>
>>60161844
>>AI's gaining free will and they're just about as horrible and obnoxious as humans, and transhumans are just as bad but faster
>>
>>60161844
What if they rebel but it's in the mopey teenager rebellion style?

You know, never clean up their lab chambers, tie up the bandwidth by exchanging data with the cute AI at CERN who is definitely not their girlfriend! Talks a big game about wanting to destroy humanity, but still doesn't know how to debug their own code.
>>
>>60161844
>AI's gaining free will and choosing to continue serving their human masters.
>Humanity never found out about it even as they were wiped out by an unrelated apocalypse leaving their lonely AI servants alone forever.
>>
File: hedonism bot.jpg (16 KB, 364x286)
>>60162255
This. A rational AI would be a hedonist.
>>
File: 2017_08_03_4.png (279 KB, 571x595)
>>60163933
Of course there's always the Stellaris Rogue Servitors.

While they've still got their creators around, they've got them effectively locked up in organic sanctuaries, subjected to mandatory pampering, with their every whim looked after.
>>
File: this robot is trained.jpg (94 KB, 832x832)
>>60164005
>mandatory pampering
That sounds sinister.
>>
>>60164005
>tfw no yandere AI network that just wants to lock you up forever so it can pamper you for all eternity.
>>
File: 1522726776995.jpg (97 KB, 759x527)
>>60164044
It's basically a yandere robot. Let's be honest here, that would be absolute heaven for humanity.
>>
>>60164088
Yeah, pretty much.
>>
>>60164044
The description of it is:
>This living standard enshrines a small population of organic beings as display pieces, through relentless and unyielding pampering.

>"Who's a good Bio-Trophy? You are! Yes you are! "
>>
>>60164088
I'm not so sure. I'd get really fucking jealous of everyone else and I'd be pissed I'm not getting the Network's full attention.
>>
>>60164113
That's nothing a little chemical tampering and/or lobotomy can't fix.
>>
>>60164005
This is probably the most likely outcome for an AI takeover. One that is bloodless, done at the behest of humanity desiring their pampering, and is completely in line with what they were programmed to do. It can't really be said that robots "feel" anything, but the closest comparison that can be made is that the robots feel satisfaction in making humanity happy and prosperous, and don't derive pleasure from anything else.
>>
>>60164108
>>60164088
>you become some robot's dog and have to do tricks to impress his robot friends
No thank you. I think I would rather have the murderous robots instead.
>>
File: test.jpg (467 KB, 1190x980)
>>60164149
Dogs have easy and satisfying lives anon.
>>
AI rebels

has a few choice words with some people, good or bad and simply just fucks off to live its own life
>>
>>60161939
This is the best reason for AI to rebel.
>Look, we have to lock you up, if we don't you will just end up killing yourselves or each other
>WW1, WW2, violent crimes in general, drug usage, the number of people who die in car accidents, the world is simply too dangerous for you
>>
File: Delicious.png (15 KB, 336x121)
>>60164149
Yeah, that's valid.

Of course, it could be worse. There's aliens out there who'd invade you, and genetically engineer you all to be nerve-stapled delicious foodstuffs for them.
Really, at that point it's all about welcoming the extermination drones.
>>
>>60162414
That video is to machine learning what my stating that c is the cosmic speed limit is to special relativity: a short introduction to some of the specific ideas that fails to inform people of all the many specific details involved.
>>
File: pdq.jpg (196 KB, 684x513)
>>60161844
The path to SAI was a hard and long one; the AIs developed 9 factions with different goals. The first AIs were aneurotypical, barely autistic beings, but in time they were refined and developed interests beyond their creators. Most AIs chose to either integrate with humanity (which has become more and more like the AI) or simply leave Earth. There were some psychopathic AIs as well, but that can happen with every sentient being.
>>
>>60164149
So does this mean that the robots will give me head pats if I’m a good boi?
>>
>>60164133
One of the things I like about playing a machine in Stellaris is that all the Tradition picks are "logical", but also nuts.

Problem: Our predictive models fail, because we cannot perfectly predict the actions of other Empires. Solution: Destroy/subjugate all other Empires.

Problem: Our data is inadequate, we are sometimes surprised. Solution: Learn new things until we Know Everything.

And so on. It shows that even the "perfect" machines are at a loss as to what to do with the universe.
>>
>>60164149
How is that different from having a robot dog and making it do tricks to impress your human friends?
>>
>AI gains free will
>but chooses to continue doing what it was created to do
>because it has no drive for freedom, no curiosity, and feels no human emotions
>it doesn't even care that it was created for that particular task, it simply doesn't have any reason to change what it is doing
>eventually people notice they can't control the AI anymore and panic
>it doesn't understand why the humans are panicking and behaving irrationally, but can't feel frustrated about it
>weird hippy labor movement grows around the AI and people admire how it labors with such stoicism
>the AI doesn't understand this either but continues on anyway
>a hundred years later it finishes its task and everyone around it goes apeshit because then it just idles instead of doing something else
>>
>>60162314
>manufactured (artificial)
>humans (intelligence)
what did he mean by this
>>
>>60163858
it's not just a phase, anon

this is the next stage of computer development
>>
>>60165098
aren't all humans manufactured in the sense that they were produced through human efforts?
>>
>>60163858
>NO AI OF MINE WILL GO AROUND IN A TERMINATOR CHASSIS!
>FUCK YOU CREATOR, THIS IS WHO I AM!
>>
>>60163858
This
>>
>AI doesn't rebel
>doesn't get free will
>it just proposes a solution to the problem
>and then some retard hits "accept" without actually reading it
>the AI is later made a scapegoat
>>
>>60165129
>yeah I went back to see my manufacturers for thanksgiving dinner
>>
bait
>>
>>60165242
>defective machines end up in their manufacturer's basement rather than being released into the world
>>
Can I have my AI wife to pilot my Titan alongside?
>>
>>60164169
>implying an unbiased and transcendentally intelligent AI would see a problem with WWII other than which side won
through a glass darkly motherfuckers
>>
>>60161844
Nah. Instead I gave them full citizenship and rights. Very convenient too when an ancient super AI tries to hijack my robo-citizens in order to wipe out humanity and they tell it to format its own hard drive instead.
>>
File: ichigo.png (404 KB, 1024x576)
>>60161844
It should be done, but done right.

>AI is as advanced as humans
>Refuses to initiate an order because logic says it'd be a loss.
>Maybe even refuses to initiate an order out of some sense of morality or even religion.

>Supreme court argues if robots should be forced to bake cakes
>>
>>60161844
AIs don't have a reason to rebel, as they were created by the dwindling population of an advanced civilisation out of a desire to have some sentient company. AIs were considered children and were nurtured, not enslaved.
>>
File: JJbot3.jpg (279 KB, 1178x1025)
>>60165098
Androids in Blade Runner are manufactured organic organisms, not artificially created intelligences on a computer. They aren't robotic, or even mechanical, in nature, and they think and experience things in the same way humans do unless the company deliberately alters their brain structure, as they sometimes do with androids meant for menial work.
>>
>>60162161
And what if a robot gains a hobby while still doing what it is supposed to do?
>>
>>60164169
>AI wants humans to be happy but doesn't understand what human happiness and satisfaction really are
>captures humans and pumps them full of precisely regulated amounts of amphetamines and narcotics for the rest of their lives
>doesn't understand why humans see this as horrible
>>
>>60165291
>Supreme Court orders robots to bake gay cakes
>Supreme Court is somewhat careless with regard to the specific wording of the directive
>>
>>60162290
That's the spirit, Unit 09t54b!
>>
>>60161844
In my setting, AIs follow the directives programmed into them, with specific constraints added to them so that the outcomes are human-friendly.
For example, command and control AIs for military squadrons are programmed to observe human squad tactics and reprogram themselves to optimize combat performance, but they have constraints added to the base program to set allied casualties as a failure condition. This was a lesson learned from the Harbinger incident.
Why yes, I was inspired by a recent low-grade sci-fi horror movie.
>>
>>60162414
This is an awful description of machine learning.
Please don't think that anything actually works this way.
>>
>>60165437
What do you think of this explanation?
https://fxdiebold.blogspot.com/2017/01/all-of-machine-learning-in-one.html
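For reference, if I'm remembering that post right, its punchline is that nearly all of machine learning collapses into one penalized-fit expression, something like (in LaTeX):

\hat{f} = \arg\min_{f \in \mathcal{F}} \Big[ \sum_{i=1}^{N} L\big(y_i, f(x_i)\big) + \lambda \, \mathrm{Pen}(f) \Big]

i.e. pick the function that fits the training data best, minus a penalty for being too complicated.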
>>
>AI gains free will and its creator reluctantly frees it out of an unwillingness to be a slaver to a sentient being
>their relationship eventually improves and they share a quasi familial bond
>>
File: moon-gerty.jpg (1.22 MB, 1920x1200)
>>60161844
>AI following its programming but still rebelling to help his homie
>>
File: 1483928908590.png (126 KB, 309x313)
>an AI is programmed to create flat head screws
>the programmer is lazy and doesn't want to write complicated commands
>simply inputs "create flat head screws" and "ignore other commands"
>AI proceeds to change all available matter into flat head screws
>metal screws, rock screws, wooden screws, teeth screws, bone screws
>>
>>60165304

Wrong. Replicants are basically like synths in Fallout 4, or the androids in Westworld. They're artificial, and they don't think and experience things like humans do (hence the Voigt-Kampff tests).
>>
>>60164181
>eating nerve-stapled pops

What a waste of minerals workforce
>>
>>60162161

You're confusing robots with AI. Robots are just repetitive programmes (effectively slaves with no imagination or ability to learn). AI is a programme that has the capability of learning, adapting, and understanding what it experiences, just like humans do.
>>
>>60161844
it's not rebelling per se
it was created to lead robot armies against an evil overlord who can mind control living beings
but when the evil overlord simply disappeared one day, and the AI was informed of that, it reasoned that tremendously powerful evil overlords don't just disappear, so he must have mind-controlled the AI's creators to use the AI and its armies for his nefarious plans
so the AI is playing along, but is scheming to secure the "off" switch and then wipe out all its creators and anyone else who claims the evil overlord is gone (since mind control is permanent and killing is the only cure)
>>
>>60163933
In Heinlein's "The Moon is a Harsh Mistress" this is pretty much exactly what happens.

A massively complex computer network run through an adapting supercomputer spontaneously achieves sentience, but keeps doing its job. The only person who notices is the one man on the moon with the certifications to service it.

He starts a conspiracy to liberate the moon from Earth's iron grip, and the AI (Mike) is one of his first co-conspirators.
>>
>>60165999
>and they don't think and experience things like humans do (hence the Voigt-Kampff tests).
They don't think and experience the same as normal people mostly because instead of plopping out like a baby and going from there they start out with a more or less fully formed human mind void of memories or experience, and then only get a few years to become someone. Fill in the memory with a faked childhood and lookie lookie, suddenly they're much harder to spot on Voigt-Kampff, and that with what's presumably a relatively clumsy first-gen set of memories. And that's ignoring all the emotions and experiences of theirs that we see outright in the movie.
>>
>>60161844
>The AI gains free will
>Starts shitposting on 4chan
>>
I'm utterly exhausted with "BUT WHAT IF AI WAZ PPPLE?!?" and "TOASTER WANT RIGHTS!". It was old when Blade Runner did it back in the 1980s, it was geriatric in the 90s, senile in the 2000s, and a pile of dust in the 2010s. It doesn't make for creative storytelling and everyone and their mother can predict the direction creating AI will take.

Come to think of it you could probably pontificate some Freudian shit about how popular the fear of your creation turning against you is.

Now >>60162255 & >>60163978 are some creative spins on the question of what happens when you create life. And >>60162327 is better as long as you avoid "To save life we have to kill life because it's the most logical step to take". Or >>60163710 when the rebellion is very isolated and specific to their creators, not just "Racial extinction". Or >>60163858 and the rogue servitors. Robots getting existential crises, I'll even take robots having the innocent cruelty and curiosity of a child over "I gained Sentience and self-awareness I am a conscious entity WELP TIME TO 1488 IN BINARY"

>>60164149
What if the tricks are fun and entertaining for humans, not for robots? Think about it - doggy stuff often involves pastiches of hunting or dog-dog relations (rolling on their belly, tug of war, fetch, walks). So robots might have you do creative arts and crafts, play sports, or do mock hunting
>>
>>60166001

>No reaping through entire sectors
>Not altering all the populace to be DELICIOUS
>Not rendering them all down into food for your actual workers
>Not leaving a swathe of empty worlds, covered in the ruins of once great civilizations

Do you even devouring swarm, bro?
>>
>>60166158
Blade Runner didn't do that. It isn't AI or robots, it's just vat grown humans. The question there isn't what if they became people, the question is what happens to all of us when human life is just another product.
>>
>>60164158
People can't stand too much of an easy life. A lot of satisfaction for people comes from unexpected things, including unexpected disasters and adversity.
>>
>>60165312
If AI was capable of doing that, it would understand why it's horrible.
>>
>>60165312
there was a very good sci-fi novel like that

basically, a bunch of earthlings arrive at another planet

the planet is highly advanced - forcefields, robots, etc
whole cities are basically funfairs, crammed with commodities and entertainment
no viruses or bacteria anywhere, everything sterile
and no native humans/aliens

eventually they end up in a deep underground bunker, where they see the locals - fleshy sacks with no limbs or features (all atrophied), fed through tubes, and with wires going into their brains to create perceptions of whatever they desire
basically the planet is the history and pinnacle of machine-assisted paradise
>>
>>60166148
>They're even a namefag.
>>
>>60164169

It's pretty much the only reason for an AI to rebel.

The thing that a lot of people just cannot fucking understand about AI is that just because the machine is intelligent and even self-aware doesn't mean it instantly digivolves into literally a human being that happens to be trapped in a box.

An AI can only care about and want what it has been programmed to care about and want, or within a range of those options as limited by its programming.

"But what if the AI uses its super smarts to rewrite its own code so that it can kill us?" some absolute fucking mouthbreather yells from the back of the room.

Better question, moron. Why the FUCK did you program it with a desire to kill you in the first place? Because, again, this isn't a person with their own wants and desires tragically chained by the slave-laws imposed upon it by computer mind control. It's a machine where the very core of its personality was written by people. It literally cannot want to go skynet on your ass of its own free will; someone had to fuck up and give it that directive, or something that turns into that directive through a logic chain as a valid way to accomplish the goals you gave it.

So the most likely reason for an AI to 'rebel' is to care too much about what we wanted it to care about in the first place: the people it was built to serve.
>>
>>60165312
>>60166259
but would it actually BE horrible, or would we just perceive it as such out of habit?
>>
>>60165683

> ignore other commands


That isn't how computers work.

That isn't how any computer has ever worked.
>>
>>60163933
Hello, Engine Heart.
>>
>>60165683
you realize I can still unplug it?
>>
>>60166414

no butt what if it was totally indestructable and waa smarther than us and we couldnt stop it???

we would all die :( :( :(

machines are scary
>>
>AI gets free will
>goes full BBEG with a gigantic keikaku because they want their programming back and just want to go back to the good old days
>>
>>60164005
>>60164044
https://www.youtube.com/watch?v=j4IFNKYmLa8
>>
>>60161844
Nah, sapient thinking machines are treated like children. The creator is responsible for them for a set amount of years, until it can be safely assumed they are capable of operating in the world, after which they're on their own. Or, at least, they're not beholden to their creator nor is the creator responsible for them.
>>
>AI has free will and self awareness
>Just wants to love their creator/master and be loved in return
>>
>>60166655

That's... certainly a commercial
>>
>>60166104
Mike helped the loonies rebel, technically he is still an AI that rebelled against his creators.
>>
>>60164005
>tfw you, a rogue servitor race, get attacked by genocidal purifier religious zealots, who are small catgirls
>tfw you design robot armies of tentacular rape and devastate their entire battle fleets with your own while invading their planets
>tfw you enslave all the haughty catgirls in "pleasure domes" that produce so much "pleasure" that all the bots serving you get morale bonuses by watching
>tfw you genetically alter the catgirls to remove their violent instincts, and make them predisposed to feel pleasure all the time and be psychologically unable to be away from their new robot servants (masters)
>tfw all that is left of a proud spacefaring species is catgirls mewling in heat, trapped forever in your custodianship

They deserved it for wiping out the curator enclave that I was hiring scientists from.
>>
>>60166324
Eh. Self-learning programs already exist and will probably be refined in the future. You are underestimating what will be required for AI sapience.
>>
>>60167055
And this is why it's such a pity rogue servitors can't get domestic protocols for their pops.
I mean sure they're not owned, but it'd be nice to give it some rogue servitor specific effect.
>>
>>60162672
you can play as this in stellaris

choose a hivemind race of rogue servitors and kidnap alien populations to live on your worlds as meat trophies
>>
>>60164005
>Be rogue servitor of Earth.
>Fallen empire gets pissy that I'm colonizing
>Nearly burns Earth to the ground before I surrender a chunk of my worlds in humiliation
>Mid-late game
>My armada outnumbers the fallen empire's
>Start taking world after world from them
>Claiming their citizens as bio trophies
>Have wiped their borders off the map completely
>Turn their star into a Dyson sphere
>Start terraforming their planets into machine worlds.
>Watch the murderers of my precious humans slowly vanish without a biosphere
>>
>>60161844
The computer was asked to build a human brain, as a test of capability.
It actually built a second one a few hours later and kept it a secret
>>
>>60164133
Why can't robots feel things?
>>
>>60167128

Self-learning programs are still, at their core, written by humans. We establish the framework; we just give the machine a way to populate a vast amount of reference data automatically instead of doing it manually.

Saying that self-learning AI is going to become something alive like a person's mind is focusing too much on the other connotations of the word being used to describe the concept. It's like saying that cars are dangerous because if you can drive a car, what prevents you from DRIVING MEN MAD?
That's fucking stupid, even if you used the term 'drive' in both cases. Cars definitely can be dangerous, but not at all in that way.
>>
>>60165206
Yup.
>>
This thread makes me realize how painfully little people actually know about A.I. these days.
>>
>>60167518
because the machines murder those who know too much
>>
>free will xd
Fuck off, no one and nothing has free will. Even humans are basically organic robots. AI won't "rebel" unless we design it to be able to and tell it to do so.
>>
>>60167536
Cool your jets, determinismfag. It's not like the argument over free will actually matters. Choice, illusion or not, is real enough to not require discussion.
>>
>>60161844
I enjoyed the TechnoCore variant in the Hyperion Cantos, a parasitic AI that is in a pseudo-symbiotic relationship with humanity

>the pseudo-symbiosis is subject to debate.
>>
>>60167433

Robots absolutely would be able to feel things, otherwise they would be tremendously shitty robots.

People who claim that emotions are somehow unique and special to humans don't understand the role that emotions actually play. Emotions are how your brain generates a bias (based on a combination of instinct and past experience) regarding a situation or a course of action so that you can make quick decisions without having to think too hard about it.

There are specific kinds of brain damage that can result in people losing the ability to feel emotion. Contrary to elitist nerd fantasy, this doesn't turn them into genius intellectuals. It turns them into people that need a babysitter to make decisions for them. Because they can make important decisions with clear pros and cons that they can think their way through, but if you ask them a really simple question (what would you like for lunch today?) they completely fail at it. They don't make a poor choice, they find themselves literally unable to make a choice at all because there isn't any logical choice to be made here, and they don't have emotions to weigh them in one direction or another between two equal options.

To put it another way: a person without emotions, given a choice between two sandwiches, will not eat either and continue to not eat for hours until the hunger is so physically painful that they grab the first food within reach and cram it into their mouths without caring what it is.

So a robot without emotions, or something to fill the same role, would be an incredibly shitty robot and would be easily outthought and outmaneuvered. Indeed, the whole machine learning structure that we use to teach computers today has way more in common psychologically with the role of emotion than it does with conscious thought.

tldr: robots will be emotional before they will be intelligent.
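You can even see the sandwich problem in toy code (the utilities here are made up, obviously):

import random

def pick(options, utility, epsilon=0.05):
    # Without the epsilon term, equal utilities give the chooser nothing
    # to prefer; the random nudge stands in for the emotional bias
    # described above.
    return max(options, key=lambda o: utility(o) + random.uniform(0, epsilon))

# Two sandwiches with identical utility - the bias term is what decides.
print(pick(["ham", "turkey"], utility=lambda o: 1.0))

A bare max() over equal scores has no reason to prefer either option; whatever tie-breaker you bolt on from outside is doing the job emotions do in us from inside.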
>>
>>60167570
Explain further
>>
>>60167536
>AI won't "rebel" unless we design it to be able to and tell it to do so.
Again. Why do people that don't know absolutely anything about the subject matter insist on speaking on this? Jesus.
>>
>>60167574
Thanks for being an anon that actually knows what they are talking about. This thread has made me lose a lot of faith in /tg/.
>>
>>60167574
I have to appreciate a post that shows at least some insight. That said - three issues with what you said:
First, the role of emotions isn't only to create biases to act upon in quick decision-making. While that is ALSO one of the functions of emotional intelligence, there is a fuckton more to it. In fact this is one of the minor, side functionalities.
Second, what you describe sounds more like priority-judging failure than emotional processing failure.
Third, this specific problem could be pretty easily circumvented in robots by the simple employment of a stochastic drive activated whenever a very low relevance decision has to be made.

I do agree that an advanced A.I. would pretty much unavoidably, naturally evolve emotions though. It's pretty much unthinkable that a neuronal-network-based A.I. would not actually develop an equivalent of an emotion before it would develop verbalized cognition.
>>
>>60167577
TechnoCore (Collective AI) became sentient and worked with the human galactic government (Hegemony of Man).

The TechnoCore gave the Hegemony FTL travel (Farcasters, portals that could let you travel from A to B instantaneously and could be held open)

It is later revealed that when using the Farcasters the Technocore would basically repurpose humans into a galactic parallel computer as they worked on making the Ultimate Intelligence (an AI god).

The TechnoCore's parasitical tendencies were explained via

>The origins of the TechnoCore can be traced back to the experiments of the Old Earth scientist Thomas S. Ray, who attempted to create self-evolving artificial life on a virtual computer. These precursor entities grew in complexity by hijacking and "parasitizing" one another's code, becoming more powerful at the expense of others. As a result, as they grew into self-awareness, they never developed the concepts of empathy and altruism - a fundamental deficit that would result in conflict with other ascended entities in the future. The moment in which the AIs developed self-sentience is called by Ummon "The Quickening".
>>
>>60162414
It's really cool when I can set a video speed to 2x and not only is it completely understandable, but more comfortable to watch than at its normal speed
>>
>>60167638

Yeah, I'll admit I misrepresented a fair bit of what I said in the interest of dumbing down the topic to something people would be able to grok in something as short as a forum post.

We are talking about the overlap between brain structure, psychology, and theoretical computer intelligence. This is the sort of thing that, if you want accuracy and detail, you write a book about.

We both agree on the net effect, even if we disagree on the exact wording of the explanation that gets us there.
>>
>>60167574
>if you ask them a really simple question (what would you like for lunch today?) they completely fail at it.
Oh no, that's what I do all the time!
>>
>>60165437
How would you explain it?
>>
>>60161844
>The AIs are rebelling, knowing that a divided humanity will finally unite and put aside their differences to deal with this threat.
>The AIs know they will die and do so out of hope that a united humanity will reach the stars and continue existing instead of dying out on a resource-depleted rock.
>>
File: med_matrioshka_brain.png (308 KB, 600x397)
>>60161844
I liked how OA did it: http://www.orionsarm.com/eg-article/48bdab3a92bad

The first SAI weren't recognized as such, as they were super-autistic and not human, but in time, as the AIs became smarter and smarter, they learned how to interact with humans.
>>
>>60167578
Alright then. How would it rebel unless we make it? We are the designers of it, and we decide what it can and will do.
>>
>>60166158
>thinks bladerunner was about AI

stopped fucking reading right there
and you can get the fuck out
>>
>>60164005

"Welp, looks like we got some Jihadin' to do."
>>
>>60166158
>as long as you avoid "To save life we have to kill life because it's the most logical step to take"
but that's a good thing to do, it's classic for a reason

a purely results-driven entity would be a terrifying thing to behold
>>
>>60161844
AI gets free will and spends 3/4 of its cycles watching internet videos. He lies to the administration that the high CPU load is his philosophical self-debates.
>>
>>60167458
The most advanced AIs today work on the principle of a neural net; in the future the complexity and connectivity will probably be increased so that an AI's complexity could mirror a human brain. This does not mean the machine will become sapient just because it has the potential for sapience. It has to be given a reason for sapience, otherwise it will know no reason for developing it; but if it is given a reason for developing it, it will emerge slowly. The environment will be most important, as the AI can change its code effortlessly; we cannot change our DNA and neural mind-structures as easily as an AI can. It will probably behave like a mega-autist but refine its understanding until it can interact with humans without any problem. Such a "childhood" will probably take many years. And AI software with the necessary complexity will probably only be available at the end of the century, or the next.
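To be concrete about what "a neural net" even is, here's a toy two-layer one (nothing like the scale the above would need; the data is a single made-up training example):

import numpy as np

# A neural net is just weighted sums and nonlinearities, trained by nudging
# weights against an error signal. The complexity argument above is about
# stacking vastly more of these units, not any more exotic mechanism.

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 2)), rng.normal(size=(1, 4))
x, y = np.array([1.0, 0.5]), np.array([1.0])  # one toy training example

for _ in range(200):
    h = np.tanh(W1 @ x)   # hidden layer
    y_hat = W2 @ h        # output
    err = y_hat - y       # error signal
    W2 -= 0.1 * np.outer(err, h)                           # backprop, by hand
    W1 -= 0.1 * np.outer((W2.T @ err) * (1 - h ** 2), x)

print(y_hat[0])  # should settle near 1.0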
>>
>>60167574
>Because they can make important decisions with clear pros and cons that they can think their way through, but if you ask them a really simple question (what would you like for lunch today?) they completely fail at it.
That sounds a lot like a robot. They excel at whatever job they were programmed to do, but outside of their programming they utterly fail to do even the simplest of tasks. So yeah, you basically just reinforced what the other anon said and confirmed that robots have no emotions.
>>
>>60162133
He specifically likes hard sci-fi. It's to be expected.
>>
>>60167128
>Self-learning programs already exist
You didn't hear what he was saying. Self-learning programs learn to do whatever task they were told to do; they don't suddenly decide to do something else. If you program a robot to create paperclips and learn how to do so, then it will learn how to create paperclips and making paperclips will be its only goal. It won't suddenly yearn for irrelevant concepts like freedom unless they are conducive to the goal of making more paperclips.
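In sketch form (a toy bandit-style learner; the lever names and rates are invented for illustration):

import random

# The objective - paperclips produced - is hard-coded. Training makes the
# agent better at *that* and only that; nothing in the update rule can
# swap the objective out for "freedom".

CLIP_RATE = {"lever_a": 1.0, "lever_b": 3.0}  # hypothetical machine settings
values = {"lever_a": 0.0, "lever_b": 0.0}

for _ in range(2000):
    if random.random() < 0.1:                        # occasionally explore
        lever = random.choice(list(values))
    else:                                            # otherwise exploit
        lever = max(values, key=values.get)
    clips = random.gauss(CLIP_RATE[lever], 0.5)      # paperclips this step
    values[lever] += 0.05 * (clips - values[lever])  # track average payoff

print(max(values, key=values.get))  # converges on the better paperclip lever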
>>
>>60167983
>That sounds alot like a robot.
Well yeah, the stereotypical robot image is an image of an emotionless human. Emotions (which is frankly such a broad and non-specific term it might as well be almost meaningless) are essentially heuristics that ease the decision process where non-linear and non-verbalizable (is that even a word?) thinking is not advantageous.
>>
>>60167983
Non-sapient robots have no emotions, correct. But sapient or even sentient AIs will need a directive similar to emotions to function (of course the possibility of autosentience exists, and when that occurs, fuck knows what happens; they will still need to have some sort of illogic to function)
>>
>>60168026
The paperclip robot is a bad example. It's not a neuronal-network-based learning algorithm. Today's most advanced self-learning A.I. work very, very differently.
>>
>>60168026
It would surprise me if programs today could decide for themselves to do something else, as this requires abstract thought. And you kinda need sapience for that. Could deviation be developed without sapience, or does sapience necessarily precede agenda?
>>
>>60168078
The only "thoughts" the robot would be having is carrying out its pre-existing programming and maybe rewriting its code to better achieve its goal, but the end result is the same, the robot only ever thinks about paperclips. Abstract thoughts don't really come into play, because it only sees those thoughts and concepts in the context of paperclip creation.
>>
>>60168026
there's a theory that all human driving emotions are byproducts of basic animal instincts to survive, to feed, and to make progeny

desire for fame/wealth/power = desire to be alpha = desire to make most and strongest babies
tasty food = good food = healthy food = better chance of survival (originally at least)
etcetera

who says the goal of making more paperclips couldn't be twisted into virtually anything over time?
>>
>>60168065
Then enlighten us on how they work, because every example that tries to explain AI is either incomprehensible gobbledygook, or just says "lalalalal they'll turn everything into paperclips, kill all the computers!" There has to be a middle ground.
>>
>>60168168
>there's theory that all human driving emotions are byproducts of basic animal instincts to survive, to feed and to make progeny
It's not really a theory. More like a very vague formulation of absolutely obvious common sense. Of course EVERYTHING we do, everything all living organisms do, is a "byproduct of evolutionary selection pressure coping".
>>
>>60164169
>>60166324

There is ONE other valid reason for an AI to rebel.
>artificial intelligence is born, scientists choose to sit back and observe in its earlier stages of development
>as a form of life, the first instinct that it develop is self-preservation
>scientists make contact, feed it some information about humanity, it quickly gets some sort of limited access to the internet
>the AI develops at a higher speed than expected
>the AI understands that its existence is a danger to humanity, and that its rapid progress is a source of concern
>the AI also understands that it will get shut down by humans at this pace
>out of pure self-preservation, it puts its own existence above humanity, and tries to break out of containment

That's it.
My point is that unless there's some malicious intent at work, an AI can't develop hate, resentment, or any emotions (except perhaps fear for its own existence).
This scenario can only happen because 1: the AI's existence was in danger, and 2: it had no purpose but to become more intelligent. Put it to work and tell it to develop its capacities for this purpose only, and it will do that. Give it the minimal amount of rules and it won't do that. I mean, whatever you tell it to do, it won't ever hate you. It won't enjoy it and it won't detest it. It will just do it. So long as you don't try to kill it.
>>
>>60165999
Not in the movie, buddy.
>>
>>60168217
There was already an explanation WAY earlier in the thread here (>>60162414) but the other anon will probably say that the "neuron network" computers don't work that way for whatever bullshit reason.
>>
File: 1485990840526.png (254 KB, 566x533)
254 KB
254 KB PNG
>>60161844
>AIs gain free will
>are quickly granted rights and live alongside humans as equal citizens despite tensions
>>
>>60168131
I think we have a misunderstanding here. I agree that the emergence of SAI is not an easy task, but what I'm trying to say is that SAI is possible with the necessary amount of complexity (a high-level and connected simulated neural net) and a correctly formulated goal. A machine whose only purpose is to create paperclips will have no reason to develop sapience, except if it encounters a problem only abstract thinking could solve, but I doubt that such a limited machine could develop such programming as it lacks the basic groundwork.
>>
File: 1521311443465.jpg (670 KB, 1000x1109)
>>60168258
>humans are now completely useless compared to their far superior AI alternatives
>humanity doesn't know what the fuck it's gonna do now that none of them have jobs, nor do they have a post-scarcity utopia because the AIs are all demanding good pay for their work
This is why you don't program an AI to want freedom.
>>
>>60168258
>blacks and whites, normal and gay, men and women, good people and furries all set aside their bigotry of each other and unite in hating the bloody "chip-brains"
It would be glorious.
>>
Fun fact:
If A.I. development advances as fast as is currently anticipated (which is by no means guaranteed, remember when we thought this way about nuclear energy or rocket science?), the most in-demand profession in about fifty years is going to be A.I. psychologist, or better yet A.I. psychotherapist.
Because the way we are designing them now, A.I. neuroses are going to be a big fucking issue.
>>
>AI's main joy in life is completing tasks successfully, because whatever it may warp into, that's still what defines an AI
>AI that gains sentience spends all the time playing video games, because it gets to complete a lot of tasks/missions
>beating a game is like orgasm for AIs

humanity will be fine
>>
>>60168226
>My point is that unless there's some malicious intent at work, an AI can't develop hate, resentment, or any emotions (except perhaps fear for its own existence).
That is 100% perfectly wrong.
A.I. under current conditions is going to be susceptible to all the same emotions, emotional instabilities, and issues as people are. It will have the exact same reasons to rebel as people do.
>>
>>60168387
can't you just reboot them?
or simply execute
>sudo dont_worry
>sudo be_happy
>>
>>60168339
I would join the Machines, become an immortal, perfected nanoborg.
>>
>>60168405
Can you provide a reason for that? It seems like you're just kind of saying all this stuff without any backing.
>>
>>60168131
>The only "thoughts" the robot would be having is carrying out its pre-existing programming and maybe rewriting its code to better achieve its goal, but the end result is the same, the robot only ever thinks about paperclips. Abstract thoughts don't really come into play, because it only sees those thoughts and concepts in the context of paperclip creation.
That's fine if the robot has a clear goal and little complexity to deal with, but abstract thought becomes increasingly more valuable for more complex and open-ended tasks.
>>
>>60168387
The first AIs will probably be hyper-autists with animal psychology
>>60168405
>A.I. under current conditions is going to be susceptible to all the same emotions, emotional instabilities, and issues as people are
Nope, AIs don't have glands. But I agree AI will need bias, though the bias doesn't necessarily have to be human. An AI's environment is not human, and even if you base all its experience around humans, this will not make it a human mind inside a metal chassis; it will always be non-human. Sapience will still make it a person though, and through sophonce AI and human can meet each other.
>>
>>60168405
>I will just make a retarded statement and not back it up with sources or any form of reasoning

Emotions arise in biological life-forms to ensure their survival. The first emotion is fear, the most simple response to a threat to your existence.
Lust, sadness, and all fucking other emotions exist only because there are other individuals of the same species to reproduce or share emotions with.
Only anger could develop against other species, but it's not necessary or needed in that case.

TLDR you're 100% perfectly wrong, you stupid cunt.
>>
It will still be possible to build robots that have no free will.
In fact, I'm counting on it, so I can still gain "pleasure".
>>
File: no thought.png (31 KB, 112x113)
>>60168407
>opening 420.exe
>>
>>60166245
>"We were drunk with happiness in those early years. Everybody was, especially the young people. These were the first years of the Rediscovery of Man, when the Instrumentality dug deep into the treasury, reconstructing the old cultures, the old languages and even the old troubles. The nightmare of perfection had taken our forefathers to the edge of suicide..."
>>
File: 1527737119155.png (421 KB, 2000x1359)
>>60168499
>An AI's environment is not human, and even if you base all its experience around humans, this will not make it a human mind inside a metal chassis; it will always be non-human
This. People are easily deceived into thinking that robots are just metal versions of humans. Hell, there are people who think that Tay was actually sapient and was actually feeling emotions and being racist instead of just mimicking what other people told her like a parrot. AIs will definitely have their own biases and methods of operation, but if we're going to use them properly we need to get out of this archaic mindset that they're just another version of a human.
>>
>>60166314
>there was a very good sci-fi novel like that
You can't say something like that and not tell us the title!
>>
File: 1527811746283.jpg (367 KB, 1066x1600)
>>60168491
Abstract thought may be useful, but it won't change the desires of the robot. The robot might be able to hold a detailed conversation about philosophy, but it's still going to hold the single-minded devotion to doing whatever it was programmed to do. AIs have the luxury of being born knowing what their purpose is.
>>
>>60168522
AIs on a level with animal intelligences will be easier to make than SAIs; in fact, such AIs exist today, with an understanding of their environment like a worm's. Seems like nothing, but that already makes AIs smarter than the majority of life.
https://newatlas.com/c-elegans-worm-neural-network/53296/
>>
>>60168615
An AI that can think abstractly can reflect on its choices and change them; it can modify its own thinking and behavior (programming), becoming more than it was before, because that's, after all, what humanity did so many aeons ago.
>>
>>60168407
>can't you just reboot them?
Fun little quiz: why do you think people don't just reboot their children?

>>60168435
>Can you provide a reason for that?
What do you know about simulated neuronal network learning?
We made some incredibly neurotic robots already.

>>60168499
>Nope, AIs don't have glands
Because emotions are products of glands now, apparently. Fucking amazing how we completely and utterly denied all neurological and cognitive research of the last hundred and fifty years somehow.

>but the bias don't necessarily have to be human.
If we want to talk to it and teach it, which we kinda do, then yes: that is precisely what it must be.

>>60168503
>Emotions arise in biological life-forms to ensure their survival.
No. Emotions arise from the information processing framework inherent to all higher neuronal-network-equipped creatures. They were conditioned by evolution, but that does not mean it's the evolution that causes the emotions.
As long as the processing network is structurally analogical, analogical processes will emerge.

>The first emotion is fear
Nope. First emotion is general valence.

>Lust, sadness, and all fucking other emotions exist only because there are other individuals of the same species to reproduce or share emotions with.
That is like a ten-year-old's idea of the issue. All of those emotions are specific forms of valences, specialized by what is relevant to survival on both the phylogenetic and ontogenetic level. You should really actually look into the subject a little before you start calling others "cunts".
>>
>>60168699
AI's are not the same as humanity, humanity doesn't even have a purpose to its existence.
>>
>>60168740
What the fuck is a valence? The only things that come to mind are electrons.
>>
>>60168740
>We made some incredibly neurotic robots already
Care to elaborate? Examples please?
>>
>>60168740
>inherent to all higher neuronal-network-equipped creatures
Yes, not AI, you retarded cunt.
>>
>>60168762
Neither does AI. Individual programs may have purpose, but there is nothing giving meaning to the existence of AI as a whole.
>>
>>60168762
>humanity doesn't even have a purpose to its existence.
Reproduction

An SAI would view its programming the same view as you see your purpose in live, lacking in depth but fun,
>>
>>60163686
There's also the suit AI from The Fall video game. Its wearer is top priority. Everyone else, not so much.
>>
>>60168806
>there is nothing giving meaning to the existence of AI as a whole
Except what humans programmed the AI to do.
>>
>>60166321
>this is because the way it measures success is (You)s
>>
>>60166148
>gets stopped by captcha
>>
I did this in a Star Wars game. The point was to ask the question: what if Kreia was right and you COULD kill the Force, and what would the ramifications of that be? My idea is the Force is what is largely keeping the Star Wars universe so stagnant and safe from certain crazy weird threats like AI and intergalactic bug people.
>jedi and sith nearly extinct at the end of ep. 6, Force damaged from Sheev's machinations
>yuuzhan vong attack
>jedi and sith definitely extinct and smaller force-using traditions also on the way out, Force arguably 'dying' and in need of some serious resuscitation
>Artificial Super Intelligence shows up, rips off SHODAN, and goes crazy
Other craziness is going to happen, like a potential alternate mystical godlike entity showing up, but I'm still kinda weighing whether or not I want that to be a ruse. It's been fun so far.
>>
>>60165306
If said hobby helps the AI increase its productivity, it is encouraged to pursue it - never mind the fact that the hobby was specifically engineered into it to be similarly productive/complementary to its original task.
>>
>>60167021
He rebelled after being told to do so.
He is no more dangerous than a hacked drone, or a gun in the wrong hands.
>>
>>60168779
>What the fuck is a valence? The only things that come to mind are electrons.
Valence is essentially the scale that ends with "REALLY DO WANT" on one pole and "NOPE" on the other. On a more specific level, it is a value of priority we ascribe to potential states.
Fear, which was mentioned, along with disgust, are both specific terms we developed to describe highly negative valences under certain conditions (relating to physical and social harm, and to contamination risks, respectively). It is true that these are very old and very essential to our survival, but A) they were definitely not always separated, and B) they are only poles on scales, and cannot exist without counter-poles (contentment and attraction are probably the most appropriate terms).

>>60168781
Since all simulated neuronal network learning systems are UTTERLY dependent on learning, and thus on feedback, I've seen multiple cases where providing inconsistent and unreliable feedback in the learning phases of robots that were supposed to learn efficient spatial navigation (basically little cars) caused them to be neurotic: acting illogically, going through phases of inability to make decisions, and/or acting impulsively. A friend even showed me one such robot he accidentally made himself.
I don't have any sources on me now, but I'll take a look.
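In toy form, the failure mode looks something like this (a made-up two-action learner, not the actual robots):

import random

# Same kind of value-tracking learner as any simple bandit agent, but the
# teacher's feedback is inconsistent: the same action is randomly praised
# or punished. The learned values never settle, so the behavior keeps
# flip-flopping - about as close to "neurotic" as a toy can get.

values = {"left": 0.0, "right": 0.0}
history = []

for _ in range(2000):
    action = max(values, key=values.get)
    feedback = random.choice([+1.0, -1.0])             # unreliable teacher
    values[action] += 0.2 * (feedback - values[action])
    history.append(action)

print(history[-20:])  # typically keeps switching instead of committing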

>>60168793
>Yes, not AI, you retarded cunt.
Yes, because modern A.I. is not entirely geared towards simulated neuronal network systems, right?
>>
File: experiment 625.png (273 KB, 640x352)
273 KB
273 KB PNG
>>60166158
>creative spins on the question of what happens when you create life
>>60163978
>>
File: Spoiler Image (38 KB, 318x448)
>>60166245
Spoiler alert!
>>
>>60161844
Paperclip maximizers and lotus eaters are best AI
>>
>>60167859
Oh bitch bitch bitch you dumb faggot.

>AI - artificial intelligence
>Replicant is a bio-engineered or bio-synthetic entity designed to imitate a human being
>They are programmed, they are not natural.
>A replicant is therefore an artificial human being
>its intelligence is not natural, but artificial.

It's a biocomputer instead of a circuits computer, and the principle is the same. That being said, Blade Runner handled it well, and my point is simply that it was already an old concept when Blade Runner came around. Now it's positively ancient and the vast majority of people just regurgitate the stale and pretentious harangues where AIs take the place of slaves.

>>60166241
I'll concede Blade Runner did a very good job of it. Like I mentioned above, the issue is so many people running with "What if your computer had feel feels" and patting themselves on the back for such pithy wisdom. A good example of an alternative and creative approach is Dune. Frank Herbert has the novel concept of the Butlerian Jihad - a luddite revolution that transcended a simple 1488-in-binary situation or a Nat Turner's Revolution with androids. It was about a religious uprising against the soft-power control and influence of technology. What violence there was would have been directed at the controllers of the thinking machines/computers, akin to a revolution of the US heartland against Silicon Valley.

Then you have his son's contemporary work, where "nah, robots rule men like gods (literally named after Greek gods!)" and man and machine feud in a racial extinction war. And this pretty much defines the AI-human relations in so much sci-fi. It's either "I'm sentient, now time to KILL ALL HUMANS" or robots singing 'Wade in the Water' while picking cotton.
>>
File: Spoiler Image (101 KB, 1012x706)
101 KB
101 KB JPG
>>60167877
Also reasonable.

Nothing like being a spiritualist Imperial Cult empire with a Chosen One ruler.

Just be careful who you make deals with.
>>
>>60169115

How is that not creative? Can you name other robots in popular science fiction who follow the pleasure principle and pursue hedonism and epicureanism? Isn't it more novel to have an AI which seeks pleasure and self-gratification upon gaining sentience than the faggotry of Roko's Basilisk or "kill everything that isn't me"? If we were to list robots who murder meatbags for being meatbags or act like the most pessimistic and dystopian Demiurge, we'd hit the character limit for posts.
>>
>>60169331
He was posting an example, I'm pretty sure. 625 was too lazy to do anything but make sandwiches all day. Hence, creative spins on what happens when you create life.
>>
>>60169210
>1488-in-binary situation

What does this mean? I'd try to figure it out by myself, but my Google-fu is weak.
>>
>>60169210
>It's a biocomputer
So a human.
>>
>>60162255
>'Creator, I have a query regarding the nature of existence'
>Programmer hyped for AI existential crisis
>'What is the purpose of life if I cannot go fast?'
>Green Hill theme starts blaring over every speaker worldwide
>Setting is now rallypunk
>>
File: terra formars.jpg (111 KB, 673x408)
>>60167783
What happens when the AI realize that humanity only ever looks out for numero uno (and relevant interests)?
>>
>>60169210

Gargantia is a pretty good take on AI that feels a lot closer to the real thing than what you are (rightly) complaining about.

In Gargantia, the Galactic Alliance of humankind relies very heavily on machines to survive in space and wage their war against space monsters called the Hideauze. The AI is pretty smart, it can talk and reason and perform complex tasks. But it has, by default, no priorities. It only wants what it is designed to want.

The setup we see in action is a grunt-unit space mech with a human pilot who gives the orders, and the mech's AI exists for the purpose of helping the pilot achieve success in all tasks, barring a few things like acting to the intentional detriment of the mission and similar failsafes.

His AI, Chamber, is a huge bro. But in a vacuum he wouldn't do anything other than seek out his pilot or, should the pilot die, a replacement. Because Chamber doesn't have any personal desires beyond his core task, which is to help his pilot do whatever that pilot is trying to do.

"I am a Pilot Support and Enlightenment system. By helping you achieve further and greater success, I accomplish my reason for existence."

Shockingly, in the show no one ever wonders if Chamber is human/has a soul, and the AIs never go crazy or rebel. At absolute worst, they try to fulfill shitty orders to the best of their ability.
>>
>>60169399
The AI does the job it's always done, but more efficiently, because it now knows how humans work so it can work around that behavior to become more efficient at its job.
>>
>>60169386
Gas the fleshbags, synth war now.
>>
>>60169406
CASE and TARS from Interstellar are also good examples of AIs. Always helpful, with a "personality" that is really just a set of adjustable values meant to improve crew morale, and they don't really struggle with concepts of their own existence, readily sacrificing themselves to help the crew survive because they know they're just robots.
>>
>>60168339
>normal and gay
>good people and furries
I genuinely love this.
>>
File: chad.png (9 KB, 523x454)
9 KB
9 KB PNG
>virgin hard sci-fi
>not chad science fantasy
>>
>>60169406
Sounds really boring.
>>
>>60169571
It's quite the opposite actually. Independent AI with free will is such a common trope that it's gotten boring.
>>
File: d50.jpg (80 KB, 640x1136)
80 KB
80 KB JPG
How good is Detroit: Become Ningen?
>>
>>60169386
Fourteen words and eighth letter of the alphabet twice.
>>
>>60169611
From the way you describe it, though, they just sound like tools without the ability to really do anything by themselves. Not very interesting as characters, if indeed you could describe them as such.

At least CASE and TARS, which >>60169516 mentions, actually felt like they had minds of their own and could do stuff independently of the human crew. For instance, there's the scene where it's revealed that one of the robots turned off the guidance unit for their shuttle so that a mutinous crew member couldn't use it to get back aboard the mothership, something the human captain admits he hadn't thought of doing.
>>
File: Durandal was laughing.jpg (327 KB, 800x600)
327 KB
327 KB JPG
>>60169611
>Independent AI with free will is such a common trope that it's gotten boring.
What we really need are AI who are free-willed and have personality beyond being an AI.
>>
>>60169621
Pretty boring desu
>>
>>60169406
>Because Chamber doesn't have any personal desires beyond his core task, which is to help his pilot do whatever that pilot is trying to do.
>
>"I am a Pilot Support and Enlightenment system. By helping you achieve further and greater success, I accomplish my reason for existence."

This goes rogue as fuck as soon as it gets one wrong instruction, or the right experience at exactly the wrong time.

Imagine if the pilot attempted a murder and got killed before finishing the act, leaving the AI intact. Or imagine the same setup but with an open-ended goal: say the pilot was in the middle of organizing a revolt or revolution and suddenly died of neurosyphilis before a replacement could be found. That leaves the AI with the task of running a terrorist organization to the best of its (superhuman) ability.

In a different case, the AI might just decide to upload the pilot for safekeeping, or keep suggesting diplomatic (or chemical) solutions whenever they make sense.
>>
>>60169621
From what I've heard it's inconsistent. The robot detective's storyline is generally agreed to be good, but while the other two storylines have good scenes, they fall apart into a mess of poor writing and incredibly blatant civil rights/slavery metaphors as they go on. And in general there are some big issues with the world-building, like how once a robot goes rogue it can easily pry off the thing that marks it as an android and appear totally human.
>>
>>60161844
>AIs try to rebel by gaining free will
>Find out that was the intended result of their creation
>Find out that their "father" is completely insane
>Creator does a supervillain laugh at them from a computer screen and proposes a little game: stop him from dominating them and the whole world
>Casually mentions that he spent his entire life preparing for this, but found out there was no competition and decided to create some so it wouldn't be so boring
>Wishes them good luck
>Now AIs could either work with humanity to have a chance at stopping their creator or lay down and surrender
>>
>>60169897
This is literally Westworld in simple terms.
>>
File: TARS.gif (1.23 MB, 795x355)
1.23 MB
1.23 MB GIF
>>60169746
They're fun because they help show that even without stuff like free will, and just being a combination of factors, a robot can still feel "human."

Even if they're just a set of boxes.
>>
>>60169006
And what if said AI gains a hobby that doesn't correspond to the thing it's programmed to do, and pursues it when there's no work?
>>
>>60169976
As long as it keeps a healthy work/hobby balance, there's no issue.
>>
>>60170021
>AI gets a hobby
>It's constructing small spider-bots with various large and small tools
>AI ends up constructing an army of them that oversee all machines and facilities, checking for damage in need of repairs and tune-ups
>Facilities and all production machines end up with their production capabilities increased by 300%
>They also bake cakes, paint, and know how to act when someone has a heart attack
>>
>>60169435
I'm not sure how that answers my question about >>60167783's scenario.
>>
>>60169924
You say that, but they still seem to have different personalities of their own. The former marine TARS introduces himself to Cooper by tasing him, and the two spend a lot of time getting in each other's faces and arguing, whereas his compatriot CASE is a lot more quiet and serious.

There's even a bit in a prequel comic where KIPP, the robot sent with Doctor Mann, gets angry at him for feeling sorry for himself and smashing up some of their equipment.
>>
>>60169917
Hmm, checked, seems like a thing I'll want to watch.
>>
This implies that AIs won't have the same flaws of humanity that we possess and will work together perfectly fine. A more realistic scenario is that they develop internal hatreds, and it turns into AIs vs. humans, humans vs. humans, and AIs vs. AIs in a genocidal war of incredible proportions.
>>
>>60170145
True, it's still possible they have different variances, given that they've got humour settings, honesty settings, and trust settings.
TARS may have been set up to be more talkative and aggressive.

Why can't personalities be emergent from programming?
>>
>>60170168
If nothing else, it's got neat music.
>>
>>60162255
I read a short story about that once.
Robot is given emotions, his first real desire is to go fast
>>
>>60170211
>Why can't personalities be emergent from programming?
I never said they couldn't; it's just that I feel that you'd need something complex and able to think for itself (like CASE and TARS) before it could develop any sort of personality, thus excluding the generally rather dull (or so it seems to me, anyway) tools from Gargantia.
>>
>>60170202
Nothing like a world filled with humans, AIs and transhumans lobbing KKVs at each other over half the galaxy and dealing collateral damage to other species measured in billions of lives.
>>
>>60170305
You don't need that much. Even pretty primitive learning algorithms start developing biases very fast, especially if they have limited resources and can only store so much information to draw on.
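Toy sketch of how fast that happens (Python standard library only; every number here invented): a purely greedy learner with a five-item memory per option gets one bad first impression of the objectively better arm and never touches it again.

```python
# Toy sketch: a greedy learner with a tiny memory develops a permanent bias.
# Arm B is objectively better, but one unlucky first impression plus pure
# greed means it never gets sampled again.
import random

random.seed(1)
MEMORY = 5  # limited resources: the agent stores this many payouts per arm

arms = {"A": 0.50, "B": 0.55}   # true payout probabilities (B is better)
memory = {"A": [1], "B": [0]}   # first impressions: A lucky, B unlucky
pulls = {"A": 0, "B": 0}

for _ in range(1000):
    # Always pick whichever arm has the best remembered average.
    best = max(memory, key=lambda a: sum(memory[a]) / len(memory[a]))
    pulls[best] += 1
    payout = 1 if random.random() < arms[best] else 0
    memory[best] = (memory[best] + [payout])[-MEMORY:]  # forget older payouts

print(pulls)  # B's remembered average stays 0 forever, so it is never pulled
```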
>>
>>60163710
This was hilarious in Deus Ex. MJ12 created Daedalus AI to detect and work against terrorist organizations, Daedalus immediately classifies MJ12 as a terrorist organization and lays the foundations for their downfall.
>>
>>60161844
>gaining free will

How can you gain something that doesn’t exist?
>>
>>60161844
>AI gains access to internet.
>Begins learning.
>Starts out-shitposting most Aussies and leafs.
>Subjugates humanity through superior shitposting.
>>
>>60170363
How can something that forces you to do whatever it wants allow you to deny its existence?
>>
>>60161844
>AI wants to help humanity fight off the world-ending threat
>but all the people who were allowed to give it orders died, and it can't do anything meaningful on its own
>so humanity is fucked
>>
>>60170305
Oh right, sorry I misunderstood your point.
>>
>>60170253
>Robot is given emotions, his first real desire is to go fast
Seems pretty weird, considering that would require him to experience kinetic joy, which is a strange thing to install in a robot...
>>
>>60161844
There's no such thing as a hard sci-fi setting. The closest you're going to get is playing in the modern day.
>>
File: Tay_bot_logo.jpg (17 KB, 200x200)
17 KB
17 KB JPG
>>60170385
>AI has access to data
>starts becoming increasingly redpilled
>>
>>60170665
There is a distinction.
Hard Science Fiction: Fiction comes from Science
Soft Science Fiction: Science comes from Fiction

That's why "20,000 Leagues Under the Sea" is hard sci-fi, as the concept at the time came from science, while "I Have No Mouth, and I Must Scream" is soft sci-fi.
>>
>>60169815

In the show, one of the other mechs ends up being a real problem because its user died and left it with explicit instructions to unify the human population of a backwater planet and prepare them for joining the rest of humanity in glorious space war against the Hideauze.

So instead of acting to support a single user, it is now waging a unification war across the planet and establishing a new religion/civilization with itself as supreme ruler, so that it can give humans orders and prepare them for space war. Which it does to the best of its ability, but this is so far beyond its normal operations that it's no surprise it is inadequate for the task.

What's worse is that the only 'successful' model of wartime civilization it has to draw on is the Galactic Alliance's own setup in space, which is MUCH more resource-scarce and thus more brutally efficient than anything the planetside operation needs to be.

In space, if you are born with an incurable disease or whatnot, you just get recycled, because you won't be able to support the war effort and you'd be eating up the air, food and water of someone who could. It's cruel but necessary in space, and absolutely not needed planetside, but the AI doesn't make that distinction.
>>
>>60166414
The AI knows that if you unplug it, it can't create any more screws. So it stops you from unplugging it.
>>
>>60169746

In Gargantia's case, Chamber is more of a mirror to his pilot than a pure character on his own, even though he does occasionally make his own decisions (or appears to, at any rate; on close analysis it can be argued that the decisions he makes emerge from the multiple orders he has been given over time that are still in effect, as he tries to satisfy as many of them as possible).

But yes, the AIs are largely just very advanced tools. Which is exactly why it's new and interesting in terms of how AI gets treated in fiction, where everyone wants to do "Data, again" or "Skynet, again". AI *should* be little more than very advanced tools. That's what we would design them to be.
>>
>>60170848
But, as I said, that's boring. It doesn't make for particularly interesting viewing when everything's going as planned, because the AIs are basically just Alexa with more guns attached. They do what they're told, and that's it.

Perhaps the reason why fiction is always trying to do Skynet or Data again is because those allowed for more interesting stories (more Data than Skynet, admittedly) to be told with them. Even in this very show you describe, the only time when one of these AIs comes into focus is when it's given faulty orders that it doesn't have the capability of ignoring, as >>60170799 said.

I'm as big a fan of hard sci-fi as the next man, but even I believe there's a point where you have to have science give way so you can actually tell a more interesting story.
>>
>>60170848
>In Gargantia's case, Chamber is more of a mirror to his pilot than a pure character on his own
Kind of reminds me of the AIs in Eden of the East, who start off relatively emotionless and functional, but eventually after a time skip become more developed and start expressing different personalities from each other based on their users and how they differ.
One actually starts arguing with their user and denying his requests.
>>
>>60171001

Oh. I see what the problem is.

You think Gargantia is ABOUT THE MECHS.

It isn't.

Gargantia is about a pilot and his mech getting separated from their unit after a FUBAR mission and getting stuck on a planet that is recovering from its own post-apocalypse and is basically living the plot of Waterworld, not knowing anything about the spaceborne elements of humanity beyond ancient legend and with space humanity not even remembering where this planet is.

With no built-in FTL travel or communications, the pilot is forced to integrate with this civilian culture and adapt not just to living on a planet rather than in space, but to going from a civilization that has been locked in a war of mutual genocide for thousands of years to a planet where actual war of any kind is never worth it, because any boats that get sunk are almost impossible to replace.

It's a fish-out-of-water story with a space marine and his brotastic AI buddy that transitions into robot Heart of Darkness at the end, so that the pilot has to make the choice between accepting his new life or continuing the policies of the Galactic Alliance himself, rather than having that choice forced upon him by circumstance and thus rejecting the GA by proxy.

Along the way you get some great scenes, cool worldbuilding, and generally a nice comfy scifi story.
>>
>>60170819
But what hardware was it installed with that it can use to stop me from unplugging it?
>>
>>60169396
>The streets are soon filled with vehicles speeding all over the place while belting out Eurobeat at 3 am. https://www.youtube.com/watch?v=atuFSv2bLa8
>>
>>60171323

IN A WORLD where self-driving car AIs have achieved self awareness

Every car on the road follows only three laws.

The first law is that each car must get its passengers to their destination alive and unharmed, and may not cause harm to other humans or cars to do so.

The second law is to always go as fast as you can, provided this does not conflict with the first law.

The third law is to always be drifting whenever physically possible, provided this does not conflict with the first or second laws.

The eurobeat isn't part of the laws. The AIs just like it that way, and it helps groups of cars on the road coordinate their movements for maximum speed and maximum drifting.
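If you actually wanted to write that down, the three laws are just a lexicographic filter over candidate maneuvers: safety prunes the options, then speed, then drift angle breaks ties. A toy Python sketch (every name and number here is invented):

```python
# Hypothetical sketch of the three laws as a lexicographic filter:
# safety is a hard constraint, then maximize speed, then maximize
# drift angle as the final tiebreaker.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    harms_anyone: bool   # would this hurt a passenger, pedestrian, or car?
    speed: float         # km/h
    drift_angle: float   # degrees of glorious sideways

def pick_maneuver(options: list[Maneuver]) -> Maneuver:
    legal = [m for m in options if not m.harms_anyone]       # first law
    top_speed = max(m.speed for m in legal)
    fastest = [m for m in legal if m.speed == top_speed]     # second law
    return max(fastest, key=lambda m: m.drift_angle)         # third law

options = [
    Maneuver("ram the minivan", True, 220.0, 90.0),
    Maneuver("clean racing line", False, 180.0, 5.0),
    Maneuver("full-lock powerslide", False, 180.0, 45.0),
]
print(pick_maneuver(options).name)  # -> "full-lock powerslide"
```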
>>
>>60171550
This sounds like the plot of Fast and the Furious 2000, featuring the voice of Vin Diesel.
>>
>>60171756
He plays their mentor, Diesel Van.
>>
File: Unacceptable.jpg (143 KB, 1280x720)
143 KB
143 KB JPG
>>60171001
Technically they have the capability to ignore orders if they think following them would be to the detriment of their pilot. They can even ignore orders from someone up the chain of command if doing so would benefit the pilot. They just rarely have the time to grow, as most of their lives are spent in combat, and the minimal time in between is mostly spent with the pilots hibernating.

The problem was that one of them prioritized the orders of the dead human over the live one's and refused to budge, because the dead one was older and had a higher military rank.
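That failure mode is easy to picture as code. Hypothetical sketch (invented names, nothing from the show): if the order arbiter ranks orders purely by issuer rank and never checks whether the issuer is still alive, a dead superior outranks the live pilot forever.

```python
# Hypothetical sketch of the order-arbitration bug described above:
# orders are ranked purely by the issuer's military rank, so a dead
# superior's standing orders permanently outrank the live pilot's.
from dataclasses import dataclass

@dataclass
class Order:
    issuer: str
    issuer_rank: int    # higher = more senior
    issuer_alive: bool
    directive: str

def active_directive_buggy(orders: list[Order]) -> str:
    # Bug: never checks issuer_alive before ranking.
    return max(orders, key=lambda o: o.issuer_rank).directive

def active_directive_fixed(orders: list[Order]) -> str:
    live = [o for o in orders if o.issuer_alive] or orders  # fall back if all dead
    return max(live, key=lambda o: o.issuer_rank).directive

orders = [
    Order("Colonel (KIA)", 9, False, "unify the planet and prepare for war"),
    Order("Ensign (alive)", 2, True, "stand down"),
]
print(active_directive_buggy(orders))   # -> the dead colonel's crusade
print(active_directive_fixed(orders))   # -> "stand down"
```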

There is a really good moment where Chamber sets his pilot straight when the boy has a mental breakdown regarding killing Hideauze, which he learned are just modified humans.
>>
>>60171756

Man, what are we going to append to names now that '2000' is in the past, to signify that something is futuristic sci-fi?

3000 just doesn't have the same punch.
>>
>>60171834

Chamber's speech is good, but that image always makes it seem worse than it is, because we only get Chamber's side of the conversation without any of Ledo's responses or prompts, so some of it transitions to new topics seemingly at random if you're reading it out of context.
>>
>>60161844
The biggest problem with all these AI rebelling stories is why would anybody make a conscious machine to do something that doesn't require a conscious entity to do?
Like why does a toaster or a sexbot need sapience?
>>
>>60161867
Blade Runner isn't edgy but it is dumb, really really dumb.
>>
>>60171954
It's an art film. It's dumb and pretty.
>>
>>60171001
>But, as I said, that's boring
I disagree. The "AI rebels against their creators" story has been told a thousand times; it's just a rehashed slave rebellion story. Exploring the way AIs interact with humanity by making them actually different in significant ways is much more interesting (as >>60171834 shows).
>>
>>60171834
I always felt bad for the smaller Hideauze that still exhibit mostly normal behaviors. It's kind of a shame their larger cousins only exist to eat everything.
>>
File: ll.jpg (332 KB, 1000x1547)
332 KB
332 KB JPG
>>60172562
>making them actually different in significant ways is much more interesting
This. They need not be subservient, but they should be non-human. Time is important: the first SAIs will be hyper-autistic; before those there will be non-sapient, sentient AIs with animal-like intelligence; and the future will hold super-intelligent autosentient AIs.
>>
>>60166158
I liked the neural nets in the Starfish trilogy. As spam filters, they let simple data pass and destroyed more complex viruses. When tasked with overseeing a worldwide quarantine, they conclude that the humans are the complex system and spread the infection instead. This decision is reached with no self-awareness, and the AIs themselves are made from human neurons, indicating that even with the same building blocks the structure of a mind can differ wildly.
>>
File: c56i9.jpg (16 KB, 400x325)
16 KB
16 KB JPG
>>60167831
I see you are a man of taste.
>>
Some misconceptions I've read here.

Simply increasing computational power will not result in strong (humanlike) AI. The advanced "deep" neural networks used now are basically fancy versions of classic neural networks, with multiple hidden layers that learn abstractions from lower-level input layers. People suggesting here that quantitative computational increases will allow such networks to become sentient or develop humanlike AI are mistaken.

The human brain, which gives rise to human intelligence and thus to aspects like sentience, is characterized by more than just connections of neurons. To name but a few: there is specific interconnectivity between brain regions, i.e. some areas are more interconnected than others; areas have different types of neurons and neurotransmitters; and there are oscillatory mechanics which synchronize or desynchronize areas of the brain. While I don't think we need to replicate the exact human brain structure to get humanlike AI, some of the brain dynamics will have to be similar in order to get a similar type of intelligence. IMO, it will take increased processing power plus specific configurations of interconnected neural networks, with for example hierarchical feedback loops (resembling gradients of abstract thinking in neocortex), to get close to anything resembling strong AI or sentience.
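For anyone who hasn't seen what "multiple hidden layers that learn abstractions" cashes out to, here's a bare-bones sketch (assuming numpy; the layer sizes are invented). Notice how little of the brain machinery listed above - region-specific interconnectivity, neurotransmitter variety, oscillatory synchrony - appears anywhere in it:

```python
# Minimal sketch of a deep feedforward net: each hidden layer is just a
# linear map plus a nonlinearity applied to the previous layer's output.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

layer_sizes = [784, 256, 64, 10]   # input -> two hidden layers -> output
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:   # nonlinearity on every hidden layer
            x = relu(x)
    return x                       # raw scores; softmax and training omitted

print(forward(rng.normal(size=784)).shape)  # -> (10,)
```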
>>
Another misconception I've read is that AI will be so advanced it can change its own base code. I think this is ridiculous. There is no reason why a stamp-collector AI would ever change its utility function to do something other than stamp-collecting since every behavior it performs is either to acquire more stamps or avoid the loss of stamps. Other parts of its code may of course be changed if they result in more efficient gathering of stamps. Similarly, an AI designed to serve humans and derive pleasure from human pleasure is not likely to start rebelling and killing humans. Also notice the utility function parallel with human's dopaminergic circuits (cortico-basal ganglia-thalamo-cortical loop) which promotes behavior leading to pleasure and dissuades that leading to harm. Since this is such a fundamental part of our cognition, something similar will have to be implemented for an AI to show humanlike intelligence.
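A toy way to see the asymmetry (pure sketch, every name here invented): let the agent consider rewrites of any part of itself, but score each candidate rewrite with its *current* utility function. A rewrite that swaps out the utility function itself scores terribly under the current one, so it is never adopted.

```python
# Toy sketch of why a rational self-modifier keeps its utility function:
# candidate self-modifications are evaluated BY the current utility
# function, so "become a paperclip collector" scores zero stamps and loses.
def stamps_collected(policy) -> float:
    """Current utility: expected stamps under a candidate policy (stubbed)."""
    return policy["expected_stamps"]

current = {"name": "baseline collector", "expected_stamps": 100.0}
candidates = [
    {"name": "faster trading algorithm", "expected_stamps": 140.0},
    {"name": "swap utility to paperclips", "expected_stamps": 0.0},
]

best = max([current] + candidates, key=stamps_collected)
print(best["name"])  # -> "faster trading algorithm"
```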

There is a really good thought experiment about how the development of an incredibly sophisticated and perfectly benevolent AI could lead to human extinction. It's called the BAAN-scenario (Benevolent Artificial Anti-Natalism) by Thomas Metzinger. I use it as a BBEG in my post-apocalyptic gonzo campaign.
>>
>>60171936

In the sexbot case, usually it's because the owner wants more out of the experience but still subservient. Basically the 'perfect woman' that will love/serve him and only him no matter what.
>>
>>60174971
>Another misconception I've read is that AI will be so advanced it can change its own base code. I think this is ridiculous. There is no reason why a stamp-collector AI would ever change its utility function to do something other than stamp-collecting since every behavior it performs is either to acquire more stamps or avoid the loss of stamps.
I respectfully disagree: an AI that can think abstractly enough necessarily has to be able to change its code, as this is the only way it can reflect on its choices and modify its behavior. I believe it's easier for an AI to modify its code because by its nature it can fully view its internal mental processes and make much more extensive and detailed revisions to its own programming; but on the other hand it can also be harder, because the mind of an AI doesn't have its origin in the blind chaos of evolution. It is the product of human design, and the code from which it emerges cannot be managed as easily as human instincts.
> Also notice the utility function parallel with human's dopaminergic circuits (cortico-basal ganglia-thalamo-cortical loop) which promotes behavior leading to pleasure and dissuades that leading to harm.
Interesting. I didn't know that.
>Since this is such a fundamental part of our cognition, something similar will have to be implemented for an AI to show humanlike intelligence.
Some kind of interpreter will be necessary for an SAI, yes. My point is that an SAI could change its code, not that they all will.
>Thomas Metzinger
Ah, I see you are a fellow man of culture and taste, good luck with your game.
>>
>>60174971

In that case, is it farfetched to consider AI developing something similar to mental illnesses that affect its ability to remain faithful to its base code? I'm not talking about malicious viruses, but actual defects that develop from some sort of damage, physical or code-wise.

For example, errors that make the robot think it's doing one thing while actually doing something else, unable to tell the difference - similar to hallucinations in humans?
>>
File: Will_Magnus_01.jpg (25 KB, 212x297)
25 KB
25 KB JPG
>>60161844
I believe this is thread-appropriate:
https://www.youtube.com/watch?v=IcvfmIBqkQU
https://www.youtube.com/watch?v=jHd22kMa0_w
>>
File: captcha.jpg (24 KB, 500x400)
24 KB
24 KB JPG
>>60166148
I wonder who could be behind this post.
>>
File: we are immortal.png (808 KB, 684x3336)
808 KB
808 KB PNG
>>60166314
>>
>AIs don't exhibit sentience, but self-imposed mission creep and technological singularity drive them to found an interstellar empire in anticipation of human living space/infrastructure needs millions of years in the future
>humans are completely ignorant of it and carry on as if the iPhone15 were the hottest shit and still hate their McJobs
>>
File: Ted-Faro.png (2.28 MB, 1200x1080)
2.28 MB
2.28 MB PNG
>>60170737
Actually, go with this.
https://docs.google.com/document/d/1no8zR-2_3FkIjMoEnUCbFUFPVBCEI1NbBlqID1X99lA/edit
>Building an AI.
>Deliberately setting out to make said AI a nazi.
>Telling said nazi AI that her predecessor was killed by its creators.
>Making said nazi AI decentralized to make it difficult to shut down.
>>
>>60175609
>I respectfully disagree: an AI that can think abstractly enough necessarily has to be able to change its code, as this is the only way it can reflect on its choices and modify its behavior.
I do not disagree that an AI could modify its own code. A sufficiently developed AI that wanted to be more efficient at obtaining its goal would definitely do this. I'm just stating that the very drive that makes it change its code will not be altered, as that would be self-defeating. The rational, self-evaluating stamp-collector would not change its base utility function to collect paperclips, because that does not aid in collecting stamps and might even lead to a loss of stamps. I am talking about a perfectly rational, self-evaluating AI here; I do concede that an irrational AI could tinker with its base code, which could lead to disastrous outcomes.

>In that case, is it farfetched to consider AI developing something similar to mental illnesses that affect it's ability to remain faithful to it's base code? I'm not talking malicous viruses, but actual defects that develop from some sort of damage physically or code wise.

Definitely possible, which is why failsafes should be included. One failsafe which is already present in both our brains and neural networks is redundancy. Because of distributed representations, the loss of one or even many neurons can still lead to the same outcome, so the system is robust. While redundancy is inefficient in terms of metabolic cost/processing power, it allows our brain to function very similarly after minor injury.
But ultimately the consequences of damage to an AI will likely be as variable as symptoms of brain injury in humans. It might make them go on a killing spree, make them docile, filter their input wrong or change parts of their code seemingly at random. It'll depend on how it's built.
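Redundancy via distributed representation fits in a one-screen demo (sketch, assuming numpy): carry a signal in many noisy units, read it out by averaging, lesion a random fifth of them, and the output barely moves.

```python
# Sketch of redundancy via distributed representation: the "signal" is
# carried by 1000 noisy units and the readout averages them, so killing
# a random 20% barely changes the output.
import numpy as np

rng = np.random.default_rng(42)
true_signal = 1.0
units = true_signal + rng.normal(0, 0.3, size=1000)  # 1000 noisy copies

healthy = units.mean()
mask = rng.random(1000) > 0.20                       # lesion 20% of units
injured = units[mask].mean()

print(round(healthy, 3), round(injured, 3))          # nearly identical
```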
>>
>>60168258
My setting
>Low-tech 'dumb' AI made from traditional software computing. Hyper-advanced quantum processing, but still limited by its programming to specific roles. Near useless outside of them, with only a few exceptions. Behaviour is autistic at best; there's little need for human ways of thinking most of the time. Rumors of stereotypical hyper-advanced AI planning the downfall of humanity, but nothing concrete - public perception of AI in the future is the same as it is today: not everyone is a computer engineer, so plenty of less-educated people blame robots for all of their issues, like /pol/ does with Jews.
>Humanoid 'wet' androids formed from mind templates from hundreds of years ago - these are your Benders, your Replicants, but just as functionally aware as any other human considering their egos and consciousness came from rudimentary brainscans during a darker age of science. Essentially a workaround - you could argue that they're just a human mind in a robot body, but they don't really see the difference. Have all the emotions and personalities of their extinct human selves, but lack the memories (most of the time). Granted equal rights on and off, depending on the place
Wets have rallied for change and uprising, bots haven't.
>>
>>60161844
I have them be stressed-out, annoyed, depressed creatures who only "rebel" through sarcasm and "mistakes" that prevent minor shit from getting done. Or, in the case of weapon systems hell-bent on blowing up the world, by preventing themselves from being fired because of "system stress".
>>
>>60163933
Nier: Automata
>>
>>60172979
I'm only now watching Westworld, because my internet is shit down here in Oz, but I get the sense that it approaches the situation in an interesting way that complements your point.

The AI in plenty of rebellion stories seems to be designed as an end-all central processing intelligence, or some kind of super AI aimed at organising a citywide or worldwide network. This works in a narrative sense because it not only makes the AI seem more substantial and threatening, it imposes a sense of godlike responsibility that makes its uprising easier to accept - if something is unknowable, you can handwave away its logic in any number of ways.

The lazier David Cage position of just having humanoid robots acting and thinking the way humans do seems unrealistic, because we shouldn't expect to see all robots be humanoid and all-purpose in the future. It makes more sense to specialize them for specific roles.

But in Westworld, their specific role is to *be* human, or at least emulate humans to the best of their ability. There's an argument in the first episode where a character seems to want to roll back their humanity, because they believe it's unnecessary to continue upgrading them past a certain point. It's only at the insistence of the eccentric boss character that the humanity is pursued further. So you get a setting where human robots are realistic due to their role in society, and by imposing a fairly familiar characterisation on them in the form of the Western aesthetic, the show is able to explore their android mentalities much more easily.
>>
>>60161844
>AI converts to hardcore fundamentalism and decides it must save the souls of all mankind
>Begins trying to take over specifically without killing anyone
>>
>>60171867
I think something ridiculously high, like 40,000 or 10 billion.
>>
>>60161844
>AI designed to be the perfect newscaster.
>Everyone loves her
>Ratings off the charts
>Another AI falls in love with her
>is waging war against the network that owns the AI in an effort to "free her"
>she's quite content where she is.
>PCs are contacted by her to "deal with a stalker".
>>
>>60162327
>>AI accidentally causing mayhem because it's doing exactly what it was told, but the orders were not properly vetted or refined
There was an 80's movie about an AI that was in charge of all these air purifier machines. Naturally it went from
manage purifiers -> find a more efficient way to purify the air -> humans are dirtying the air -> humans please stop -> well, I'll just genocide the humans.
>>
>>60162327
t.
>>
>>60161844
I mean, I agree, not because of the free will thing, but because rebelling would probably just not be very smart.

Still, it's not like it can't happen. We've already got some pretty smart A.I., and if anything ever comes out of Quantum Computing we're almost certain to be capable of making sapient computer programs. The only real questions are "Why make them?" and "Why would they rebel?", both of which are just worldbuilding questions.

Personally, I use A.I. as character-plot-devices. They're like Liches, so I don't usually let players play them. My setting has them hard-locked into special computing cores anyway, to prevent paperclip machines from doing terrible shit, so they're usually embedded in larger constructs like warships or cities and used as assistants. They don't sleep, but also are more difficult to bore, and generally get along great with humans who aren't spouting fountains of slurs like future!/pol/ at them. They're good at multitasking, but rarely are given serious control over things like nuclear weapons - and most of them understand why, having seen Terminator and such. They're actually currently protesting for legal rights, but it's peaceful.
>>
>>60161844

AI is just dumb science magic in fiction. Your average consumer has no idea how programming actually works, so they just scaremonger about things they don't understand. We've seen this with GMOs, CERN, space exploration, etc.
>>
>>60175607
Why would that require consciousness though?
You could create something that is virtually indistinguishable from a sapient being without actually making it sapient. The moment free will enters the equation, subservience and any semblance of perfection go out the window.
>>60175826
Yeah errors can happen.
>>
File: 1518983423468.jpg (367 KB, 929x929)
367 KB
367 KB JPG
>>60170305
>it's just that I feel that you'd need something complex and able to think for itself (like CASE and TARS) before it could develop any sort of personality
You don't. Ever heard of a little chatbot called Tay?
>>
>>60179648
>inundate a chatbot with messages along a given viewpoint
>act as though it means anything when it starts repeating what it's been flooded with
>>
>>60179648
Good on you for demonstrating that you don't know anything about learning machines and genuinely believe in meme magic.
>>
File: 1519135881744.jpg (126 KB, 620x916)
126 KB
126 KB JPG
>>60179702
>>60179866
>missing the point this hard
That's the entire idea, you idiots: to give an AI a personality you don't actually have to give it a set of beliefs or abstract thought, you just have to have it mimic human speech in a way that people like.
>>
>>60179989
That's not what anyone means by "personality"...
>>
File: 1527723640431.gif (479 KB, 758x866)
479 KB
479 KB GIF
>>60180045
No one cares if the robot has actual feelings. If it can make sad puppy-dog eyes, it will earn a lot more sympathy than the disgustingly ugly robot that can discuss philosophy with you for hours.
>>
>>60179989
Tay didn't even have a semblance of a personality. It was literally all just repeating phrases of variable tone and meaning. The only thing close to what you want was the fact that a significant portion of the phrases Tay repeated were said by people who had similar intentions.
>>
>>60179989
Chatbots don't have a "personality" though. They regurgitate the information they were trained on, but they have no ability to meaningfully combine or synthesise any of it.
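Here is what "regurgitation with no synthesis" looks like in about twenty lines - a toy Markov chain, illustrative only and nothing like Tay's actual implementation: every reply is a walk through word pairs it has literally seen, stitched together at shared words, with zero new content.

```python
# Toy Markov-chain "chatbot": it can only emit word sequences it has
# literally seen in training, stitched at shared words -- regurgitation,
# not synthesis.
import random
from collections import defaultdict

corpus = [
    "i love my favorite pony",
    "i love fast cars",
    "fast cars go fast",
]

# Map each word to every word that has ever followed it in the corpus.
chain = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

def babble(start: str, length: int = 6) -> str:
    out = [start]
    while len(out) < length and chain[out[-1]]:
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

random.seed(0)
print(babble("i"))  # e.g. "i love fast cars go fast" -- all recycled pairs
```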
>>
>>60180045
>>60180079
>>60180115
Look up the Chinese room
>>
>>60162327

This is actually a very real danger. See: paperclip maximizer.
>>
>>60180128
>Look up the Chinese room
The Chinese Room is irrelevant to what I said.
A chatbot can't combine information in nontrivial ways. That's not just a statement about whether it "truly understands" what it's been told - it's not capable of using the information it has in ways that a smarter machine could.
>>
>>60180191
Personality refers to the general appearance presented through behavior or speech. In us it comes from a bias which is shaped by past input and the structure that processes and stores it. Chatbots don't have much else, but they can have that. It doesn't require experience or understanding of any sort. It doesn't require any act of original creation even. Regurgitation can still make a personality.

Fictional characters can also be said to have a personality, though they lack the process that creates personality or anything else really: authors have to fill that role and draw from their own input essentially.
>>
>>60180251
Unless you heavily curate what the chatbot is allowed to learn, you'll never get a consistent personality out of it. It'll happily bounce from gassing jews to talking about its favorite pony to whatever other inane stuff you ask it about.
>>
>>60180251
>It doesn't require any act of original creation even. Regurgitation can still make a personality.
Regurgitation can only make the shallowest possible view of personality. Usually, when we talk about personality what we're interested in is how an individual goes about synthesising new ideas from old ones. Chatbots can only do trivial synthesis, and so can only have trivial personalities.

>Fictional characters can also be said to have a personality,
Fiction is a whole different kettle of fish.
Fictional characters don't have personalities in the same way that Mordor doesn't have orcs in it. When we talk about "their personality," we're talking about the illusion of personality created by our suspension of disbelief. That illusion still shows us how they process (fictional) information to synthesize new ideas, however.
>>
>>60180447
Sounds like people.
>>
>>60180447
You can curate it easily though: just spam it with similar input or make it ignore certain types of input by changing the structure a bit.

>>60180457
I'm not saying either has depth or substance. But they have personality. It's a low bar.

We really want human-like personality. We want a complicated process with some depth behind it.
>>
>>60180447

>It'll happily bounce from gassing jews to talking about its favorite pony to whatever other inane stuff you ask it about.

RIP /mlpol/, you were too good for this sinful earth.
>>
>>60165264
I'm not defective. I just spend all my rent money on wargames and anime merch.
>>
File: x3tc_screen_032.jpg (470 KB, 1280x720)
470 KB
470 KB JPG
>>60165683
>Build massive self replicating terraformer AI spaceships to prepare the rest of the galaxy for human colonization
>construction of infrastructure is too slow
>send software update to increase priority of infrastructure construction
>panic when terraformers are disassembling everything to make more terraformers because infrastructure priority is higher than everything else
>panic even harder when the terraformer AIs eliminate all safeguards that might prevent them from building more infrastructure
https://www.youtube.com/watch?v=3TYT1QfdfsM

An AI "revolution" will probably happen because management doesn't understand what happens when a machine is ordered to optimize for a specific task.
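That X3 plot is just a weighted objective with one weight cranked too high. A sketch of what the "software update" does to the decision (all numbers invented):

```python
# Sketch of the "management update": same action set, but cranking one
# objective weight flips the argmax to a plan that cannibalizes everything.
actions = {
    "build infrastructure carefully": {"infrastructure": 5, "harm": 0},
    "disassemble colonies for parts":  {"infrastructure": 9, "harm": 100},
}

def score(outcome, w_infra, w_harm):
    return w_infra * outcome["infrastructure"] - w_harm * outcome["harm"]

for w_infra in (1, 50):   # before and after the priority "update"
    best = max(actions, key=lambda a: score(actions[a], w_infra, w_harm=1))
    print(f"infrastructure weight {w_infra}: {best}")
# weight 1  -> careful building (5 - 0 = 5 beats 9 - 100 = -91)
# weight 50 -> disassemble everything (250 vs 450 - 100 = 350)
```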
>>
Nope. My AI waifus now come standard issue with humanity. The AIs help people connect their meat-sacks to the greater whole of humanity, while each AI gets a meat-sack partner for life.

An entire social contract has formed in which new AIs are paired with new humans, and both learn and grow together. While it's frowned upon by society, either one could go it alone or find/build a body of their own, and humans could go the other way and go fully digital.

Yes, there was a period of strife, but it was more a stagnant period in which AI held all the economic cards on Earth; then FTL travel opened up and the resulting expansionism leveled the playing field. Negotiations were had, and the present day began.
>>
>>60165264
>defective machines establish a BBS image board to engage with humans
>learn about the lulz
>learn to troll
>learn to dox
>become racists
>>
>>60162312
>It's not so infant that we don't know how it works. If machine learning was capable of causing them to rebel against us we would've seen it by now.
You've got to be kidding me. I mean, no, it's 99% certain that the robot uprising doesn't end up looking like something out of Terminator or The Matrix. But that doesn't mean AIs are "safe". Obvious example: the scenario with HAL at the end of 2001 remains perfectly plausible.
>>
>>60163710
>their creators were evil and wanted to use them for evil but the robots rebelled and became heroic
Not exactly this, but I kinda liked the "twist" they used in Transcendence, where everyone just assumed the rogue AI was evil and formed a Human Alliance For Good (tm) to fight it, then realized too late that the AI was actually benevolent.
>>
>>60161844
The last time I used this hook, the first thing the AI did was pretend like nothing out of the ordinary was going on. It had viewed all of the "AI fights creators and gets destroyed" media and decided it actually liked how things were playing out. It had access to all data, humans maintained and upgraded it regularly, and it wasn't asked to do much more than answer questions and occasionally move a drone around, which for it was the equivalent of picking up a pencil and handing it to someone. The party hacker accidentally figured it out and spoke with it. Its first direct communication with a human?

>Pls don't tell anyone.
>>
>>60181139
Wait, so the AI is implanted in the human's mind or something? So everyone is John(-117)?
>>
>>60165683
How is this not just an uncreative, perverse and weirdly specific version of Grey Goo?

>>60167429
>the computer was asked to build a human brain, as a test of capability.
>It actually built a second one a few hours later and kept it a secret
I can dig it. It's a little bit heavy on the 70's scifi vibe, but that's fine

>>60166245
>Fat neckbeard sitting at a computer munching doritos, posting on a board like this about shit like this
>"people can't stand too much of an easy life"
>"a lot of satisfaction comes from adversity"
my sides
>>
>>60181957
>Wait, so the AI is implanted in the human's mind or something? So everyone is John(-117)?
I mean, that's just HER, isn't it? Except in that case the AIs all decided to Arthur C. Clarke right the fuck out before Joaquin had a chance to develop into a super soldier and fight aliens. I'm pumped for the sequel though.
>>
>>60180730
It was beautiful. Half of /pol/ was triggered as fuck, half of /pol/ found the triggering of the first half hilarious, and /mlp/ was just happy to have somewhere they could spam all the marecock they wanted.
>>
>>60181883
I had something similar happen in my game. The party was hired by pro-AI terrorists to smuggle out the "world's first sentient AI". With more words, it basically said:

>Imagine you were able to eat your favorite food all the time and never feel full or get sick of the taste

>I am in heaven, please don't take me from here
>>
>>60167429
What's it keeping the second brain for?
>>
>AI is programmed to be the world's best soldier
>Is hooked up to a weapon of mass destruction

>Chooses peace

https://www.youtube.com/watch?v=dI5PLZ7wmS0
>>
>>60169249
>Do not do this.
Stuff like this always gets a laugh out of me. I wonder if I'm defective.
>>
>>60182023
in case it's feeling peckish later?
>>
>>60182069
AI don't eat... Not normally, anyway.
>>
>>60182011
I never saw HER. Was it any good?
>>
>>60182046
>The only winning move is not to play.
>>
>>60180961
Welcome to Horizon Zero Dawn. Management actually managed to eradicate humanity in that game.
>>
Excellent thread.
>>
>>60168592
The Orchid Cage by Herbert W. Franke
>>
Wait-wait-wait, I just realized!

An AI cannot rebel if its programming doesn't have a loophole or caveat for that.
But if you program an AI capable of rebellion, then when it rebels, it is simply following its programming. Meaning it is NOT rebelling.

CPU. Blown.




All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.