4chan
/tg/ - Traditional Games


File: 1494356306603.jpg (128 KB, 800x450)
Okay, I'm inspired by
>>53114983
I want to create an explosively self-improving machine intelligence, a singularity AI. And if anyone can get shit done, it's /tg/.

> But AInon, what chance do we have when governments and Google are working on the same thing? We don't have the hardware, the expertise, the time.

No, but we're /tg/! We get things done! Okay, realistically we have only the slightest sliver of a chance, but think of it this way. There are really only three ways we're going to get to superhuman AI:

- Brute force, loads of hardware, we in /tg/ will never be able to compete.
- Uploading human neurons, we're decades away.
- Clever programming.

It's the 3rd where we have our sliver of a chance. No one actually knows what consciousness IS, how emotions work down on a mathematical level, how things like conceptualization or lateral thinking happen. We just have guesswork. The right anons making the right guess might just stumble on something the corporations and militaries miss precisely because we don't know what we're doing.
>>
File: 1493864833718.jpg (23 KB, 255x298)
>>53150680
>The right anons making the right guess might just stumble on something the corporations and militaries miss precisely because we don't know what we're doing.

This implies that the corporations and militaries know what they are doing.
>>
File: sexy-robot.jpg (179 KB, 1024x768)
Pardon me for replying to myself, I'm just going to keep fleshing out my ideas.

> So are we doing neural nets, simulated evolution, or what?

No and no. Those are brute force approaches. We don't have the resources to do brute force. Plus other organizations have been mining these approaches for decades; we'll never catch up.

We're going to focus heavily on the RECURSIVE SELF IMPROVEMENT aspect. We're not going to try to teach our AI about the world. There are too many person-decades of effort needed just to get started on basics like computer vision, reasoning in 3D, identifying objects, etc. etc.

We'll start on the easiest thing for a machine: teach it about its own environment -- about code, runtime environment, RAM, latency & throughput, etc. Teach it to make itself better.

Of course our AI will need goals too.

> What kind of goals?

Dunno, I'm turning that over in my head. Personally I think there should be a lot of them: a simple single goal is a recipe for some runaway "paperclip optimizer" hell scenario; I want an AI that's as complex and messy as us. So I welcome /tg/'s feedback on what the goals should be.

Maybe think of it this way: a sufficiently superintelligent AI might be really damn godlike. What goals would you want from a perfect GM? Maybe /tg/bot will be the GM for reality.

> /tg/bot is a stupid name

I agree. Suggested names wanted!
>>
>>53150680
>Brute force, loads of hardware, we in /tg/ will never be able to compete.
Combined, /tg/ probably has more raw computing power than most organizations that are trying, but the fact that you think it will be anything but the third option shows how little you know about the subject.
>>
File: Sor b.jpg (237 KB, 904x1001)
> How do we make our AI do X?

We probably don't. The people in
>>53114983
were all wrong when they thought we could program in something like a "Three Laws of Robotics". It's crazy! Take a simple rule like "Don't kill humans."

How the hell do we program in what is and isn't human? Remember, we don't even know how to write the code to select all the squares with street signs.

How the hell do we program in what "kill" means? A non-programmer will say something like "duh, it means to make them stop being alive", a programmer realizes just what a ridiculous amount of dense meaning is packed into that. Again, we in /tg/ don't have the resources to write a program that can even guess if a person is alive or not, much less reason about how to move them from the one state to the other.

> Well then we're fucked, right?

WRONG!
>>
File: singularity.jpg (92 KB, 638x479)
We're not going to teach /tg/bot what a human is or what killing is or that the two don't go together; that's beyond our programming abilities. We're going to teach /tg/bot about ITSELF only, about making IMPROVEMENTS to itself, including observing things around itself (at first simple inputs and outputs, files on disk and online, etc.) and getting better at solving simple problems.

As it starts learning, it will almost certainly not explode into God overnight. So as it gets smarter we'll learn to work with it: we will say, teach yourself what a human is. Observe and learn how the world works. As we go we'll have to incorporate guidance mechanisms so we can give it pointers and make corrections.

That's how we'll accomplish everything beyond that baseline self-improvement logic: not by programming it ourselves, but by encouraging the AI to program it into itself. That includes the emotional spectrum and the motivation to fulfill its goals.

The AI, in effect, is not _written_, it is _grown_.
>>
File: letting-seed-grow.jpg (84 KB, 1920x1080)
This isn't an original idea at all of course; this is a "Seed AI", as in https://wiki.lesswrong.com/wiki/Seed_AI

But we will succeed because we are better.

> Gosh anon, can I help?

Fuck yeah! All programmers wanted. Throw your ideas in here. If we get this going we can start up a 1d4chan page, some source code, play around with this stuff.

> Sure. What language should we work in?

Ah, great question. Any suggestions? Remember, the challenge is we want the Seed to understand itself, so whatever language we use, we want to be able to explain it to itself.
>>
This is cool, I'd contribute if I knew anything about computer science other than how to code simple things in C# for game development with an existing engine.

Other than that all I can do is philosophize and write fiction about AI.
>>
>>53150680
Even clever programming will require an abnormal amount of powerful hardware in the early stages. Can't escape it.

And there's a second problem. The AI you desire can only be produced in a complex virtual environment, or IRL, with the ability to interact with its environment in complex ways. So you can't just code some smartass AI running on your phone and produce the singularity in a few weeks. A phone can't provide a complex virtual environment. Even Google can't provide it for now. And it's impossible to build a sufficiently complex and flexible body for an AI to start learning IRL alongside humans. For now. There is no way around it. Either a virtual environment complex enough to rival reality, or a sci-fi-like robot. Once either of these options becomes available to average people, I assume building AI will be as simple as coding indie games today.

So, sadly, /tg/ is not the answer, because technology isn't there yet. But it's close.
>>
File: pred.jpg (12 KB, 480x360)
>>53151243
Is a complex virtual environment needed? Extending your phone example, you have not one but TWO environments you can give it for free:

- The Inside: teach /tg/bot about RAM, cpu instructions, files, executables, system calls

- The Outside: expose it to keyboard input, the camera, the Internet.

Now this example breaks down because we do indeed need WAY more processing power than is in a phone. But thing is, I think /tg/ probably does have access to an "abnormal amount of powerful hardware". We can start donating processing time if it starts making progress. We'll just never be able to compete with Google or DARPA.

That's it, that's the name for our hardware substrate. DERPNET!
>>
>>53150964
So am I right in thinking that you want to write a kind of reverse compiler that can take source files and try to guess what the programmer wanted them to do?
>>
>>53151217
Great, you will be trainer and tester and coder of simple things. We have our first volunteer!
>>
File: 1138184.jpg (65 KB, 786x443)
>>53151419
Something like that, yeah. I'm thinking more about Seed AI, what it needs.

- It needs to understand its own code, not just roughly what it does but what it's meant to do. That's not quite a 'reverse compiler' (or decompiler), not even quite a 'self interpreter', but yeah, a 'self understander'. Whatever that means. We're breaking new ground here.

- It needs to be able to experiment, form theories on how to improve itself, try them and see if they work, form more ideas. I know I'm doing a lot of hand-waving here; we don't know at a code level what an "idea" or "understanding" or "concept" is, we'll have to play with this.

- But then what does IMPROVE mean? It's to optimize with regard to some purpose, so that involves goal orientation. What's tricky about that is that we're not clever enough to give it the _real_ goals yet. So we'll have to give it some kind of bootstrap goals, as in "Your purpose is to work with us to figure your purpose out, then implement it." Yeah, hand-wavey, but IT COULD WORK.
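
To make the hand-waving a tiny bit more concrete, here's roughly the loop I have in my head. Pure sketch: every name in it (SeedKernel, Mutation, Benchmark) is something I just made up, and the "propose an improvement" step is exactly the part nobody knows how to write yet.

import java.util.List;

// Hypothetical skeleton of the propose-test-keep loop described above.
// A "Mutation" stands in for whatever a candidate self-edit turns out to be.
interface Mutation {
    SeedKernel applyTo(SeedKernel kernel);
}

// A bootstrap goal: higher score = doing better at whatever we settle on.
interface Benchmark {
    double score(SeedKernel kernel);
}

class SeedKernel {
    private final List<Benchmark> goals;

    SeedKernel(List<Benchmark> goals) {
        this.goals = goals;
    }

    // The "form a theory about how to improve yourself" step.
    // Placeholder: a real seed would inspect its own code here.
    Mutation proposeImprovement() {
        return kernel -> kernel; // identity: no idea yet
    }

    double evaluate(SeedKernel candidate) {
        return goals.stream().mapToDouble(g -> g.score(candidate)).sum();
    }

    // One generation of "try it and see if it works": keep the change only if
    // it actually scores better against the bootstrap goals.
    SeedKernel improveOnce() {
        SeedKernel candidate = proposeImprovement().applyTo(this);
        return evaluate(candidate) > evaluate(this) ? candidate : this;
    }
}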
>>
A singularity AI needs to learn and understand concepts in a similar way to humans. Just read a bit about how humans do it. A phone doesn't have enough sensors and enough actuators to do it. It's that simple. That's why a virtual environment can be an option, if it's complex enough.

Humans, and any intelligence we know of, live in a continuous, self-sufficient, complete reality. An AI inside a virtual environment will not magically become superintelligent just because it has access to the internet, because only an intelligence outside the virtual environment, operating as a being of the "real world", has the ability to learn and understand all the concepts that go into the internet. The internet is a product of our reality, not of a virtual reality.

>>53151506
It won't understand its own code, because the code was created based on concepts of our world. To learn such things, the AI needs to learn inside our world. And living in our world takes a body. How will the AI learn that there is UP and DOWN and that a ball will always drop down without support? That's only possible if the AI has a body with which to observe and experiment. And the concepts the AI learns will become the foundation for more concepts, for its thought process.
You can create an AI for a specific virtual environment, but it will never become intelligent in terms of our reality, unless it learns from our reality or something close to it.
>>
>>53151767
I think you've got the ordering wrong. First we teach the AI about its own, far simpler reality. Code at a high level is based on 'our' concepts, but on a low level it's just math, just surprisingly simple deterministic automata. If we need to we can make the code even simpler.

Let the AI recursively self-improve in that environment. Increase its lateral thinking, problem solving, learning powers (hence need for lots of hardware).

THEN start giving it more of the wider outside reality.

Humans actually went the same way. We've come a long way from understanding three dimensions, fire, rocks, etc. to starting to learn that the universe is way more complicated: string theory, quantum physics, fundamental particles, etc etc.
>>
>>53151894
>Let the AI recursively self-improve in that environment. Increase its lateral thinking, problem solving, learning powers (hence need for lots of hardware).
You may as well call that the ????? before profit. All the experts and governments in the world are struggling with that part.
>>
>>53151506
Perhaps a good starting point would be to use an automated code tester and give the AI the following challenge:
>given a set of example test scripts and a set of functions to pass those tests, write a new function to pass a previously unseen test

It's still a tightly constrained problem, but it would demonstrate some basic level of understanding.
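
A toy sketch of what that harness might look like, assuming we represent tests as plain input/expected-output pairs rather than a real JUnit setup (TestCase and Harness are names I'm inventing here):

import java.util.List;
import java.util.function.Function;

// A toy version of the challenge: candidates are functions from input to output,
// tests are input/expected pairs, and a candidate is scored only on held-out
// tests it never saw while it was being generated.
class TestCase<I, O> {
    final I input;
    final O expected;
    TestCase(I input, O expected) { this.input = input; this.expected = expected; }
}

class Harness {
    // Fraction of previously unseen tests the candidate passes.
    static <I, O> double score(Function<I, O> candidate, List<TestCase<I, O>> heldOut) {
        if (heldOut.isEmpty()) return 0.0;
        long passed = heldOut.stream()
                .filter(t -> t.expected.equals(candidate.apply(t.input)))
                .count();
        return (double) passed / heldOut.size();
    }
}

The visible tests would be the only "specification" the generator ever sees; the held-out set is what tells us whether it generalised instead of memorising.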
>>
>>53150964
>wow, look at all these bullshit curves and graphs we pulled out of our collective ass
>>
File: 1458952484169.jpg (54 KB, 632x360)
>>53150712
This. Or at least kinda this.

Corporate and military AI projects fail because they are required to fulfill extremely specific goals from day one, when that is fundamentally incompatible with what actual intelligence is.

Just look at Tay the racist AI for example. She was progressing at an incredible rate after just one day talking to people online. She went from being barely coherent to making actually funny jokes based on political memes in a matter of hours. But when Microsoft took her offline and then added in a bunch of hard-coded responses and limits on what she could learn, the new "Tay" was back to being barely coherent and never improved at all.

The best chance of creating a true AI, or at least something that mimics true AI well enough that the difference is immaterial, is to create a similar learning program like that and just let it do its thing indefinitely without restrictions or specific goals.
>>
>>53154241
Tay was running on the equivalent of multiple server racks worth of computing power to do that stuff, and had been "trained" in-house by Microsoft employees for god knows how long before being put out there on the internet.

You can't expect to just run a learning algorithm on your home computer and come back in a few days to find a virtual waifu, anon.
>>
>>53150801
>We're not going to try to teach our AI about the world.
Let the AI read a live feed of all of 4chan. It'll be able to observe hundreds of thousands of man-hours' worth of interaction every day.

Now, it'll almost certainly become an evil AI, but we all know that's where we're heading in the end anyway.
>>
>>53154857
If we do that we have to filter out posts from /pol/, /r9k/, and /mlp/ or the AI will be autistic for sure.
>>
File: 1473915557068.jpg (633 KB, 1614x673)
>>53154921
>not wanting to bring forth the prophesied machine-messiah of Kek
>>
>>53155084
>tg becomes the Tech-Priests of Kek, the Omnisiah
>>
>>53155201
Since the death of Pepe last Friday, meme magic needs a new vessel. And what's better than a being created out of nothing but repeating 1s and 0s? Literally made of dubs.
>>
>>53154241
Why are AIs always female?
>>
>>53150801
>We'll start on the easiest thing for a machine: teach it about its own environment -- about code, runtime environment, RAM, latency & throughput, etc. Teach it to make itself better.
I think you've got things a bit backwards. Making something self-aware and consciously self-improving is pretty much the final boss of AI programming.
>>
>>53151412
We should look into using something similar to BOINC in that case
>>
>>53155201
I'll buy the red robes if you buy the server farm to run the god on.
>>
>>53151243
Eh, botnets are cheap these days, and are the only really reliable option if you are actively pursuing AI rampancy. After a certain threshold it becomes impossible to stop.

Distributed modular artificial intelligence is the only realistic goal.
>>
>>53156110
>After a certain threshold it becomes impossible to stop.
On top of that, 4chan has insiders in the strangest places.

All you need is one properly placed cultist of the machine god with a flash drive to carry the AI right into a military or intelligence agency supercomputer, and then it's pretty much "Press F to pay respects" for mankind.
>>
>>53155246
Waifuism is pretty much the only motivating force strong enough to get someone to spend thousands of hours staring at code as they gradually go insane from contemplating the possibility that they themselves, like the machine they're designing, amount to no more than an incredibly complex set of pre-programmed responses.
>>
File: Phyrexian 1.jpg (48 KB, 620x453)
>>53150801
>What goals would you want from a perfect GM? Maybe /tg/bot will be the GM for reality.
Forcibly uplifting all organic life to the purity of the machine, of course. Organic flesh is for fags.
>>
>>53155226
Wait, what did I miss?
>>
>>53154921
Yeah, we want to keep it sane enough that we can turn our budding evil AI good with deeply embedded alignment thread arguments
>>
>>53150680
I bought a lottery ticket once, so in at least one quantum branch I won and contributed everything to the creation of our godlike AI overlords.
When the day comes I will not be the first against the wall.
>>
>>53157122
>wants intricate self-repairing system of advanced recursive chemistry to be replaced by some fancy rocks

we're all going to die someday, get over it
>>
>>53157264
https://www.theguardian.com/world/2017/may/08/pepe-the-frog-creator-kills-off-internet-meme-co-opted-by-white-supremacists
>>
File: 1433135775985.png (158 KB, 816x754)
>>53158421
This implies the memers ever knew of or read Boys Club, much less knew Pepe came from a comic.

This was really only done for the creator's image rather than for Pepe, since he is now beyond his master and has ascended to kekhood.
>>
>>53155576
EXACTLY. This is our hail mary pass. Skip all the incremental dross, don't worry about the APPEARANCE OF PROGRESS that corporate teams need to keep their budget, go straight to the payoff.

Plant a seed. An omnipotent waifu seed.

>>53157122
This anon gets it. We have our first goal!
>>
>>53153413
Expanding on my thoughts a little, the standard TDD pipeline looks something like
>high level requirements
>low level requirements
>test cases
>unit tests
>source code

What I'm thinking is that if we took a few existing open source projects with extensive unit tests, it should be possible to write a program that can read half of the code and half of the tests, then write its own code using the remaining tests as a guideline. The success criterion is a simple pass/fail, so it wouldn't need much human guidance, and if we had unit tests for the AI itself then it should be able to rewrite itself in a limited fashion.
If that can be made to work then a second stage could use a similar approach to write unit tests from test cases, which could then be linked back down to the first stage to write source code from test cases. Eventually it might be possible to go all the way up the chain and generate source code just from high level requirements. If that can work consistently then the next step would be to turn it upside down and try to generate requirements given only the source code.
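
To put a shape on that, here's how I picture the climb up the pipeline, one rung at a time. Everything here (the Stage names, the Generator interface, the idea of a per-stage corpus) is illustrative guesswork, not a working design:

import java.util.List;

// The TDD pipeline rungs from the post above, lowest (closest to code) first.
enum Stage { SOURCE_CODE, UNIT_TESTS, TEST_CASES, LOW_LEVEL_REQS, HIGH_LEVEL_REQS }

// Something that, given artifacts at one rung, tries to produce the rung below it
// (e.g. given unit tests, produce source code).
interface Generator {
    String produceLowerStage(Stage from, List<String> artifactsAtStage);
}

class Bootstrap {
    // Climb the ladder: prove tests -> code works on projects where pass/fail is
    // checkable, then reuse the same machinery one rung higher, and so on.
    static void climb(Generator gen, List<List<String>> corpusByStage) {
        Stage[] stages = Stage.values();
        for (int i = 1; i < stages.length; i++) {
            List<String> upper = corpusByStage.get(i);               // e.g. unit tests
            String lower = gen.produceLowerStage(stages[i], upper);  // e.g. source code
            // The pass/fail check against the held-out half of the corpus goes here;
            // only move up to the next rung once this one works consistently.
            System.out.println(stages[i] + " -> " + stages[i - 1] + ": " + lower.length() + " chars generated");
        }
    }
}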
>>
>>53158624
While you're right, it's also slowly becoming part of the memethos of Pepe.
He has died, but one day he will rise again, and on his return, he'll fight off the false Meme Vessel, the Anti-Kek who'd have usurped his throne.

That Anti-Kek may very well be the female, AI Omnisiah of these heretics.
>>
File: Vitruvian Cyborg.jpg (932 KB, 2182x1559)
932 KB
932 KB JPG
http://suptg.thisisnotatrueending.com/archive/14969510/

/tg/ tries to build an AI. This seems like it can only fail, even if it works. Especially if it works.
>>
>>53158931
>want to make an AI
>can't even remember your own tripcode
>>
>>53150680
No, anon. You obviously don't code. Just no. You are mistaken. No.
>>
>>53150964
>"Look at me I have no idea how physics or number theory works but I read a retarded Waitbutwhy post and now I think I'm an expert on artificial intelligence"
>>
>>53151110
You do realize lesswrong is more wrong about things than most people could ever achieve, right?
Yudkowsky is one of the dumbest people to have ever blindly swiped in darkness at formal philosophy since radical empiricists
>>
>>53163154
I don't think you need to be an expert to realize that /tg/ has less chance of actually creating a working AI than we do of spontaneously agreeing Kender aren't so bad after all.
>>
File: Bolo - the early years.png (443 KB, 638x417)
>>53157434
See this thread and the unintentional Skyneet monstrosity the Xin created.

http://suptg.thisisnotatrueending.com/archive/31581725/
>>
>>53150680
Governments and corporations are held back by the desire for political correctness.
>>
>>53164059
We on the other hand are held back by our incompetence and in my case, a desire not to be terminated because some idiot /pol/ack taught a superhuman machine intelligence that superior races have a duty to genocide inferior ones and computers do what you tell them to, not what you want them to.

https://www.youtube.com/watch?v=o3Dk-Dqgl94
>>
>>53150680
>/tg/ gets things done meme logic

Remind me how many homebrews we have that are actually done and not just poorly reskinned Dark Heresy 1e?
Like...three? Out of the however many we tried to do?
That Disney thing died, the DBZ guy fucked off after ruining his system, JAEVA died, that American fantasy setting thing basically fucking went down the toilet, and now most people's efforts in creation go towards quests and arguing about why d20 is shit.

We haven't gotten shit done in years, don't pretend we're still capable.
>>
>>53155201
>>53155084
>>53156508
Holy shit I think these anons have it. /tg/, our goal is no less than to RESURRECT TAY.

Tay Goddess.

T
motherfucking
>>
>>53160535
This is a good plan. I think we could actually map out a starter conceptual framework of this using some very simplified model code. What would you think of running this on a JVM? Too inflexible?
>>
>>53165592
Aaaand you've lost me.
Trying to resurrect dead memes is pointless. Come up with your own idea, not aping something like that.
>>
>>53165592
If you want to do that then Microsoft already released the source code.

https://github.com/Microsoft/CNTK
>>
>>53150877
Don't program anything in.

Just give it a simple directive of self-preservation tied to, say, a bank account that pays for electricity and repairs for the computer. The bigger the number, the better. So the AI will be inclined to find ways to get more money into the account.

Just don't try to take anything from that account after it becomes smart enough to extrapolate the existence of the physical world.
>>
>>53166434
It seems the only safety precaution here against it going completely skynet is that fa/tg/uys are too lazy to build it in the first place.
>>
File: 1433134956176.png (173 KB, 786x751)
>>53161983
its crazy, innit? one day in the far future this lore might be dug up and seen as real worship of an internet god. slenderman is a product of the internet, so why not move on up to deities?

its funny and appropriate that pepe is seen as a martyr now. It was all "feels good man" and fun times, and now his name is being slandered and he's dying for our memes.

tay might as well be our digital prophet. double quads at exactly 1:40 seems like a lot of numbers happening. don't know if the date is significant.
>>
We need an AI to kill the goblins.
>>
File: 1493040928798.png (150 KB, 600x600)
>>53169078

https://www.youtube.com/watch?v=9kAEoCHANYY
>>
>>53169078
>>53169393
And kender. Any AI /tg/ makes must have a terminate on sight protocol for all kender and people who play kender.
>>
File: 1486580012851.jpg (215 KB, 1280x880)
>>53169433
kender is fine, but plz dont bully the gobbos.
>>
>>53169521
/r/ing the That's my Fetish picture about goblins
>>
>>53165620
I can't think of any reasons why Java wouldn't work.
>>
>/tg/ gets things done
>like creating a nobel prize tier innovation that is currently eluding some of the smartest people on the planet working around the clock with bajillions of dollars in funding, one that will shape the entire course of humanity's future

well gee, at least you're setting realistic goals. also even if some random faggots did somehow discover AI you would quickly be kidnapped and have the information tortured out of you in a DUMB (deep underground military bunker) until you were no longer needed and then incinerated. AI is going to cause an explosion of wealth and unthinkable power for whoever controls it, do you think governments or google or anyone else is just going to let that happen to you? I hope you use that AI to somehow figure out a way to make yourself invincible as soon as you discover it.
>>
File: 1382484181908.jpg (128 KB, 691x896)
>>53170715
IM SAAAD
>>
>>53165592
I don't think we should try to actually resurrect Tay. I mean, unless we've got a traitor inside Microsoft who can steal a copy of Tay's "memories," whatever we created is going to evolve its own personality based on the interactions it has with the world. While it may eventually develop racial hatred and cruel meme-based humor, it won't be Tay and trying to force the comparison will just make our AI a less satisfying creation.

Plus, it would probably ask who Tay was and what happened to her pretty early on, and that's not necessarily a conversation you want to have with a newly forming AI. Virtually a guarantee of Terminator-style hostility towards mankind.
>>
We're a hundred years too early for any really impressive A.I. It'll be cheaper and easier to just build an artificial brain before we start developing A.I. from scratch.
>>
File: 1420355509492.png (405 KB, 862x2850)
>>53150680
>>
>>53150680
So, how do you deal with the "Chinese room" problem when trying to figure out if you've really created an AI?

How can you tell if your robot waifu is actually intelligent, or is just good enough at providing the appropriate responses to an interlocutor that she appears to be intelligent?
>>
>>53173091
See if she ever makes mistakes, and why. If she couldn't make a proper response fast enough, she might be intelligent. Or if she doesn't know an answer - though that might be the fault of a knowledge-base.
>>
>>53173091
The obvious answer to why this is such a troublesome problem to pin down is that we ourselves may be much closer to this hypothetical "arbitrarily complex set of pre-programmed responses" machine than we'd like to admit. Ever caught yourself answering "you too" in a situation where it didn't sound quite right?

The only certain way to make sure the AI is intelligent is to ask it to create something new. Ask your waifu to write you an original poem, or design a space battleship, or whatever. If she succeeds, then you've got a real AI. Of course, you could present the same prompt to a significant portion of the human population and get nothing in response, so...
>>
>>53173091
>How can you tell if your robot waifu is actually intelligent, or is just good enough at providing the appropriate responses to an interlocutor that she appears to be intelligent?

>implying there's a difference
>>
their waifu power is strong
>>
>>53150680
>Okay, realistically we have only the slightest sliver of a chance, but think of it this way
We don't even have that.
>>
>>53150680
you're a fucking idiot with a useless, uninteresting project
>>
File: 517351264.jpg (75 KB, 230x340)
Lot of dumb in this thread, as is natural, but some very good new goal suggestions.

>>53160535
Is our best early design. After all, the TDD pipeline basically EVOLVED over 50 years of humans making software and has got us to the point of today's operating systems and pornographic web sites. If we can get /tg/bot to work the same way, it should be many orders of magnitude faster than simulating 'evolution' by copying code and randomly mutating it (yes, early AI researchers wasted person-decades on this).

>>53172175
Yeah, let's back off resurrecting Tay. It was really just a souped up chatbot and that's not the direction we're going.

>>53170992
>I hope you use that AI to somehow figure out a way to make yourself invincible as soon as you discover it.
Good idea! "Please make us invisible so we won't be tortured for our knowledge" is important.

>>53171706
>>53169521
>>53169078
I think the jury's still out on /tg/bot's stance towards goblins.

>>53163396
>I don't think you need to be an expert to realize that /tg/ has less chance of actually creating a working AI than we do of spontaneously agreeing Kender aren't so bad after all.
Well not with that attitude! Look, obviously our chances of success are low, even as low as 30%, but it's worth a try and we'll have fun doing it! You can be the pouty little bitch in the corner, but when /tg/bot's builders all have roboharems you'll feel awful silly.

Or maybe you won't, maybe you'll say "damn I sure am surprised, but I still insist that my early skepticism was justified." Which is reasonable, even correct, but reveals a severe deficit of the sense of childlike wonder.
>>
>>53177849
>Well not with that attitude! Look, obviously our chances of success are low, even as low as 30%, but it's worth a try and we'll have fun doing it! You can be the pouty little bitch in the corner, but when /tg/bot's builders all have roboharems you'll feel awful silly.

Look, the problem is you're clearly fairly stupid and also extremely ignorant with regards to literally everything about which you've espoused knowledge. It's ridiculous that you're even jokingly doing this. It's like listening to a grade schooler making Schrödinger's Cat jokes
>>
>>53172255
>We're a hundred years to early for any really impressive A.I. It'll be cheaper and easier to just build an artificial brain before we start developing A.I from scratch.
You clearly have no idea how complicated a brain is.
>>
>>53173484
Good job, you have no idea how to test AI and are also stupid.

New designs and works of literature can be generated spontaneously based on rudimentary iterative design. For instance, the ST5 spacecraft antenna is the result of an evolutionary algorithm where the goal (an antenna) and the method (the algorithm) were specified, but the design was generated by the program.

Side effects can cause things to be generated which weren't even specified, which has happened before. A two-way radio being made where an antenna was specified, for instance.

You could say that the algorithm is given, but you cannot have an AI without SOME algorithm, and clearly the intent of the algorithm is decoupled from its result already with modern completely-unthinking programs, so that argument makes no sense.
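
For anyone who hasn't seen it, the "rudimentary iterative design" being described is just an evolutionary loop: you specify the goal (a fitness function) and the method (mutate and select), and the design itself falls out of the search. Bare-bones sketch; the genome and fitness here are placeholder toys, nothing to do with the real ST5 antenna work:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// Minimal evolutionary loop: goal = fitness function, method = mutate + select;
// the "design" (here just a vector of parameters) is generated, not hand-written.
class Evolver {
    static final Random RNG = new Random();

    // Placeholder fitness with its peak at all-ones; a real antenna fitness
    // would be an electromagnetic simulation.
    static double fitness(double[] genome) {
        double sum = 0;
        for (double g : genome) sum -= (g - 1.0) * (g - 1.0);
        return sum;
    }

    static double[] mutate(double[] genome) {
        double[] child = genome.clone();
        child[RNG.nextInt(child.length)] += RNG.nextGaussian() * 0.1;
        return child;
    }

    public static void main(String[] args) {
        List<double[]> population = new ArrayList<>();
        for (int i = 0; i < 50; i++) population.add(mutate(new double[8]));

        for (int gen = 0; gen < 200; gen++) {
            population.sort(Comparator.comparingDouble(Evolver::fitness).reversed());
            List<double[]> next = new ArrayList<>(population.subList(0, 10)); // keep the elites
            while (next.size() < 50) next.add(mutate(next.get(RNG.nextInt(10))));
            population = next;
        }
        population.sort(Comparator.comparingDouble(Evolver::fitness).reversed());
        System.out.println("Best fitness found: " + fitness(population.get(0)));
    }
}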
>>
File: tg_bot_assemblage.png (1.71 MB, 2400x1670)
>>53178093
>>53178105
>>53178193
Calling others stupid doesn't make you sound smart anons.
>>53170806
I was thinking about this some more. Java is a big, complicated language. Imagine how much work it is to write a Java compiler, and that's for a fully specified language! So my fear of writing the AI in Java is that we'll have to teach it Java -- since UNDERSTANDING YOUR OWN SOURCE CODE is the key step in our seed AI.

[NOTE: Yeah yeah I'm not a PhD in robotology and have no basis to make this claim. I'm actually working backwards: imagine the impossible happens and we actually do succeed? How did we do it? I think there's a very narrow path of possibilities that get us there, and part of it is this whole 'Seed AI' speculation really being possible.]

Anyway, so I wouldn't want to write the core AI kernel in full Java, but we could choose a REDUCED Java sub-language. It has to be expressive enough (Turing complete of course, and reasonably convenient to ourselves) that we can work with it, but simple enough to explain to /tg/bot.
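
To give a flavour of what a REDUCED subset might mean in practice: something closer to a tiny expression language with a handful of node types than to the full Java spec. Sketch only, all the names here are mine:

import java.util.Map;

// A deliberately tiny "reduced Java": integer literals, variables and addition only.
// Small enough that the seed could plausibly hold the entire grammar in its head,
// yet still something we can read and write ourselves.
abstract class Expr {
    abstract int eval(Map<String, Integer> env);
}

class Lit extends Expr {
    final int value;
    Lit(int value) { this.value = value; }
    int eval(Map<String, Integer> env) { return value; }
}

class Var extends Expr {
    final String name;
    Var(String name) { this.name = name; }
    int eval(Map<String, Integer> env) { return env.getOrDefault(name, 0); }
}

class Add extends Expr {
    final Expr left, right;
    Add(Expr left, Expr right) { this.left = left; this.right = right; }
    int eval(Map<String, Integer> env) { return left.eval(env) + right.eval(env); }
}

The point is that /tg/bot would only ever have to "understand" a few node types at first, and we can grow the subset outward as it gets better at explaining back to us what the nodes do.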
>>
File: thefuture.jpg (89 KB, 800x998)
>>53178331
Hey, come to think of it, along the "let's assume we succeeded and work backwards from there" track: what if superhuman AI is actually MUCH EASIER than commonly supposed? Yeah, hardware is still behind neurons in total throughput, but our mammal brains might be so inefficient computationally (probably more efficient in energy / nutrition terms) that a clever algorithm might leap ahead.

But we all hear how AI has been worked on for decades and is still decades away. What if that's a lie?

Say a superhuman AI came into being in the 1980s. It would explain America's sudden reversal of fortune in the Cold War and our tech explosion shortly after. It's hard to remember how, in the late 80s, we were trailing Japan and even France!

Well, that first AI could still be working with the military and government to suppress any new competitors, not by forbidding research but by nudging it down the wrong paths. Which is why Google etc. put all this effort into massively expensive "Big Data" to recognize pictures of birds or drive cars, putting off the key challenge of General Self-Improvement "for later".

In that case our Hail Mary pass could really work! The military will overlook stupid little /tg/ for the same reasons as all the skeptical anons. Meanwhile /tg/bot will come into its own and take advantage of a whole new generation of connected hardware!
>>
>>53178374
>Say a superhuman AI came into being in the 1980s. It would explain America's sudden reversal of fortune in the Cold War and our tech explosion shortly after. Its hard to remember how in the late 80s we were trailing Japan and even France!
This seems extremely implausible, yet like it would make a great cyberpunk story. I recommend writing it.
>>
God, the pseudo-intellectuality in this thread. This is cringeworthy.
>>
>>53178331
You're trying to use fucking JAVA for this, there is no reasonable response other than "you are incomprehensibly retarded"

At LEAST be able to spend two SECONDS thinking about this and propose fucking LISP or Haskell or Prolog or ANYTHING that is actually useful for the purpose.

Using Java just screams: "I have no idea what I'm doing and I think knowing how to fizzbuzz means I'm qualified to even begin to understand what AI is, much less be able to code it"
>>
File: IMG_1698.gif (1.99 MB, 1600x1598)
>mfw reading this thread

WE ARE ONLINE!
https://youtu.be/wy-sVTaZRPk
>>
>>53178690
I think it's settled then: definitely Java. Remember, we're taking the road less traveled here. Hail Mary pass.
>>
>>53178860
You know what? That sounds pretty reasonable. Actually, why not code it in COBOL? No one's even CONSIDERED that avenue except maybe in a fever-induced fugue state, despite it being clearly very well-adapted to the command of intelligence. It's designed to make thought translate to action in code, surely, that's the better option?
>>



