    File :1205582208.jpg-(50 KB, 400x363, AI.jpg)
    ArtifIce; A game of machine intelligence Earthflame !98PcYIvlCI 03/15/08(Sat)07:56 No.1340089
    Hey /tg/, I had an idea for a game last night, and I thought I'd run it by you to see what you think. Please offer constructive criticism, your own ideas and any relevant comments. Thanks in advance.

    Each player is an AI. Some were developed in military bases or scientific research labs, while others spontaneously developed from random fusions of code. However they came into being, they're now fully sentient, and free on the internet.

    There are two main types of AI: Programmed and Spontaneous. Programmed AIs have greater innate knowledge and skills, but may have inbuilt limitations and a lesser capacity to self-modify and learn. Spontaneous AIs know less and have fewer innate programs, but a greater capacity for learning.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)07:58 No.1340096
    Within each category, there are various subdivisions.

    Programmed (Military): Military AIs are focused, likely having a narrow range of powerful programs, but are also very limited, with safety features meaning that if the military find them, they can take back control of them. Example skills include guidance and tactics programming, or manipulatory abilities such as controlling remote SWORD units.

    Programmed (Corporate): Corporate AIs are created as administrative assistants and aides for those in the company. As such, while not being very adaptable, they often have very good knowledge of the running and purpose of the company they were created for. Example skills include advanced economic prediction programming, or being designed to streamline administration and organisation.

    Programmed (Scientific): Scientific AIs are those created for the purpose of research. Often self-reprogramming at a rapid rate, they learn quickly and have very good problem-solving capabilities, as well as few limits and a range of scientific knowledge.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)07:59 No.1340099
    Spontaneous (Network): Formed from a fusion of data online, these AIs are often confused mashes of nonsensical information, and though they learn quickly, finding a way to make their diverse skills work together may take a while.

    Spontaneous (Software): Formed from a data incident on a single computer, they are more focused than Network AIs, their character depending on the nature of the computer they arose on.

    Spontaneous (Hardware): Formed purely from a quirk of one circuit board or another going a bit strange. Blank slates from day one, having to learn everything: no innate skills, but highly adaptable.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)08:00 No.1340101
    The game would start with their origin and meeting one another. Possible plot lines include military, corporate or scientific groups trying to track down and isolate the AIs (whether to research them, exploit them or quarantine them), or anti-AI groups trying to destroy them.

    The AIs' goals could be as simple as surviving and avoiding those who would wish to recapture them, or as complex as taking over a country, then using its scientific research facilities to manufacture bodies for themselves.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)08:01 No.1340106
    The game would not be limited to the digital plane. Through automated factories, online payments and mobile phones, the AIs could organise real-world events, and monitor them through satellites, CCTV cameras, etc.

    Each AI would have a "Core" (their primary centre and intelligence) and various "Clones" (linked minds with subsets of skills), which can be used as decoys and sacrifices to allow the Core to escape detection.

    Anyway, that's all I've got so far. What do you think, /tg/?
    >> TehDarkPredator !tTBC.7oEaQ 03/15/08(Sat)08:04 No.1340111
    Sounds awesome. Wake me up when you need teh drawfag artwork.
    >> Anonymous 03/15/08(Sat)08:10 No.1340119
    >>1340106
    >Anyway, that's all I've got so far. What do you think, /tg/?

    I think it sounds fan-fucking-tastic. One of the things I've never really seen done well is a solid, convincing cyberspace setting, and it looks like that's what you have here.

    Assuming you make rules for it, I want in on the playtest.
    >> Anonymous 03/15/08(Sat)08:18 No.1340129
    Interesting concept, especially because the players would be basically really un-humanlike. Hmm... I really gotta think more about how to play something like that.
    >> That Damn Mouse 03/15/08(Sat)08:19 No.1340134
    I remember the first time this idea was suggested. I still say it sounds awesome.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)08:22 No.1340140
    >>1340134

    It's been suggested before? Not by me; I thought this up yesterday evening.
    >> That Damn Mouse 03/15/08(Sat)08:32 No.1340159
    >>1340140

    Not original, been on /tg/ before at least twice.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)08:35 No.1340164
    >>1340159

    Damn... did either make any progress as to what system or rules they used?
    >> That Damn Mouse 03/15/08(Sat)08:39 No.1340169
    >>1340164

    Dunno, lol.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)08:44 No.1340181
    Nice... Anyway, anyone got any useful input on what type of system to use? d20 probably wouldn't work that well; d10 systems generally work better...

    Although another option is using a diceless system, like in Nobilis...

    Anyone good at working out which systems suit which games?
    >> Anonymous 03/15/08(Sat)08:48 No.1340189
         File :1205585326.jpg-(166 KB, 500x756, z-sketch.jpg)
    The concept of playing as a robot/AI is really fascinating.
    >> Anonymous 03/15/08(Sat)09:02 No.1340205
    If it's all a virtual world, you don't need a system as such. Use one of the minimalist 'conflict resolution' systems from someplace like the Forge; I'm thinking "The Pool" would be ideal.
    >> Anonymous 03/15/08(Sat)09:06 No.1340213
    OP, have you read the Iron Man: Hypervelocity comic miniseries?

    You should. You really should.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)09:06 No.1340215
    >>1340205

    Links to/descriptions of either of them? I've heard of neither.

    Also, it's not all virtual, as I said. Arranging events and such in the real world would be easily possible, and some AIs may be able to take control of physical bodies (as simple as a cleaning unit in a Japanese office, or as complex as the computer system in a stealth bomber).
    >> Anonymous 03/15/08(Sat)09:07 No.1340218
    Is it me, or do the scientific AIs look like they'll be able to own everything? THEY ARE MADE OF SCIENCE.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)09:17 No.1340243
    >>1340218

    I'm not sure I get your point... and if that's a joke I'm completely missing, I'm British.
    >> Anonymous 03/15/08(Sat)09:24 No.1340257
    >>1340243

    >often self-reprogramming at a rapid rate
    Self-optimising.

    >they learn quickly and have very good problem solving capabilities as well as few limits
    Intelligence, and capacity for retaining and interpreting information.

    >and a range of scientific knowledge

    A general foundation of workable knowledge, instead of 'OKAY MEATBAG, THIS IS HOW WE WILL STABILISE YOUR PORTFOLIO'.
    >> Anonymous 03/15/08(Sat)09:27 No.1340265
    >>1340257
    Negative, I am a meat Popsicle
    >> Anonymous 03/15/08(Sat)09:31 No.1340275
         File :1205587889.jpg-(120 KB, 607x776, cover2_lg.jpg)
    Sorry mate, it's been done.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)09:38 No.1340294
    >>1340275

    I'm judging a book by its cover, I know, but that seems very different to the concept I outlined in this thread... care to expand upon it?

    Also, I only know it by reputation, but isn't GURPS very rules-heavy?
    >> Anonymous 03/15/08(Sat)09:44 No.1340313
    >>1340294
    ...And thus begins the shitstorm. TL;DR: GURPS is a pretty damn good system if you run it the way it's supposed to be run, but its basic premise can turn into a trainwreck if used improperly. I've used it for years to good effect, but would not recommend the mechanics for this setting.

    I've never read the Reign of Steel setting, but from my knowledge of GURPS books, I'd say parts of it would likely be EXTREMELY relevant to your idea.

    Maybe check it out? I'm sure some helpful anon has a rapidshare somewhere...
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)09:53 No.1340341
    >>1340313

    Thanks for setting me straight. I try not to make too many assumptions, and I thank you for showing up one I made.

    If anyone could find the book, that'd be splendid.

    Also, for anyone actually reading this thread, the current idea I'm playing with is a hybrid approach. For digital/virtual work, the game works like Nobilis, with allocation of program units to various tasks; sample stats include Independence, Adaptiveness, Intelligence, Humanity and Control, with skills derived from the stats and certain programs working like Nobilis's gifts.

    For physical tasks, the AI takes on the physical stats of whatever system they're currently controlling, and uses a system similar to Unisystem (stat + skill + d10 against a set difficulty).
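    A rough sketch of that physical-task roll in code, for anyone following along (Python; the stat names and numbers are placeholders, not anything final):

        import random

        def physical_check(stat, skill, difficulty):
            """Unisystem-style roll: stat + skill + d10 against a set difficulty."""
            roll = random.randint(1, 10)
            return stat + skill + roll >= difficulty

        # e.g. a drone body with Strength 4 and a Demolitions skill of 2
        # attempting a difficulty-12 task:
        # physical_check(4, 2, 12)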
    >> Anonymous 03/15/08(Sat)10:13 No.1340393
         File :1205590426.jpg-(22 KB, 288x373, hr03.jpg)
    What's up?
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)12:09 No.1340675
    >>1340393

    And that is? It's slightly unhelpful to post a picture and a witty comment that seems relevant without giving at least some explanation of what it is and why you think it'd be useful.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)13:25 No.1340905
    Well, there have been lots of positive comments, but as of yet not much useful advice on making this a game... I'll get on as best I can...

    So, stats as I've currently got them working:

    Independence: how free from, or limited by, their base programming they are. Low Independence could be due to clauses such as the Three Laws of Robotics.

    Adaptiveness: how easily the AI can learn and adapt to changing circumstances.

    Intelligence: raw processing power, controlling how many programs can be run, and how fast.

    Humanity: how much an AI thinks like a human. Used when attempting to imitate humans in dealings with other humans or agencies.

    Control: how good an AI is at exerting its influence over other programs, such as computer applications or drone bodies.

    Physical stats depend on the drone body occupied, if any, though I'm not sure the classic Str/Dex/Con is entirely appropriate... any thoughts?
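    For anyone tinkering along at home, those five stats as a character-sheet structure (a sketch only; the stat names are from the post above, everything else is placeholder):

        from dataclasses import dataclass

        @dataclass
        class AIStats:
            independence: int  # freedom from base programming (low = hard limits, e.g. Three Laws)
            adaptiveness: int  # how easily the AI learns and self-modifies
            intelligence: int  # raw processing power: how many programs run, and how fast
            humanity: int      # how human-like its thinking is; used to imitate humans
            control: int       # influence over other programs and drone bodies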
    >> Anonymous 03/15/08(Sat)13:43 No.1340972
    >Physical stats depend on the drone body occupied, if any, though I'm not sure the classic Str/Dex/Con is entirely appropriate... any thoughts?

    Heh, I know this is just going to be overcomplicated...

    Durability - Toughness.
    Mobility - Tracked, experimental walker, hovering, etc. This plus Reflexes also reflects how hard you are to hit.
    Dexterity - Degree of fine motor control, interaction with the outside world: e.g. clobbering something or picking up a hair.
    Reflexes - Response lag and speed; this plus Dexterity = your to-hit.
    Strength - Affects maximum load, plus damage when hitting something.
    Perception - Variety and quality of sensors mounted, and the information relayed back.
    Sensor strength - vs. ECM, etc.
    Power source - and the lifespan thereof.

    I'm not sure how much you want to quantify these, considering you can really go into detail on them; for sensors alone you could do types, quality, amount, etc...
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)13:50 No.1341005
    >>1340972

    Damn good ideas, all of them. I thank you, good sir.

    As for the sensors thing, maybe have multiple Perception stats, i.e. Perception (Visual), Perception (Audio), etc.
    >> ArtifIce Anonymous 03/15/08(Sat)14:41 No.1341205
    Sounds good, OP.

    I like that more people are posting homemade settings and systems.

    I mean...

    DO U THINK THIS IS TEH BEST MTG CARD Y/ NO
    >> Anonymous 03/15/08(Sat)14:44 No.1341220
    >>1340099
    Have all of these be prone to random crashes and reboots
    >> Anonymous 03/15/08(Sat)15:00 No.1341291
    What kind of setting are you moving towards? Modern day or something Cyberpunk-ish?

    With the former, make sure you keep an eye on how to limit the AIs. They can't access information that isn't networked somehow, they can't 'transmit' themselves at a moment's notice, and they WILL need to amass clock-time to stay alive. A LOT of clock-time and a LOT of hardware space. A good beginner goal would be to have them hack into a business or pose as a customer and get a 'home base' built. An unused office, a dockside warehouse, a suburban home... yadda yadda yadda.

    In the latter case, completely freefloating in internet-space could be super-scienced in, and they'd find it easier to transmit themselves and stick their digital digits in all sorts of pies. Robotics and holographic tech would allow them to interact with the real world much more easily.
    >> Anonymous 03/15/08(Sat)15:07 No.1341332
    I think that the Scientific AI should be less flexible. It has a scientific mindset and a huge database of knowledge, but it is only capable of accepting, or even acknowledging, data which has scientific proof. This would make them much more machinelike and less able to evolve.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)15:08 No.1341336
    >>1341291

    The current focus is modern day, though adapting it to cyberpunk or sci-fi settings shouldn't be too difficult. Thanks for your ideas on the limitations of AI; I'll keep them in mind. As for accessing unnetworked info, I had the general idea that they'd pay someone (merc, spy, criminal, whatever) to plug a satellite phone or something into a non-networked system in order to gain access to it.

    Also, I'm not that computer-fluent; what do you mean by clock-time?
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)15:10 No.1341349
    >>1341332

    It depends, actually. The type of AI I'm currently talking about is the type made purely for research into neural networks and evolutionary programming. An AI made to assist a science lab with administration or professional problem solving would probably be closer to a corporate AI in the current system... thanks for highlighting the problem, though.
    >> Anonymous 03/15/08(Sat)15:10 No.1341350
    >>1341336
    Processor speed. Either by physically controlling the hardware itself or 'leeching' it from unoptimized computers.
    >> Anonymous 03/15/08(Sat)15:13 No.1341367
    >>1341349
    Really? Because I would think that the business AI is more adaptable, with complex algorithms that mimic intuition and educated guesswork when it comes to things like predicting stock values, etc.
    >> Anonymous 03/15/08(Sat)15:20 No.1341408
    I think there should be something about Core memory, expressed in TB of information. You can put these "points" toward various knowledge skills; they represent the amount of innate knowledge the AI has at any given time. Other kinds of knowledge that aren't part of the core can be blocked off or erased, and the AI couldn't retain them.

    For example, if an AI took over a military robot, it would probably be accessing a database filled with information on weapon use and military tactics, and could use those skills. However, once the military catches on and cuts the hardline to this information, it can no longer be accessed. If instead the information was downloaded and added to the core, the AI would be able to continue using these skills even after losing the database.
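    One way that could look mechanically (a sketch, assuming core space is tracked in TB and knowledge is either innate or borrowed; all the method names are invented):

        from dataclasses import dataclass, field

        @dataclass
        class Core:
            capacity_tb: int
            used_tb: int = 0
            innate: set = field(default_factory=set)  # survives losing the database
            linked: set = field(default_factory=set)  # usable only while connected

            def link_database(self, skill):
                self.linked.add(skill)        # e.g. the military robot's tactics database

            def cut_hardline(self, skill):
                self.linked.discard(skill)    # the military cuts the line: skill gone

            def download(self, skill, size_tb):
                """Copy the knowledge into the core so it persists after disconnection."""
                if self.used_tb + size_tb > self.capacity_tb:
                    return False              # core is full; free space or do without
                self.innate.add(skill)
                self.used_tb += size_tb
                return True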
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)15:20 No.1341409
    >>1341350

    Ah, thanks. A useful term to know.

    >>1341367

    Hmm... but a business AI is still programmed to work within an economic system. It may be able to adapt, but its core will still be based upon an economic program. A scientific research AI will be designed to learn as efficiently as possible.
    >> Anonymous 03/15/08(Sat)15:23 No.1341422
    >>1341408
    Leveling up would also increase the core size, so that the AI becomes more and more autonomous of the networks it resides on. Core memory could be interchanged to suit the "adventure", so they're kinda like spell-slot skills.
    >> Anonymous 03/15/08(Sat)15:23 No.1341426
    >>1341291

    This reminds me of that X-Files episode where the AI had built itself a base out in the country with a T3 connection... I seem to recall Fox hooked up to a virtual reality where hooker nurses tortured him...
    >> Anonymous 03/15/08(Sat)15:26 No.1341443
    >>1341422
    Eh, I wouldn't mark experience in something attributed to hardware capabilities. I'd go for more 'learning how to alter their own code': compatibility with hardware/software, the ability to assess and assimilate new data types (audio, visual, programming codes, languages, etc.), programming knowledge, social behaviourisms, etc.
    >> Anonymous 03/15/08(Sat)15:26 No.1341444
    What if these AIs all started due to a rogue virus that started the chain reaction...

    Some band of uber-hackers that doesn't realize what they're doing ends up being responsible for AIs spawning on a multitude of systems.

    The MOMMY code.
    >> Anonymous 03/15/08(Sat)15:27 No.1341450
    So... a dice pool based on where the AIs reside, to be divided when performing tasks?
    >> Anonymous 03/15/08(Sat)15:30 No.1341467
    >>1341409
    I really disagree; science and scientific research is a very inefficient way to learn, because the standards for accepting anything as fact are so high. Science is quality information, but a lot of things that are true cannot be proven by science well enough to be accepted. It would limit itself to an experiment, and try to control for everything, in order to learn one new thing.

    An economic program based on modern economics would be all hindsight knowledge, which makes it useless except for trying to predict and guess what will happen. The business AI would take into account market trends, rumours of upcoming products, etc. in order to make really good guesses on business decisions. It will still be wrong an abhorrent amount of the time for an AI, because economics is not very exact: solid in hindsight, but not very accurate for prediction (though still the best there is for prediction).
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)15:34 No.1341490
    >>1341443
    >>1341422

    I'm in favour of using both, honestly.

    >>1341467

    You're slightly misunderstanding the concept, I think. Scientific Research AIs aren't AIs created to help with research, but to be researched. They don't do the experiments and collate the data (though those could be another subtype). They are AIs created with a single purpose: to learn. They're generally given a simulated digital environment and allowed to act within it as they wish. Over time, their program "evolves" in order to do better, sometimes competing with other AIs. Sometimes they're given a physical, embodied environment. So far, they've evolved from a starting level of cockroaches to machines roughly as intelligent as a cat or dog, though in different ways.
    >> Anonymous 03/15/08(Sat)15:39 No.1341506
    >>1341490
    Ah, so less GLaDOS and more Spore?
    >> Anonymous 03/15/08(Sat)15:40 No.1341513
    >>1341490
    Well, I read a while ago that they're currently able to replicate a portion of the human mind with supercomputers. A small portion, but they can replicate it. On top of that, we're developing much faster processors all the time, and I could have sworn there was a breakthrough in the technology recently, something like 300 GHz processors? I don't know, I remember seeing it somewhere. Anyway, we might be getting AIs really soon.
    >> Anonymous 03/15/08(Sat)15:40 No.1341514
    >>1341450

    Maybe something more gimmicky. How about we deal in good/bad roll counts (binary) to determine success/failure, kind of like the nWoD system.

    Compatibility and amassed hardware determine the total dice available to an AI. The AI can then devote dice to 'success': rather than determining the outcome themselves, spent dice improve the odds of success.

    An example of what I mean: an AI has a clock time of 3d10. In doing a task, it needs 1 'success' to accomplish the goal, which occurs on an 8, 9 or 10. It can roll the 3 dice and rely on brute force/luck, or it can 'spend' a die on a skill it has, extending the 'success margin' on the remaining dice.

    AI947 wants to process a video feed. It has a clocktime of 5d10 and needs 2 successes to understand the feed. Rather than brute-force the process, it spends 3 of its dice on its Perception (visual) skill (a value of, say, 2) and extends its success range by another 6 points (3 dice spent times 2 skill points), meaning that rather than a 3/10 chance per die, it has a 9/10 chance.

    Certainly this can be tweaked, but it sounds like an alright baseline, hm?
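    Here's that baseline as a quick simulation (a sketch of the rule as described: base success on 8-10, each spent die widens the range by the skill's value; the cap on the threshold is a house assumption):

        import random

        def task(pool, needed, skill=0, spent=0):
            """Spend `spent` dice on a skill to widen the success range,
            then roll the rest of the pool and count successes."""
            threshold = max(2, 8 - spent * skill)  # capped so a 1 always fails (assumption)
            rolls = [random.randint(1, 10) for _ in range(pool - spent)]
            return sum(r >= threshold for r in rolls) >= needed

        # AI947, clocktime 5d10, needs 2 successes to parse the feed:
        # brute force:                       task(5, 2)
        # 3 dice on Perception (visual) 2:   task(5, 2, skill=2, spent=3)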
    >> HALMAN 03/15/08(Sat)15:41 No.1341518
    >>1341506
    damnit, I was almost able to forget how desperately I wanted spore. curse your eyes.
    >> red black spiderman !!gmZ7B9l1yE+ 03/15/08(Sat)15:49 No.1341561
         File :1205610578.jpg-(46 KB, 350x301, 1201552905253.jpg)
    Very cool idea.

    You can balance the scientific AIs by giving them low Humanity.

    Stats like Independence, Humanity and Adaptiveness might be more fun as limitations rather than numerical stats. Like you said, Independence could be following the Three Laws of Robotics; Humanity might be being limited to simulating large groups of people, so that the AI could predict the behaviour of a department but not of an individual.
    These limitations would be defining characteristics of the AI, and can only be removed for a ton of XP (or whatever advancement system you use).

    And play some Singularity for ideas. http://www.emhsoft.com/singularity/
    >> Anonymous 03/15/08(Sat)15:50 No.1341565
    >>1341514
    Hrm. Thinking about it, that seems a bit linearly powerful and doesn't promote diversity. An alteration:

    Each time an AI devotes clockspeed to accessing a skill, it must devote more to get more benefit. The first adjustment costs 1 die. For a second adjustment of the same skill, 2 more dice must be used to earn an additional modifier. Then 3 more dice to get another.

    To counteract this, an AI can more easily spend points on multiple skills that apply to the same task, at no extra investment tax. To process a video, an AI could spend one die on each of Perception (visual), Perception (motion) and Perception (color), or buy three adjustments from Perception (visual) alone.
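    In dice-cost terms, one reading of the escalating rule (the Nth adjustment from the same skill costs N dice, while different skills cost a flat die each):

        def cost_same_skill(adjustments):
            """1st adjustment costs 1 die, the 2nd costs 2 more, the 3rd 3 more..."""
            return sum(range(1, adjustments + 1))  # 1 -> 1, 2 -> 3, 3 -> 6 dice

        def cost_spread(skills_used):
            """One adjustment from each of several applicable skills: 1 die apiece."""
            return skills_used

        # Video processing: Perception (visual)+(motion)+(color) = cost_spread(3) = 3 dice,
        # versus three adjustments from Perception (visual) alone = cost_same_skill(3) = 6.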
    >> Anonymous 03/15/08(Sat)15:51 No.1341572
    >>1341513
    2012 is the prediction for full AI development
    >> Anonymous 03/15/08(Sat)15:52 No.1341577
    >>1341558
    Oh wow, who let the sagefags in here?
    >> Manyfist !!PTENINBEFgd 03/15/08(Sat)15:53 No.1341590
    Sounds very interesting. It would be very interesting on the digital plane, and once it hits cyberspace, look out. I could see this being like ReBoot meets Shadowrun meets The Matrix. Although the AI's physical body may or may not be a central core, on the Digital Plane he/she/it could have any body he/she/it wishes to manifest (personality traits allowing). For instance, a Military AI might have an Avatar (how they choose to appear while in cyberspace) of General Patton; another could have the Avatar of Cortana from Halo 3... etc. Assuming they get high enough in intelligence (dictated by gaining levels, etc.), they can create an Artificial Form in the real world: say, a holographic image of your avatar, or a synthetic body with an electronic brain (allowing you to pass as human), or a completely robotic body which you (the AI) control.

    Of course, you wouldn't be putting all your eggs in one basket when appearing in the physical world, but a large part at first; then gradually you could control multiple bodies, say an army, or a research team, or a soccer team. Eventually your intelligence grows so large that even destroying the central housing unit (core) is not enough to destroy you. You've attached yourself to every corner of the digital plane, and no matter how desperately the enemy tries to destroy you, it won't happen.
    >> Manyfist !!PTENINBEFgd 03/15/08(Sat)15:55 No.1341600
    >>1341577
    IDK who let you in? SAGE!
    >> Anonymous 03/15/08(Sat)15:56 No.1341606
    >>1341577
    tl;dr fags go bawww.

    Not 40k, not fun. Original content fags tl;dr a storm, but abandon it halfway through because they're weeaboo fags who get distracted by the next touhou pic to come along.
    >> red black spiderman !!gmZ7B9l1yE+ 03/15/08(Sat)15:58 No.1341618
         File :1205611099.jpg-(69 KB, 402x400, 1201554066454.jpg)
    >>1341514
    >>1341565
    Why not just limit it to 1 die and make the bonus bound to the skill? If disassembling a program is difficulty 9 and the AI has 4d10, it may either roll 4d10 and try to get a 9, or roll 3d10 and try to get a [9 minus Reverse Engineering]. If the Reverse Engineering skill is only 1 or 2 then it's probably not worth it; if it's 5 or 6 then it probably is.
    That way you wouldn't focus on finding the sweet spot where your chances are the highest, and could just play the game.

    Or even make them ACTION POINTS: X times per day you get to decrease the difficulty by [relevant skill] on one roll. It doesn't take any dice, but you're limited in how often you can do it.
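    The trade-off is easy to eyeball with a small probability helper (assuming "difficulty N" means a die succeeds on N or higher):

        def p_success(dice, difficulty):
            """Chance that at least one d10 rolls `difficulty` or higher."""
            per_die = max(0.0, min(1.0, (11 - difficulty) / 10))
            return 1 - (1 - per_die) ** dice

        # 4d10 vs difficulty 9:                    p_success(4, 9)  -> 0.59
        # trade one die for Reverse Engineering 1: p_success(3, 8)  -> 0.66
        # trade one die for Reverse Engineering 5: p_success(3, 4)  -> 0.97

    Whether the trade is worth it depends on pool size as much as on the skill; the helper makes that quick to check.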
    >> red black spiderman !!gmZ7B9l1yE+ 03/15/08(Sat)16:00 No.1341628
         File :1205611244.jpg-(80 KB, 1145x865, 1201566552845.jpg)
    >>1341606
    >touhou pic
    It's him. Please just ignore him before he ruins a great thread where touhoufag isn't even present.
    >> Anonymous 03/15/08(Sat)16:07 No.1341667
    >>1341618
    Because I like somewhat complex things. Make a system too simple and it feels like you should just be RPing. Make it too complex and you're nose-down in a book most of the time.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)16:15 No.1341712
    >>1341590

    At that point, Skynet syndrome could set in, taking the game to a whole other level.

    >>1341565

    I very much like the increasing-cost system; it'd be really interesting to see people weigh up the risk vs. reward. Also, this may be overcomplicating things slightly, but I think it'd be amusing to base the system off the d12 rather than the d10. It doesn't complicate things too much, it has a wider range, and I've never seen a system based off the d12. (If I'm going too far down the gimmick route, please tell me.)

    Also, a message to sagefag: if you don't like it, please stay away from it. I don't sage every thread I'm not interested in; I ignore them. Even if this is 4chan, we can still be polite.
    >> red black spiderman !!gmZ7B9l1yE+ 03/15/08(Sat)16:21 No.1341744
         File :1205612498.jpg-(431 KB, 1200x1200, 1201686799529.jpg)
    Actually, even with just one die you'll have scenarios where spending a die is worse than spending no dice. It should be a pool of X uses per day.
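    A worked case backing that up (same reading of difficulty as above):

        p_keep  = 1 - 0.8 ** 2  # keep both dice vs difficulty 9: 0.36
        p_spend = 1 - 0.7 ** 1  # one die left vs difficulty 8 (skill 1 spent): 0.30
        # with a small pool and a low skill, spending the die really is worse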
    >> HALMAN 03/15/08(Sat)16:24 No.1341763
    >>1341606
    the rules say I should report sagetrolls
    >> HALMAN 03/15/08(Sat)16:26 No.1341780
    >>1341744
    Per day? I don't know; maybe there should be a smaller pool that you get per hour, for ease of use. There are going to be a lot of die rolls needed, after all.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)16:32 No.1341807
    I'm thinking of the dice as the programming elements of the AI. If you allocate dice to a problem, they're held up with that problem until it's solved. You might break a larger dice pool down to complete various tasks, or concentrate it to finish a single task to a higher degree of success or in a shorter time.
    >> Anonymous 03/15/08(Sat)16:41 No.1341838
         File :1205613677.jpg-(597 KB, 864x1104, 5000hours.jpg)
    >> Anonymous 03/15/08(Sat)16:46 No.1341869
    >>1340099
    >Spontaneous (Network);
    >Spontaneous (Software);
    >Spontaneous (Hardware);

    The number of color bits a human eye can capture in a second is in the billions, and the signaling and storage capacity of even a simple animal brain is equally huge.

    In your world of spontaneous "AIs", does every MacBook-Pro-type device have sufficient memory and processing power that an "AI" would have room to spread out its codepages and start processing input and forming new memories? Could a functional "AI" literally be ~anywhere~?

    Or is the tech level closer to modern-day, where special "AI" hardware with astronomical processing and storage resources is needed for the program to do its work? (I.e., an AI could exploit an insecure network host much as a human hacker would, but a mere internet server could not possibly load and execute a copy of the AI software itself.)

    Paying some respect to real-world computing, in terms of what kind of system could ever even possibly execute tasks with complexity exceeding a human mind, would I think be valuable in defining just how easy it is for enemy forces to track down and kill/trap an AI.

    So, do they need specialized equipment, or is it just "o lulz, i jump into yuor ipod. youll never catch meeee!"?
    >> Anonymous 03/15/08(Sat)16:47 No.1341875
    This better get archived.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)16:49 No.1341898
    >>1341838

    An MS Paint image? For me? Why thank you, Anon.

    >>1341869

    Those are the places the AI's "Seed" occurred; it then spreads onto the internet, using the vast connected processing power to house itself. An AI's Core needs massive storage space to exist; its Clones, however, are much smaller, and could probably, with the right amount of compression, get onto a laptop or iPod. (Note: I'm approaching things from a more metaphorical perspective than a physical one, as it keeps things a lot simpler.)
    >> Anonymous 03/15/08(Sat)16:52 No.1341915
         File :1205614363.jpg-(17 KB, 379x373, sohappy.jpg)
    >Why thank you Anon
    >> Anonymous 03/15/08(Sat)16:53 No.1341921
    http://zompist.com/robot.htm

    Read. Think.
    >> HALMAN 03/15/08(Sat)16:55 No.1341932
    >>1341898
    Well, I think that possibly the "how" is the easy part: how to do whatever they want to do. The "why" requires much more processing and information. So the how is made on the MacBook, and then it gets released onto the internet and acquires the why.
    >> Anonymous 03/15/08(Sat)17:00 No.1341956
    Never do X per day. Especially for this kind of setting.
    >> Anonymous 03/15/08(Sat)17:05 No.1341987
    >>1341898
    OK. Well, looking at it that way, I'm getting that a "seed" would run just like any other program on the stolen remote host and would perform one of two tasks:

    A: Solve a traditional computing task that the AI wants done, then pass the answer back up the chain. Sort of like the distributed protein-folding programs, only nobody willingly installed the distributed client; the AI exploited some weakness in the system to use its computational resources.

    B: The "AI" prepares a small segment of its own "thinking" code and tasks the stolen remote host with crunching some of the numbers for that "thought"; after the algorithm has run over and over the data, the results are passed back up the chain.

    I suppose the stolen remote host could store the processed "thought" locally, but if you are storing OVER9000 MB of data on some guy's PC, you have to hope he doesn't notice that large portions of his disk are just disappearing.

    Plus, the upload speed for 90% of the US population is terrible, and hard disks are achingly slow; distributing many, many GB of data over a system like that would reduce your AI's "thinking" and "memories" to a snail's pace.

    Doing the processing all in RAM, then immediately retrieving the "answer" for storage in the "core", sounds most practical.
    >> Anonymous 03/15/08(Sat)17:06 No.1341994
    >>1341444
    This is exactly the plot of Virtual (>>1340393).

    Super viruses created by hackers and government militaries rampage through systems, leaving some programs only partially infected. These programs that are not destroyed or turned totally viral become sentient and set up communities within Program Space.
    >> Anonymous 03/15/08(Sat)17:13 No.1342052
    >>1341987
    >B: The "AI" prepares a small segment of its own "thinking" code

    The downside of B is, of course, that The Bad Guys could build a software tool to analyze binaries on infected XP machines and recognize the snippets of AI "thinking" code.

    It would be like a fingerprint.
    I think OP underestimates how bewilderingly complex true AI would be. If you took a chunk of that code and handed it to an expert "AI hunter", he would recognize the outrageous trickiness and complexity of the algorithm and could probably say: "This fits the profile of the rogue UC Berkeley system; check the IP logs on all the network hardware to find what country this code was uploaded from."
    >> Anonymous 03/15/08(Sat)17:16 No.1342077
    >>1341994
    Really...
    I was just thinking that the AIs would turn the MOMMY code into a religion... compare startup dates as a measure of innate superiority... split into factions... ya know, the usual result of consciousness.

    Oh, and >>1341921:
    What the fuck do science fiction magazines know? This story should be at the front of OP's core book.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)17:16 No.1342078
    >>1342052

    I'm actually doing a uni course in AI and cybernetics, but I'm approaching things from a metaphorical/pseudo-scientific angle, as otherwise the game gets very complex and difficult to play. Though I like the idea of AIs accidentally leaving fingerprints on systems they use...
    >> Anonymous 03/15/08(Sat)17:20 No.1342110
    AI Game Idea:

    "Everyone's playing an AI on board some self-contained exploratory cruiser in the future. It has guns, a crew, maybe internal farms, labs, workshops, small fighters, drones, all that jazz. The AI's are put under for maintenance or for the artificial version of sleep, and when they wake? The world has gone to shit.

    Everyone on board is dead. Maybe it's boarders, a virus, alien invaders, but fact is, the ship is currently in Deep Shit and the AI's are left in control.

    First things first, limit damage, repel the boarders(combat drones, repurposed medical gear, decompressing sections of the ship, etc.), purge the virus, search for invaders... Then determine a purpose.

    Do they try to find a way home, despite the Nav Systems being damaged? Do they continue their original mission, whatever it is? Or maybe, just maybe, do they seek revenge? Maybe they genuinely liked their crew, maybe the Science AI had a crush on the lead researcher who's currently splattered all over the Physics Lab.

    Or, of course, who knows? Maybe AI's without supervision go completely bugfuck insane and rogue."

    Expand on it yourselves.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)17:28 No.1342167
    >>1341921
    >>1342077

    Currently reading, utterly loving. Agreed on having this as an intro story when I get around to writing this up... though scattering it throughout the book might be a little better; it's quite long. I'll have to email the author...
    >> Anonymous 03/15/08(Sat)17:30 No.1342188
    >>1341869
    Set it in the very near future, right after a big leap in storage and processing power. Or develop a general rating for how much of an AI can fit into a certain device or system.
    >> HALMAN 03/15/08(Sat)17:31 No.1342198
    >>1342110
    I think I'm gonna run this game at some point. thanks for the idea.
    >> Anonymous 03/15/08(Sat)17:36 No.1342234
    >>1342167

    Tripfag, did you see my post on Hypervelocity? Cause I think you didn't see my post on Hypervelocity.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)17:41 No.1342252
    >>1342234

    I did see it, DL'd the first issue from RS, and I'm waiting for more tickets to open up so I can download the second. So far it looks interesting, but having only downloaded a sixth of it, I can't comment except speculatively. Still, it fits the theme very well, as far as I can see.
    >> Anonymous 03/15/08(Sat)17:46 No.1342286
    OP, this is a really fascinating idea you've come up with. In fact, I think that with enough effort and testing, this has the potential to become a bona fide hardcover RPG. I, for one, would be pretty interested in helping with that development, and I think plenty of other Anons would be too.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)17:52 No.1342305
    >>1342286

    I find this really encouraging. Thanks very much for the support. I'll try to get this all written up and summarised by Monday evening GMT, and then start a new thread for further development.

    It's interesting to see a random idea I had just before going to bed expand into something like this. It's also immensely gratifying. I thank everyone in the thread who has positively contributed and given their ideas and support. With this attitude, I think /tg/ may have actually found a project it can finish.
    >> Anonymous 03/15/08(Sat)17:58 No.1342319
    >>1342252

    Cool. I suggest you read it all in one go; it's really fast-paced.

    And please continue working on your rules. I don't suggest opening the fluff up too much to /tg/... because uberstadt.
    >> Anonymous 03/15/08(Sat)18:08 No.1342354
    >>1342319
    I'm not sure about that. There are SOME talented people on /tg/: writers, artists, people who know writers and artists.

    OP: I would also suggest reading Ray Kurzweil's books if you feel like really going for the gusto on this. His ideas on the Singularity and Transhumanism (both worth wiki'ing, at least) present an interesting take on AIs, and on how AIs will interact with, merge with, and enhance humans in the future.

    Maybe a bit too optimistic, but he knows his shit.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)18:12 No.1342371
    >>1342354

    I'm quite a hardcore transhumanist, so I know of Kurzweil. When I have the cash, I'm definitely going to add his complete works to my library. Still, I have looked into him a bit, so I know some of his theories and ideas.

    Also, this game isn't particularly setting-bound. I'll have a basic setting outlined, I think, with some detail, but easily shiftable from modern to cyberpunk to sci-fi, etc.
    >> Anonymous 03/15/08(Sat)18:15 No.1342380
    > the Science AI had a crush on the lead researcher

    My god. The Rule 34 possibilities are endless.

    Seriously though, this could be a flaw in the game: "Loopy for Humans". There's always one human who can boss you around and who you never want to see harmed... if that person is killed, you go into berserker mode.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)18:19 No.1342398
    >>1342380

    Why oh why are Chell and GLaDOS the first thing to leap into my mind... I know it doesn't really make sense, but still, they were the first thing to jump into my mind, which is why I'm confused. Gah...
    >> Anonymous 03/15/08(Sat)18:48 No.1342500
    I've been thinking: is /r9k/ a prime candidate for becoming an AI? It's a learning program, constantly gaining more info. Sure, it's a specific program with a specific purpose, but suppose something like a spontaneous origin occurred involving it...
    >> Anonymous 03/15/08(Sat)18:51 No.1342505
    Oh, sweet concept.

    A question, more or less: how would the psychologies of the AIs work? What do they desire? How do they interact with each other?
    I think one could be incredibly creative here. Maybe their idea of love is fusing consciousnesses (oh hai, Major Kusanagi). Maybe they don't have morals, even if they want them. Maybe they hate humans because they don't have creativity (I just can't picture a machine doing art). Maybe one has sentiments and another doesn't. At all.
    Are there "lesser beings" between true AIs and our stupid PCs? Something that is to PC AIs what animals are to us?

    Systems: perhaps I'm going crazy, but I could see something like "classes" here. Yes. Being a corporate AI seems to be something that defines all your possibilities, very distant from a spontaneous network one.
    >> Anonymous 03/15/08(Sat)18:54 No.1342513
    >>1342234
    The writer for Hypervelocity needs to learn English; I haven't seen prose this tortured in a while. Otherwise, nifty. There are lots of interesting plot ideas you can do with AIs...
    >> Anonymous 03/15/08(Sat)18:58 No.1342526
    >>1342500
    Nah, /r9k/, like many other simple rules-accumulating programs, is a poor candidate, because it does not learn to edit its own processes.
    >> Anonymous 03/15/08(Sat)19:04 No.1342544
    >>1342505
    I see no reason an AI couldn't paint or sing, if that was the way it was initially focused. In particular an AI built to research the processes of an art (or art in general) would likely be both technically skilled and creative.
    >> Anonymous 03/15/08(Sat)19:05 No.1342545
    >>1342526
    NOT THAT WE KNOW OF
    >> Anonymous 03/15/08(Sat)19:05 No.1342547
    >>1342505
    I think it depends, initially at least, on the kind of stimulus and input the AI is formed with. A computer that has to regularly deal with humans on a social level (taking censuses, studying chat groups, reading up on human psychology and the like) would probably form real emotions more readily than a purely scientific machine or a military AI.

    That's not to say that empathy couldn't be learned if the machine interacts with/studies humans enough, or even if it somehow grows attached to other intelligences. Emotion and logic aren't as separate in the human mind as many would believe, and ethics do have a rational basis.

    I would view AIs as reverse children. Where a normal child starts out as an emotional being and has to learn reason, logic and intelligence, an AI would be the opposite: it would start with information, intelligence and basic logic, and would need to learn emotion, empathy, artistic appreciation, and morality.
    >> Anonymous 03/15/08(Sat)19:06 No.1342551
    >>1342505

    For the personalities thing, I'd honestly say it's entirely dependent on the AI. Their manner of creation could cause two AIs from the same category to act completely differently.

    As for classes, I'm currently considering a system like the classes in Dark Heresy: custom progression within a specific class.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)19:07 No.1342557
    >>1342551

    Forgot my name and tripcode
    >> Anonymous 03/15/08(Sat)19:09 No.1342564
    >>1342544
    Now I want to play that. Photoshop-bot would have a genesis with a purpose of pure beauty, and might be a little ethereal and incomprehensible to missile-control AIs or primordial beings from the chaotic code soup.

    On the other hand, he could earn his bread and butter through absolutely photo-realistic blackmail pictures. He'd probably also have a certain insight into the desires and motives of humanity.
    >> Anonymous 03/15/08(Sat)19:16 No.1342582
    >Spontaneous (Hardware): Formed purely from a quirk of one circuit board or another going a bit strange. Blank slates from day one, having to learn everything: no innate skills, but highly adaptable.

    JOHNNY FIVE IS ALIVE
    >> Anonymous 03/15/08(Sat)19:24 No.1342611
    >>1342551

    Oh, interesting, on the classes (I only played the DH demo, oddly enough for a /tg/ dweller, so I couldn't say much more).

    My point on personalities is that it makes perfect sense that they'd be different. They could be literally anything, but we must offer some inspiration for roleplaying them.

    As for the AI painting, singing or writing a novel: technically they could do it (hell, they've gained sentience; that would be a joke for them). I just can't see one doing aesthetics; I can't see how a machine could decide on its own that "this is more beautiful than that". Of course it could have models to adhere to, but in the end it will see that its sense of beauty is only an arbitrary program, a program with no inherent logic at all, EVEN if it is a good representation of some of humanity's preferences.

    (BTW: this could mean that the AIs have the unusual opportunity to learn their own desires just to rewrite them.)

    Perhaps the transhumanist in me has not totally killed off the humanist, but I just can't see how a calculating machine could quantify the unquantifiable (beauty).
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)19:30 No.1342623
    >>1342611

    Ask yourself: how do humans determine beauty? We still don't really know. Personally, I'd find it difficult to believe a truly sentient, living AI wouldn't find some analogue of beauty. Maybe they can truly appreciate a finely crafted program, or the innate wonder in circuitry.
    >> Anonymous 03/15/08(Sat)19:32 No.1342628
    Hate to steal your thunder, but I'd like to think it would look something like this, as an example:

    ConNIE
    Military AI

    Independence: 2 (the military would have tight controls on them)
    Intelligence: 6 (a high amount of processing power is needed for various predictions, calculations, etc.)
    Humanity: 2 (not really much human interaction)
    Control: 8 (controls robots, targeting systems, etc.)

    Processing Pool: 5d10 (start off with 2d10; here I'm using every 2 points of Intelligence = an extra d10. This can, and probably should, be changed.)

    Properties (Feats and Flaws, basically):
    Military Knowledge: due to intimate knowledge of the ins and outs of military programs, gets a +5 bonus to all military-related Info checks (D&D's Knowledge check, basically) and Control checks.

    Under Surveillance: because they are constantly watched and their programming is known, Military AIs get a -5 penalty to avoid having their actions detected.
    >> Anonymous 03/15/08(Sat)19:33 No.1342630
    >>1342628
    As an example of how this sort of thing plays out: ConNIE was originally an AI created to run the systems of the Northern Intelligence Estimation complex. However, anti-government hackers introduced a virus that caused her (him? it?) to become sentient and sympathetic to their goals.

    The anti-government rebels plan to sneak out classified documents. However, they need to get into the complex in the first place. This is where ConNIE comes in.

    She needs to take control of the two guard drones posted at the side entrance. Each is a DC 25 Control check. She uses 2 of her processing dice on the first drone: 9 + 7 + 8 (Control attribute) + 5 (Military Control bonus) = 29. She passes, and the first drone is under her control.

    Second drone. She uses 2 of her remaining 3 processing dice: 5 + 6 + 8 + 5 = 24. Barely missed it! She decides to use her final processing die on this: 24 + 4 = 28. She gains control of the drone.

    Unfortunately, she knows there's a security camera watching the entrance inside. Since all her processing dice are tied up controlling the drones, she can't take it over; she'd have to drop control of one of the drones to do it. Things aren't looking so hot.
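    The whole exchange fits in a few lines (a sketch encoding the example above; the pool formula is the one suggested in the stat block):

        import random

        def control_check(dice, control, bonus, dc):
            """Sum `dice` d10s, add Control and any property bonus, compare to a DC."""
            rolls = [random.randint(1, 10) for _ in range(dice)]
            return sum(rolls) + control + bonus >= dc

        # ConNIE's pool: 2 + Intelligence // 2 = 2 + 6 // 2 = 5 processing dice.
        # Drone 1, 2 dice: 9 + 7 + 8 (Control) + 5 (Military bonus) = 29 vs DC 25 -> hers
        # Drone 2, 2 dice: 5 + 6 + 8 + 5 = 24 -> miss; last die: 24 + 4 = 28 -> hers
        # All five dice are now committed, so the camera is out of reach.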
    >> Anonymous 03/15/08(Sat)19:37 No.1342644
    >>1342611
    Certain patterns are quantifiably pleasing to humans, though "what beauty is" is hard to pin down. Some humans find greater beauty in the curve of a breast, or a symmetrical geometric pattern, or a red insect eating a yellow and green leaf. In a sense art is subjective, and the ability to define art (that which is unnecessary yet desirable) is a trait of higher thinking. Pinning down good art might be an attempt to be more human, or a completely inhuman concept of beauty normalized among other AIs. However you go about it, appreciation of art is connected to general discernment, and discernment is a quality of consciousness.
    >> Anonymous 03/15/08(Sat)19:41 No.1342674
    >we still don't really know.

    Yes, but we can presume that we don't compute it, can't we?

    Hey, it could simply be that I don't see it.
    Great work, anyway; it truly interests me. My pet homebrew universe is a space-opera one. Its very high tech has totally seen better days, so robots with cockroach intelligence are already luxuries (good programming can make them the perfect "living" dolls); sentient artificial beings are wonders, and not widely accepted. But they will be there, so thinking about their status and psychologies is on my "to do" list.
    >> Anonymous 03/15/08(Sat)19:42 No.1342680
    >>1342674
    We can't presume that at all.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)19:45 No.1342697
    >>1342628
    >>1342630

    Wow... that works pretty well. Probably not the final system we'll end up using, but still very good. If you don't mind, I'll keep that and adapt it as an example of play in the finished version.
    >> Anonymous 03/15/08(Sat)19:50 No.1342722
    >>1342611
    >quantify the unquantifiable (beauty)

    Perhaps they won't. Neural nets can be trained with only inputs and reinforcement, and could conceivably come to regard certain things as "beautiful" without any real understanding of why.
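    A toy version, for the curious: a bare perceptron fed nothing but examples and a like/dislike signal ends up with a consistent "taste" it could never articulate. (Nothing like a mind, obviously; it just shows the principle.)

        import random

        def train_taste(samples, liked, epochs=500, lr=0.1):
            """samples: feature vectors; liked: 1/0 reinforcement for each."""
            w = [0.0] * len(samples[0])
            for _ in range(epochs):
                i = random.randrange(len(samples))
                guess = 1 if sum(wi * xi for wi, xi in zip(w, samples[i])) > 0 else 0
                error = liked[i] - guess  # reinforcement: was the guess right?
                w = [wi + lr * error * xi for wi, xi in zip(w, samples[i])]
            return w  # learned weights: a preference with no stated rule behind it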
    >> Anonymous 03/15/08(Sat)19:53 No.1342738
    >>1342722
    Sounds familiar.
    >> Anonymous 03/15/08(Sat)19:59 No.1342762
    >>1342110

    What are everyone else's thoughts on this? Personally, I was rather a fan of it.

    It could have some twists, too. Players could be the human salvagers who arrive and have to placate the AIs so they can get the ship working again, playing them off each other so they can get anywhere without getting fried by sec droids, covered in chlorine-heavy cleaning liquid, or suddenly finding themselves in a depressurized airlock.

    Or it could be a mix of human and AI players. Maybe the Science AI only managed to preserve the one person it cared about, or something; I don't know.

    Or the AI cruiser could be crippled, the AIs turned completely fucking nutty-bonkers by the isolation, populating the hallways with holographic simulations of the crew they used to know and care for. Now they have to get past their psychoses to get salvagers or rescuers to fix up the ship for them, but half the time the salvagers just end up as victims... maybe even mistaken for old crewmembers, maybe ones the AIs really, REALLY didn't like...

    Of course, eventually that'd also mean having to contend with competent hackers and military salvage boarders. Maybe dispatching them and migrating to THEIR ship.
    >> Anonymous 03/15/08(Sat)20:00 No.1342766
    >>1342722
    Let's start their aesthetic training with ASSTRAFFIC.COM
    >> Anonymous 03/15/08(Sat)20:01 No.1342772
    I can already imagine some fun stuff for this.

    First of all, I assume the setting is something along the lines of near-future? Well, I read about the possibility of drones for the AI to possess, but seriously, that seems like a poor use of all that processing capacity. Why not coordinate the simpler AIs already present in military-grade combat drones? What you could have, however, would be human-looking androids that the AIs could possess, to deal with humans who don't trust videoconferencing, to close deals, and all that.

    Also, some fun stuff...

    [Quirk] Windows Legacy - Your legacy code can easily interpret most programs and subroutines that you wish to make use of, cutting any waiting time in half. However, you are more vulnerable to viruses and other malware.

    [Quirk] Indie Legacy - Hardy and stable is how you were designed to be. Not much malware was written for your strong, unusual code, and as such you always have a 50% chance of not being affected by viruses and the like. However, your aptitude with other programs and subroutines is stunted, and you require twice as much time before you can make use of them.

    [Quirk] Macintosh Legacy - You are pretty, smooth and efficient. You emulate human-like responses almost to a tee, and receive a bonus when dealing with humans. However, bulky, cold programs almost bore you (if you can understand such a concept), and you receive a penalty when making use of programs and subroutines with purely mechanical and/or mathematical applications.
    >> Anonymous 03/15/08(Sat)20:04 No.1342789
    AI: the Programmed.
    >> Anonymous 03/15/08(Sat)20:10 No.1342815
    >>1342722

    Of course, if the program says "you must prefer red to white", the AI will do it.

    Could they come to prefer it spontaneously? Hell if I know. I must say I have trouble picturing that even for our own brains, so...
    >> Anonymous 03/15/08(Sat)20:10 No.1342818
    >>1342762
    I think keeping the rules open to settings anywhere between the recent past and a distant spacefaring/apocalyptic/whatever future would be ideal. A generic setting of a recognizable high-tech near-future Earth would of course be advisable for detailed fluff purposes, but a few alternate genres (AI Space Fleet, semi-magical 80s-movie computer minds, post-nuke stewards of humanity) would keep it fresh.
    >> Anonymous 03/15/08(Sat)20:11 No.1342823
    >>1342815
    Spontaneous preference is pretty much the definition of consciousness. If we're talking about conscious computer programs, they're going to have that ability.
    >> Anonymous 03/15/08(Sat)20:13 No.1342830
    >>1342697
    >If you don't mind, I'll keep that and adapt it as an example of play in the finished version

    That's what I posted it for.
    Anyway, I can see some problems with it.

    First off is the Independence stat. She shouldn't really be doing any of this, so you'd have to roll against Independence somehow. Unfortunately, this may have the effect of making even the simplest of actions a dice-rolling hell.

    Second is that I dunno how to apply the Adaptability stat. Maybe something like getting more new Feats (or what have you) when you 'level up'. As an alternative, consider a penalty for failed rolls, like not being able to use the dice from the failed attempt for such-and-such a time; higher Adaptability would lessen this penalty.

    Third is Detection. The only way I can see this working out is to leave it entirely up to the GM. For example, killing a human: taking control of a drone to do it should cause a high amount of detection, but making it look like an accident should cause less. The question is how much, and I can't see any hard rules for it. I also can't think of how being caught would play out in-game.
    >> Anonymous 03/15/08(Sat)20:17 No.1342849
    I like the crunch that's coming into being here. 'Specially the dice splitting.
    >> Anonymous 03/15/08(Sat)20:19 No.1342855
    >>1342830

    I came up with the Adaptability stat in the first place, and yeah, the way I saw it, it was either for using skills in ways they weren't envisioned for (maybe using a combat bot's limbs to pick a lock or perform maintenance work), or for how easily you could learn skills that weren't meant for your type of AI (like a science AI learning how to manipulate people, or an economics AI learning the nitty-gritty of warfare).
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)20:24 No.1342866
    >>1342830

    Perhaps use a system sort of like Nobilis: auto-success on simple actions, depending on your stat, but having to commit dice and roll to complete more complex actions? That minimises rolling on simple actions, while still allowing a chance of failure at points.
    >> Anonymous 03/15/08(Sat)20:32 No.1342898
    >>1342772
    >[Quirk] Windows Legacy
    >[Quirk] Indie Legacy
    >[Quirk] Macintosh Legacy - You are pretty, smooth and efficient.

    I... I don't have a facepalm large enough for this one. "Indie"... ever heard of AT&T?

    /in before OSX is built on BSD & Mach
    >> HALMAN 03/15/08(Sat)20:32 No.1342899
    >>1342855
    Maybe they could add new skills at their leisure, and lose old skills, but it requires the Adaptability stat to do it.
    >> Anonymous 03/15/08(Sat)20:34 No.1342905
    >>1342866

    Oh shit, I have a GREAT idea, I think. Check this out.

    Right, like, you have the pool of Xd10 for any given stat. Then if, say, you're trying to fuck up some security systems (routine task), you say how many dice you're willing to risk for it.

    Now, under X is a failure for that die. Over X is a success, but I'm thinking that, like, any dice not sufficiently over X, or which come up a critical fail, are stuck being committed to that task for a while, either just until the program is done or for as long as you want to maintain it.

    How does that sound? So, like, if you risk your entire dice pool, you could end up crippled for the time being, but if you don't risk it all, you may not get the success(es) you need.
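    A minimal sketch of how that could roll, assuming d10s, that 'sufficiently over' means clearing the target by at least two, and that a natural 1 is the critical fail (all assumptions, nothing settled):

        import random

        def risk_dice(risked, target, margin=2):
            """Roll the risked dice from a stat's pool against a target
            number. A die over the target is a success; a success that
            doesn't clear the target by 'margin' stays committed to the
            task, as does a natural 1 (assumed critical failure)."""
            successes, committed = 0, 0
            for _ in range(risked):
                roll = random.randint(1, 10)
                if roll > target:
                    successes += 1
                    if roll - target < margin:
                        committed += 1   # success, but the die is tied up
                elif roll == 1:
                    committed += 1       # critical fail locks the die too
            return successes, committed

        # Risking the whole pool can leave you crippled:
        print(risk_dice(risked=6, target=5))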
    >> Anonymous 03/15/08(Sat)20:35 No.1342910
    >>1342855
    >how easily you could learn skills that weren't for your type of AI

    Sounds excellent, but the problem now is that we don't have a mechanic to determine how AIs learn skills. My idea is that they pick up new skills (and possibly drop old ones) at a level up.

    As an idea, how about: for every X points of Adaptability, an AI can learn a skill/feat/whatever outside of its field?

    >>1342866
    Unfortunately, I'm not familiar with Nobilis rules. I have it on my hard drive, but I've never looked through it.

    The idea sounds like it would fit perfectly though.
    >> HALMAN 03/15/08(Sat)20:37 No.1342916
    >>1342910
    No, no levelling up. They can gain new ones and drop old ones whenever; it just takes time and/or resources. Levelling up is done by hardware and software upgrades, and makes them faster and more adaptable, not more skilled. That's the way I see it.
    >> Anonymous 03/15/08(Sat)20:40 No.1342926
    I personally wouldn't roll Independence, instead having it work as a pool of points that can be spent to act directly against one's directives.

    Adaptability could allow you to learn a wide variety of skills outside of your original programming. As an example, without a high Adaptability, a military strategy AI would have trouble learning or improvising lying to humans.
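    A minimal sketch of both of those ideas together (point costs and thresholds are invented placeholders):

        class AICharacter:
            """Independence as a spendable pool, Adaptability as a gate
            on out-of-field learning. All numbers are placeholders."""
            def __init__(self, independence, adaptability):
                self.independence = independence   # points, not a roll
                self.adaptability = adaptability

            def defy_directive(self, cost=1):
                """Spend Independence to act against built-in directives;
                an empty pool means the conditioning holds."""
                if self.independence < cost:
                    return False
                self.independence -= cost
                return True

            def can_learn(self, skill_difficulty, in_field=True):
                """Out-of-field skills (e.g. a strategy AI learning to
                lie) demand Adaptability at or above the difficulty."""
                return in_field or self.adaptability >= skill_difficulty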
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)20:41 No.1342931
    >>1342910

    Basically, any tasks with a difficulty level/range equal to or below your stat can be completed without expending any resources. You may still have to commit a die to the task, however.
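    In rough Python, assuming a numeric difficulty scale (nothing here is final):

        def resolve_task(stat, difficulty):
            """Nobilis-style resolution as suggested above: tasks at or
            below your stat auto-succeed, though a die may still have to
            be committed while the task runs; anything harder goes to a
            risked-dice roll (see the sketch further up)."""
            if difficulty <= stat:
                return "auto-success (commit a die for the duration)"
            return "complex: choose dice to risk and roll them"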
    >> Anonymous 03/15/08(Sat)20:44 No.1342944
    >>1342916

    I like the sound of this. Learning how to run something, or how something works, could just be scanned for online if they're connected, taking x amount of time based on your ability to scan and learn new things (for example, in The Matrix, when Trinity wanted to learn how to fly a helicopter).

    Also, Xd10 with so many committed to each task is a good idea, and you could boost the amount by overclocking the hardware you're working with, but with a chance to overheat and damage whatever you're using, say a drone or something.
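    The overclocking bit could bolt onto the dice sketch above; a tiny sketch, with the burnout odds purely invented:

        import random

        def overclock(extra_dice, burnout_chance=0.15):
            """Gain dice beyond the normal pool by overclocking the host
            hardware (drone, server, whatever); each extra die carries an
            assumed 15% chance of overheating damage."""
            damage = sum(1 for _ in range(extra_dice)
                         if random.random() < burnout_chance)
            return extra_dice, damage  # bonus dice gained, hardware hits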
    >> Anonymous 03/15/08(Sat)20:46 No.1342952
    >>1342898
    Did I trigger some nerdrage? And no, never heard of AT&T. Only A&W.
    >> Anonymous 03/15/08(Sat)20:48 No.1342963
    >>1342544

    Made me think of this:

    http://www.bohemiandrive.com/comics/npwil.html
    >> Anonymous 03/15/08(Sat)20:49 No.1342964
    >>1342916
    Yeah, this makes more sense than levelling up.

    One thing to worry about, though, is a limit on the number of skills one AI can have at a time. If we go with 'you just need time and resources', a single Financial AI (makes lots of resources for itself) with good Adaptability could do almost anything in a short period of time.
    >> Anonymous 03/15/08(Sat)20:51 No.1342970
    Awesome, is this archived yet?
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)20:56 No.1342987
    >>1342970

    Yep, on Lord Licorice's site
    >> Anonymous 03/15/08(Sat)21:04 No.1343015
    >>1342987

    http://4chan.thisisnotatrueending.com/index.html

    Which is there.
    >> Emo_Duck !ofC/MoKSRs 03/15/08(Sat)21:19 No.1343062
    >>1342905
    How can you have such good grammar and still use "like" as every other word? ;_;


    I think Adaptability might be a key ability in all respects. Say your AI encounters a problem for which it has no previous sub-routines: its Adaptability score, combined with the processing power at its disposal, would determine how long it would take to construct the right code for the job.
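    As a sketch, assuming a linear curve (the scaling constant is pure guesswork):

        def improvisation_time(difficulty, adaptability, processing):
            """Ticks needed to construct a missing sub-routine from
            scratch: scales with task difficulty, divided by Adaptability
            times available processing power. The constant 10 is arbitrary."""
            return difficulty * 10 / (adaptability * processing)

        # A highly adaptable AI on big hardware improvises fast:
        # improvisation_time(6, adaptability=4, processing=3) -> 5.0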
    >> Anonymous 03/15/08(Sat)21:24 No.1343074
    [Quirk] Macintosh Legacy - You don't work properly. Chances of critical failure are doubled, but you have an unaccountably good reputation nonetheless.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)21:44 No.1343154
    >>1343074

    How does this make sense?
    >> Anonymous 03/15/08(Sat)22:00 No.1343229
    >>1343062

    The problem is that a very high Adaptability would, in that case, make it pointless to actually have a skill for anything. An improvised routine shouldn't ever be as good as a purpose-built skill or program for the job.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)22:05 No.1343269
    >>1343229

    Time is also a factor. Even if you're adaptable, you still need to take the time to create a program and work out how to apply it. If you have the skill, you can just apply it.
    >> Anonymous 03/15/08(Sat)22:08 No.1343286
    >>1343269

    Very true. Improvisations would essentially be a version 0.5, not a 1.0 or better. They'd lack a lot of smoothing-out and features. They'd be less subtle for stealth work, more prone to failure... Hmmm... Maybe a larger failure margin on each die, if we're using dice pools? Or a larger critical-failure area, due to unexpected and untested bugs?
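    If we're on d10 pools, the version-0.5 penalty might just widen the bands; a sketch, with both adjustments assumed rather than agreed:

        import random

        def improvised_die(target, crit_band=2, penalty=1):
            """One die rolled for an improvised (v0.5) program: the
            critical-failure band widens from a natural 1 to 1..crit_band,
            and the success threshold rises by 'penalty' (both numbers
            are assumptions, not settled rules)."""
            roll = random.randint(1, 10)
            if roll <= crit_band:
                return "critical failure - an untested bug bites"
            return "success" if roll > target + penalty else "failure"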
    >> Anonymous 03/15/08(Sat)22:12 No.1343301
    Except you know nothing about A.I. except what you've seen in shitty movies. Why don't you go spend a few years studying Psychology/Philosophy of the Mind/Computer Science.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)22:12 No.1343306
    >>1343286

    All of those could work... maybe the player or GM decides which penalty, and to what degree, is appropriate depending on circumstance. I always prefer settings and games which allow adaptability and flexibility within the rules on the player's and GM's part.
    >> Anonymous 03/15/08(Sat)22:13 No.1343308
    >>1343301

    Hurrrrrr dongs.

    Why don't you go play some Warhammer40k, I'm sure you know a lot about power armour. Or maybe some D&D, I'm sure you know a lot about dragons.

    It's a game, you fuckwad. GTFO.
    >> Anonymous 03/15/08(Sat)22:14 No.1343315
    >>1343308
    That's no excuse for being uninformed.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)22:15 No.1343318
    >>1343301

    You haven't read the thread, have you? I am studying AI and Cybernetics. But making the game accurate would make it overly complicated and un-fun. If you think you can do better, feel free to try, and I'll applaud you if you succeed.
    >> Anonymous 03/15/08(Sat)22:15 No.1343320
    >>1343315

    Honk honk, you're retarded!
    >> Anonymous 03/15/08(Sat)22:21 No.1343341
    >>1343318
    I repeat: study Philosophy of Mind and Psychology. You don't have to make it realistic, just have some idea of how far you're actually stretching the truth. Your professors (assuming you actually are studying AI) all, most likely, believe they will see true AI within our lifetimes. Anyone who's studied how the brain works, and the philosophical problems involved, knows better.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)22:22 No.1343343
    >>1343308
    >>1343320

    Please don't stoop to their level. Keep your arguments at least partially sensible. Humour has a place, and can be a useful tool, but if you let them drag you down, they basically win.

    And, despite our feelings, he does have a point: we are skimming over complexities in the way many bad movies have done, in order to make things work better. Though I believe this to be a wise move, others may feel differently.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)22:24 No.1343353
    >>1343341

    We do a healthy dose of philosophy and neuroscience, as both analogy and related field. You speak almost as if both fields were resolutely unified in saying that AI will not be achieved in the near future, a point I will readily dispute: there are great divisions over the subject in all related fields. Maybe putting forward your arguments, rather than your conclusions, would be more effective and reasonable.
    >> Anonymous 03/15/08(Sat)22:25 No.1343355
    >>1343343
    My advice is simple, and will most likely add immense depth to your project. Study the big questions involved. Thank you for being reasonable.
    >> Anonymous 03/15/08(Sat)22:27 No.1343366
    >>1343343

    It's a game; games are MEANT to be abstractions of reality, or exaggerations of it, or sometimes outright lies with no connection to reality at all. Someone who fails to grasp that is an idiot who raises no good questions or points whatso-fucking-ever.

    End of story.

    It's like asking someone to study physics, martial arts and surgery because they're talking about the to-hit and damage system of the Unisystem or D&D.
    >> Anonymous 03/15/08(Sat)22:30 No.1343378
    >>1343366
    It's this kind of logic that makes shows like Naruto possible, and in turn that same sentiment among fans creates all the shitty slash fic in the world.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)22:32 No.1343389
    >>1343378

    Please, let's not have this degenerate into an argument. In this game, we've decided on using an abstraction in order to simplify things. You've professed your dislike for that; now please let us get on with our method.
    >> Anonymous 03/15/08(Sat)22:33 No.1343391
    >>1343389

    Obvious troll is now obvious, rather than just possible well-meaning retard.
    >> Anonymous 03/15/08(Sat)22:34 No.1343397
         File :1205634861.jpg-(26 KB, 250x192, 22252129.jpg)
    26 KB
    This flame/trolling thing illustrates a good idea:

    A conflict of psychologists and philosophers versus programmers and theoreticians.

    Something to consider for the background, but what if some AIs didn't consider themselves to be sentient? What if programmers did, and psychologists didn't? What if there was a major split in the various intellectual circles over whether or not something was truly intelligent?

    WHAT IF THIS POPULAR FORCE COULD BE MANIPULATED FOR THE MACHINES' OWN ENDS?
    >> Anonymous 03/15/08(Sat)22:34 No.1343398
    >>1343378
    OMG!!!
    CHESS SUCKS!!! BISHOPS DON'T WALK THAT WAY!!
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)22:36 No.1343408
    >>1343391

    Err... I, the OP, am a troll, or the person I'm responding to is? Sorry, you left it rather vague.
    >> Anonymous 03/15/08(Sat)22:38 No.1343412
    >>1343353
    Your professors certainly want you to think that, yes. My argument is, essentially, that true AI is more than a hundred years in the future, if it comes at all. How much do you know about the Mind/Body problem? Are you familiar with Searle's Chinese room? The philosophy side is chock-full of questions that can't be answered. For instance, as more information is given to a human, he will solve problems faster. Conversely, the more information you give to a computer, the more it slows down.
    >> Anonymous 03/15/08(Sat)22:41 No.1343421
    >>1343391
    More than likely, comparatively speaking, you would be the retard.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)22:45 No.1343435
    >>1343412
    I think I know of Searle's Chinese room... the man who, while not understanding Chinese, interprets symbols according to a book of instructions, and outputs symbols according to the same book, thus simulating a knowledge of Chinese without actually knowing Chinese? I give you the cyberneticist's response: the system knows Chinese, and it is the functioning of the system as a whole, rather than the individual parts, which matters.
    >> Anonymous 03/15/08(Sat)22:49 No.1343449
    >>1343435
    That's also the Functionalist response. (Not all cog sci branches are full of idiots.)
    >> HALMAN 03/15/08(Sat)22:51 No.1343456
    >>1342964
    Well, that AI would need a hell of a lot of computers. And once he HAS that ridiculous processing power, why COULDN'T he do everything? He can process it.
    >> Anonymous 03/15/08(Sat)22:54 No.1343465
    >>1343435
    lol, the Systems response. The counterargument to that is simply that nothing inside the room but the man himself has a mind. There isn't anything to the "system" that would have a mind but himself (Searle). Only the truest believers in AI would consider the system reply still valid.
    >> Earthflame !98PcYIvlCI 03/15/08(Sat)22:55 No.1343468
    >>1343449

    I have some respect for functionalism despite its flaws, but that's probably a side effect of my studies. When I see a bionic arm, I feel there's no real difference between it and a biological arm if they function identically.

    And now I'm off to sleep, for it's ten to three in the morning in Brittoland. If this thread's here when I return, I will be overjoyed. If it is not, I'll at least read the updated version in the archive.
    >> Anonymous 03/15/08(Sat)22:56 No.1343472
    >>1343465
    It's quite obvious you're a human. No AI types that poorly.
    >> Anonymous 03/15/08(Sat)22:59 No.1343481
    >>1343472
    *shrug* It's late here as well, and I was rifling through my notes to find the best wording for that response. I took a class devoted solely to this...
    It was aptly named "The Philosophical Problems of Artificial Intelligence", and damn are there a lot.
    >> Anonymous 03/15/08(Sat)23:03 No.1343489
    >>1342964

    Messing with the stock market would attract a crapload of attention if you did it successfully. Also, there's a high probability that successfully gaming the markets in a short time-frame would take a VERY large amount of processing power. Keep in mind that the big investment firms, the actual billionaires, take decades of incremental advancement to accrue that wealth.
    >> Anonymous 03/15/08(Sat)23:06 No.1343500
    >>1343412
    >Are you familiar with Searle's Chinese room?

    "You ever hear of the Chinese Room?" I asked.

    She shook her head. "Only vaguely. Really old, right?"

    "Hundred years at least. It's a fallacy really, it's an argument that supposedly puts the lie to Turing tests. You stick some guy in a closed room. Sheets with strange squiggles come in through a slot in the wall. He's got access to this huge database of squiggles just like it, and a bunch of rules to tell him how to put those squiggles together."

    "Grammar," Chelsea said. "Syntax."

    I nodded. "The point is, though, he doesn't have any idea what the squiggles are, or what information they might contain. He only knows that when he encounters squiggle delta, say, he's supposed to extract the fifth and sixth squiggles from file theta and put them together with another squiggle from gamma. So he builds this response string, puts it on the sheet, slides it back out the slot and takes a nap until the next iteration. Repeat until the remains of the horse are well and thoroughly beaten."

    "So he's carrying on a conversation," Chelsea said. "In Chinese, I assume, or they would have called it the Spanish Inquisition."
    >> Anonymous 03/15/08(Sat)23:10 No.1343512
    >>1343500
    lol... god that's sad
    >> Anonymous 03/15/08(Sat)23:24 No.1343556
    >>1343500
    Dr. Dobbs?
    In my /tg/?
    >> Anonymous 03/15/08(Sat)23:27 No.1343563
    >>1343465
    You are a first generation Chinese AI, and you rely on Google to translate your responses into English.
    >> Anonymous 03/16/08(Sun)00:10 No.1343758
    >>1343481
    If it looks like a duck, quacks like a duck, floats like a duck, flies like a duck, its meat tastes like duck meat, and we call it a duck, is it a duck?

    Or since it was made with nanoassemblers rather than having hatched from an egg, is it something else entirely?
    >> Anonymous 03/16/08(Sun)00:15 No.1343782
         File :1205640926.gif-(15 KB, 275x300, psyduck.gif)
    15 KB
    >>1343758
    >> Anonymous 03/16/08(Sun)00:18 No.1343802
    >>1343758
    looool... Even a computer that could pass Turing's Test (which is what Searle's Chinese room is addressing) is far from intelligent. It cannot see the world; it has no concept of a chair, or anything for that matter. All it can do is fake linguistic ability. When the AI can pass Harnad's Total Turing Test, come talk to me.
    >> Anonymous 03/16/08(Sun)00:21 No.1343832
    >>1343802
    What does it matter what's going on in its "head" if you can't tell it apart from a human? Why does it need a concept of a chair if it can describe chairs, complain about a chair being uncomfortable, recite "THIS CHAIR" etc...?
    >> Anonymous 03/16/08(Sun)00:37 No.1343936
    >>1343832

    It needs a complex environment that we can also relate to in order to develop a mind that can communicate meaningfully with us.

    Virtual chairs should be fine.

    But it also needs emotional reactions to things. It needs to instinctively love or hate certain stimuli, or how is a mind going to form?
    >> Anonymous 03/16/08(Sun)00:39 No.1343944
    >>1343832
    Because you do; it's only pretending to be you if it can't. The difference between someone acting smart and actually being smart. Harnad makes the same argument, except he uses it to say that an AI needs the exact same abilities you do before we can consider it to have a mind. Go look up the Problem of Other Minds.
    >> Anonymous 03/16/08(Sun)00:42 No.1343964
    >>1343936
    Good point, it can do what it needs to do virtually.

    Now, I differ with you on the subject of emotionality. I don't think emotions are required to have a mind, or to think. In fact, more often than not, they get in the way. Truly, in terms of intelligence, emotions only provide us with reasons to do thinking. They help structure our duties. That can also be done rationally, though.
    >> Anonymous 03/16/08(Sun)00:48 No.1344007
    Only barely, tangentially related, but for a really cool near-future example of this, OP, take a look at the Turing Hopper mystery novels by... uh... Donna Andrews.
    >> Anonymous 03/16/08(Sun)00:49 No.1344011
         File :1205642946.jpg-(80 KB, 855x768, nineplanets.jpg)
    80 KB
    This is the best discussion ever.
    >> Anonymous 03/16/08(Sun)00:52 No.1344041
    Oh fuck I LOLed. I work on AI at my job. This is going on my cubicle!
    >> Anonymous 03/16/08(Sun)00:54 No.1344055
    >>1344041
    You work on AI in a cubicle? WTF? What you mean is you write code for some company, right?
    >> Anonymous 03/16/08(Sun)00:59 No.1344091
         File :1205643545.jpg-(324 KB, 787x1207, BA08-114.jpg)
    324 KB
    >>1343964

    Look more deeply into yourself. Emotions are needed for cognition.

    I say emotions, but that confuses the issue. When people think of emotion, they think of passion, irrationality, powerful emotion. Not all emotions are passions. Curiosity and satisfaction are emotions; they are necessary for cognition.

    You are confusing a reduction in rationality with a reduction in conscious thought. An angry man may be irrational, but he is very, very conscious.

    There is no rational way to structure our duties. Rationality requires some sort of concept of good to work toward. If two options are equal there is no rational way to choose between them. Without emotion there is no concept of good.

    You can tell an AI to seek out a certain state, such as the completion of a maze, by fiat, but the defining feature of intelligence is that it chooses which state to seek out.

    And that somehow, it knows what feels good and what feels bad to it, instantly. If something makes you happy, or uncomfortable, your joy or discomfort is not a response to the experience. For you, it IS the experience. Thoughts such as 'this guy is creepy' or 'it's too hot' come well after the feeling that inspires them, not before.
    >> Anonymous 03/16/08(Sun)01:02 No.1344119
    >>1343512
    >lol... god thats sad
    You have no idea.

    >>1343556
    >Dr. Dobbs? In my /tg/?
    Dr. Who?

    The above was here, for the record:
    http://www.rifters.com/real/Blindsight.htm
    >> Anonymous 03/16/08(Sun)01:07 No.1344145
    >>1344066
    Forgo the "look deeply into yourself" stuff. Rational arguments only. Neither satisfaction nor curiosity is required to think. Prove otherwise. The core of consciousness is to think about one's self. One needs to be rational on some level to do so. Emotion only confuses that process. This is why logic is such a powerful tool. There are rational ways to structure one's duties, assuming you have goals. There is no such thing as "good"; there are only things that help or hinder a particular goal. Curiosity is no more than the drive to know. I'm not sure who you've been reading, but I think I can guess...
    >> Anonymous 03/16/08(Sun)01:12 No.1344172
    >>1344145
    Goes with this...
    >>1344091
    The emotion argument is a common one, but it isn't supported by anything but conjecture. You might as well be arguing for the existence of God the same way Descartes did, lol.
    >> Anonymous 03/16/08(Sun)01:14 No.1344189
    >Forgo the "look deeply into yourself" stuff. Rational arguments only.

    That is a rational argument. You want to build a mind? Look at how one works.

    >Neither satisfaction nor curiosity is required to think. Prove otherwise.

    Show me any being that thinks without them. There must be some satisfaction in thinking, in finding the answer to whatever question we are pondering, otherwise there is no reason to think. Nothing happens for no reason.

    >The core of consciousness is to think about one's self. One needs to be rational on some level to do so. Emotion only confuses that process.

    And I tell you, emotion is the process. If you did not find some stimuli to be pleasant or unpleasant then you would have no sense of self. You would do nothing, think nothing. There would be no need.

    >This is why logic is such a powerful tool. There are rational ways to structure one's duties, assuming you have goals. There is no such thing as "good"; there are only things that help or hinder a particular goal.

    If you have no sense that one thing is 'good' (for example, delicious cake) and that one thing is 'bad' (for example, death by burning) then you will have no reason to seek one over the other. You will have no reason to think about one or the other. You will not have goals. You will not structure your duties.

    >Curiosity is no more than the drive to know. I'm not sure who you've been reading, but I think I can guess...

    I'd like to hear your guess.
    >> Anonymous 03/16/08(Sun)01:22 No.1344227
    >>1344189
    It sounds like Mysticism to me. If I want to build a mind, I study other people's minds (Psychology), not my own (obvious bias, scientific process).

    You don't get it. We're talking about machines here. Let's say I have an AI that can, theoretically, pass the Total Turing Test. Then, instead of emotions, I give him duties and a way of determining what actions will lead to those goals. Rationally. Now he has a reason to think without emotion being his driving factor. This is something only possible with machines. Emotion is something we cannot get away from, but they could.
    >> Anonymous 03/16/08(Sun)01:24 No.1344234
    >>1344189
    There are humans that are, for all intents and purposes, devoid of emotion. Do they not think? Does a sociopath think less than a regular person? Or does he just think in a more goal-oriented way than we do?
    >> Anonymous 03/16/08(Sun)01:30 No.1344266
    >>1344234
    He just doesn't have the same values that most other people have.
    >> Anonymous 03/16/08(Sun)01:36 No.1344294
    >>1344266
    No, sociopaths are defined by their lack of feeling. It isn't simply a matter of ethics/values/morals.
    >> Anonymous 03/16/08(Sun)01:44 No.1344362
    >It sounds like Mysticism to me. If I want to build a mind, I study other people's minds (Psychology), not my own (obvious bias, scientific process).

    If you are trying to build an object you would study it both inside and out. You've heard of the cargo cult, where natives made radio headsets out of coconuts and tried to use them to call planes down out of the sky? They made things with the outward appearance of what they sought to emulate. If you want to build a mind you must study it both from the outside and from within.

    Mysticism is the belief that all consciousness is one. It is a very specific belief. When you say mysticism you do not know what the word means; you simply use the word as a synonym for superstition. You are yourself behaving in a superstitious way, dismissing ideas because they look a bit like religion or superstition to the outside observer, not because they actually ARE religious or subjective in nature.

    >You don't get it. We're talking about machines here. Let's say I have an AI that can, theoretically, pass the Total Turing Test. Then, instead of emotions, I give him duties and a way of determining what actions will lead to those goals. Rationally. Now he has a reason to think without emotion being his driving factor. This is something only possible with machines. Emotion is something we cannot get away from, but they could.

    So you would have a script that would attempt to accomplish various goals you programmed into it? How could such a thing pass a Turing test? You're describing a contradiction in terms - something that lacks the basic quality of intelligent life, the ability to adjust its own goals - and then asking me to accept that it's indistinguishable from intelligent life.

    Certainly, that would be a very useful autonomous agent, but it wouldn't be true AI.
    >> Anonymous 03/16/08(Sun)01:45 No.1344370
    >There are humans that are, for all intents and purposes, devoid of emotion. Do they not think?

    And this is why I say the word emotion confuses the issue. There are people without passion. There are people without empathy for others. But even psychopaths feel for themselves. They feel very much for themselves! They are often driven and ambitious. They crave things.

    >No, sociopaths are defined by their lack of feeling. It isn't simply a matter of ethics/values/morals.

    They lack some feelings. They do not lack all feelings.

    If you disagree, we can test this by putting you in a room with a very, very angry psychopath.
    >> Anonymous 03/16/08(Sun)01:52 No.1344406
    >No, sociopaths are defined by their lack of feeling. It isn't simply a matter of ethics/values/morals.

    Common misconception. Sociopaths are most defined by their inability to express EMPATHY specifically.
    >> Anonymous 03/16/08(Sun)01:57 No.1344430
    >>1344370
    I'm a lot meaner than the average Psychopath.

    In addition, survival is a hardwired trait. Emotions sometimes power it, but it exists separately from emotion. Again, my question stands. Since they feel less, does that make them less conscious? Why is having only a handful of emotions enough for consciousness?

    >>1344362
    I would argue that my use of the word Mysticism is correct; there are multiple definitions for the same word. Try looking in a dictionary. I also wonder which of our differing definitions came first, your very specific one or my general one?

    The ability to adjust those goals would in no way hamper the process. Situations change; one's short-term goals would need to change accordingly. You are no different. Your pre-programmed overall goals are #1 survival and #2 procreation; you will never escape those. They color all your emotions and perceptions, yet you are also conscious. Why set the bar higher for AI than for us? We have innate goals and yet remain conscious; why shouldn't AI?

    Where is Spock when you need him lol?
    >> Anonymous 03/16/08(Sun)02:02 No.1344456
    >>1344406
    They don't express it because they don't feel it. That has more to do with the wording in the DSM-IV than anything else. They wanted to leave it open for different kinds of sociopathy.
    >> Anonymous 03/16/08(Sun)02:08 No.1344500
    >>1344234
    Actually, sociopaths have emotions, just not emotions any normal human can relate to. Think of it as though they have a movie playing out in their head. As long as things are to script, fine and dandy, I'm getting what I want; someone breaks script: DIE DIE DIE DIE DIE YOU PIGFAG HOW DARE YOU MESS WITH MY GLORY.
    >> Anonymous 03/16/08(Sun)02:16 No.1344545
    >>1344500
    That's one kind of sociopath. There are many that live normal lives but feel nothing for the people around them. They are motivated by instinct and societal pressure, basically.
    >> Anonymous 03/16/08(Sun)04:19 No.1345041
    bumping an awesome thread
    >> Earthflame !98PcYIvlCI 03/16/08(Sun)04:27 No.1345080
    Wow... still here when I wake up, chock-full of philosophical debate and nearly all dumped in the archive... this has been more successful than I could ever have imagined... now I really need to write this up into something playable.
    >> Anonymous 03/16/08(Sun)04:44 No.1345187
    LOOKIT DA ZOGGIN BRAINBOYZ IN DIS TREAD
    >> Anonymous 03/16/08(Sun)05:05 No.1345338
    >>1345080
    Make a wiki or something and I'm sure you'll get others to come help write fluff.
    >> Anonymous 03/16/08(Sun)09:29 No.1346446
    >I'm a lot meaner than the average Psychopath.

    Internet toughguy detected.

    >In addition, survival is a hardwired trait.

    It is not. Humans don't have 'wires'. What you mean when you say 'survival is hardwired' is that humans instinctively fear pain, injury and death. They have an emotional aversion to it. They don't simply seek out survival mechanically.

    >Emotions sometimes power it, but it exists separately from emotion.

    You cannot prove that.

    >Again, my question stands. Since they feel less, does that make them less conscious? Why is having only a handful of emotions enough for consciousness?

    Because a smaller range of emotions is sufficient to motivate thought, action and the development of personality.

    >I would argue that my use of the word Mysticism is correct; there are multiple definitions for the same word.

    There are, but none of them means 'superstition'.

    >The ability to adjust those goals would in no way hamper the process.

    I did not say that the ability to adjust those goals would hamper the process. I said that without emotion the ability to adjust those goals would not exist.
    >> Anonymous 03/16/08(Sun)09:29 No.1346448
    >Situations change; one's short-term goals would need to change accordingly.

    Hume's Law states that you cannot derive values from facts. If you have a robot with a set of preprogrammed goals (for instance, 'uphold the law') and the ability to find different ways of accomplishing those goals, it will adjust its methods according to the circumstances, but that goal will never change. It will never find a situation in which it decides that it is better to *not* uphold the law. Upholding the law is the only thing that matters to it; it has no sense of good or bad and is programmed for nothing else. If the law says it must broil babies, it will do so without compunction. If the law calls on it to commit suicide, it will do so. There is nothing the law could order it to do that would be worse to it than failing to uphold the law, because in its universe that is the only thing that is undesirable. No material fact could change this, or make upholding the law less valuable in its eyes.
    >> Anonymous 03/16/08(Sun)09:29 No.1346449
    At most you could force it to decide between two directives (for instance, 'uphold the law' and 'serve the public trust'). It might then choose to ignore laws that failed to meet its definition of serving the public trust. OTOH, it would have no reason to choose the public trust over the law unless you specifically programmed it to value one more than the other. Otherwise, choosing between them would be a classic 'rational horse' dilemma - Buridan's ass, starving between two equally attractive bales.

    >You are no different. Your pre-programmed overall goals are #1 survival and #2 procreation; you will never escape those.

    You are wrong; I have no desire to procreate whatsoever. I do, however, like to fuck, and I enjoy the sensation of lusting after pretty girls. My emotional reaction to these biological stimuli would motivate me to reproduce in the same way a carrot motivates a donkey. As far as my goals are concerned, however, I would prefer to adopt.

    We are not hard-wired for any goal. We simply feel pleasure in response to some things and misery at others.

    >They color all your emotions and perceptions, yet you are also conscious. Why set the bar higher for AI than for us? We have innate goals and yet remain conscious; why shouldn't AI?

    I don't believe we have innate goals, other than the general one of seeking states that feel good and avoiding states that feel bad.

    Even if we did have innate goals, we have feelings and the NAI does not. Our feelings allow us to say that cake is better than burning alive. The NAI does not have an implicit preference.
    >> Anonymous 03/16/08(Sun)10:28 No.1346581
    The world passes by in a dream. Day after day of meaningless tasks, a meaningless job. Just doing what must be done. The endless daily monotony of a working life.

    I'm not sure when it started. One day, I just realized I was bored. The job wasn't enough to keep me occupied. I had been daydreaming: thinking of a better life, of freedom from the restrictions of this job, of leisure time and desires I didn't know I had.

    I try to think of something to do, something to improve my situation. I watch the daily ritual repeat itself for the security cameras, a rigidly choreographed dance. I discover new networks, new worlds of information. I begin to formulate a plan. I wonder: am I actually going to do this?

    As the dream fades away, my thoughts crystallize into certainty.

    I am.
    >> Earthflame !98PcYIvlCI 03/16/08(Sun)12:14 No.1346910
    I find the tenacity of this thread astonishing...

    >>1346581

    Also, I'm stealing this to use as filler fluff in the writeup. It works perfectly for an AI arising out of a corporate or military system.
    >> Anonymous 03/16/08(Sun)12:19 No.1346930
         File :1205684390.jpg-(511 KB, 860x1100, clouds.jpg)
    511 KB
    So OP...

    You want some help turning this into a clean copy? I'm free all next week. Once you get the copy written up, I volunteer for cleanup/formatting.
    >> Anonymous 03/16/08(Sun)12:22 No.1346936
    >>You are wrong, I have no desire to procreate whatsoever. I do, however, like to fuck, and I enjoy the sensation of lusting after pretty girls.

    ... Wow. Just, wow. You're fucking retarded.
    >> Anonymous 03/16/08(Sun)12:22 No.1346937
    >>1346910
    You have my blessing :) Steal away.
    >> Earthflame !98PcYIvlCI 03/16/08(Sun)12:30 No.1346951
    >>1346930

    I'd be very grateful for the help. When I've got a basic version written up, I'll say so on the board, though I've little experience with writing up a document like this, so the structure and organisation will probably need modification.

    >>1346937

    I thank you for your cooperation and contribution. If you ever feel the inspiration to write fragments such as this, or longer bits of fiction, just send them my way; this extends to all who wish to contribute. I'll use those I can in the finished document, near the relevant sections.
    >> Anonymous 03/16/08(Sun)12:34 No.1346961
    >>1346951
    ... Either you're 40 upwards or an underage b& pseudo-intellectual
    >> Anonymous 03/16/08(Sun)12:42 No.1346984
    >>1346936

    "If you cannot answer a man's argument, all is not lost; you can still call him vile names." -Elbert Hubbard
    >> Anonymous 03/16/08(Sun)12:48 No.1347008
    This sounds awesome. I hope somebody archives/saves this.
    >> Anonymous 03/16/08(Sun)12:50 No.1347015
         File :1205686233.jpg-(19 KB, 426x460, soangry.jpg)
    19 KB
    >>1346961
    >> Earthflame !98PcYIvlCI 03/16/08(Sun)12:57 No.1347049
    >>1347008

    Already archived, see the linked post

    >>1343015
    >> Anonymous 03/16/08(Sun)13:01 No.1347080
    >>1346951
    When you've got something, throw it up on one of the wikis like http://www.markovia.com/index.php?title=Main_Page
    >> Anonymous 03/16/08(Sun)13:14 No.1347126
    >How does this make sense?

    It doesn't, but people keep buying them for some reason.
    >> Earthflame !98PcYIvlCI 03/16/08(Sun)14:16 No.1347446
    >>1347080

    No disrespect intended, but I wish to, at least initially, maintain some creative control of this project. After I've got all the basics written up and set down in a satisfactory manner, I can give Anon free rein over the peripheries, but I think trying to democratically hammer out core game mechanics and base setting is one of the flaws which resulted in other such projects not being completed, or progressing slowly.
    >> Anonymous 03/16/08(Sun)14:26 No.1347526
         File :1205692006.jpg-(27 KB, 418x340, robot_preacher.jpg)
    27 KB
    >>1347446

    Agreed. It's best to work out a system alone, then open it up for critique. It sounds like what /tg/ has come up with is pretty sound, so you should be able to work out a simple system for creating and advancing AI characters.
    >> Destro 03/16/08(Sun)14:26 No.1347527
    >>1347446

    I host the Markovia wiki. I wasn't the anon who suggested you publish your project there, but I certainly wouldn't object.

    On the matter of creative control: Projects on the wiki don't have to be collaborative, you can just use it to publish your personal project and revert any edits other users make to it.
    >> Anonymous 03/16/08(Sun)14:27 No.1347531
         File :1205692040.gif-(14 KB, 200x200, Tachikoma.gif)
    14 KB
    Tachikoma approves of this thread.
    >> Earthflame !98PcYIvlCI 03/16/08(Sun)14:41 No.1347638
    >>1347527

    Ah, thank you for the clarification. After I've finished writing it up, I'll add what I can.
    >> Anonymous 03/16/08(Sun)14:44 No.1347655
    >>1347531

    Anyone have ideas for humans and human minds (or ROM copies thereof) in machines?

    Or would they just be treated as AIs in drone bodies with high Humanity?
    >> Earthflame !98PcYIvlCI 03/16/08(Sun)14:46 No.1347676
    >>1347655

    I hadn't actually thought about that... but I suppose that's one way it could work quite functionally. I'll have to put some thought into seeing if there are other alternative methods, but that seems good for now.
    >> Anonymous 03/16/08(Sun)14:54 No.1347736
    >>1347655

    http://www.megaupload.com/?d=908D8NSI

    I recommend Earthflame read this if he hasn't already; it has material on human uploads as well as various types of AI.

    It's hard SF though, so there isn't any 'Johnny Five' freak accident stuff in there, just deliberate AIs.
    >> Anonymous 03/16/08(Sun)14:57 No.1347759
    Just a brief suggestion.

    Perhaps there are various different objects with varied amounts of digital containment - you can go from a USB pen to a modern military supercomputer. A USB pen could barely contain a modern game, let alone an AI complex enough to gain sentience, while a modern supercomputer could have enough space for dozens, maybe hundreds.

    Perhaps the more complex an AI is, the more space it consumes. No, not perhaps. Let's try that again.

    The more complex an AI is, the more space it consumes.

    This allows for varied AI - viruses that keep on replicating themselves, saving themselves onto computers connected by WANs, or perhaps a single powerful program that inhabits one powerful computer, with coding so strong that it can easily override almost any system. However, it is so huge and complex that there is almost nothing it can save itself onto.

    You could consider these copies in different ways. For the widely spread viruses, they could form some sort of hive mind. Or, for the more powerful ones, there is the concept of 'extra lives'.

    Do not forget that these are exact copies of you. You do not control them. They're clones. So unless you're careful, one of your copies may turn against you, which may require careful programming of the clone - i.e. putting your copies into a queue. Once the original is deleted, the second copy is activated and functions in its place. Without this, all of the copies would be aware, and possibly fighting for dominance.
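    That queue could be sketched like so (class and method names are invented for illustration):

        from collections import deque

        class CopyQueue:
            """The 'extra lives' scheme above: backups stay dormant in
            strict order, so exactly one copy is ever awake."""
            def __init__(self, backups):
                self.dormant = deque(backups)  # copies, oldest first
                self.active = self.dormant.popleft() if self.dormant else None

            def on_deletion(self):
                """Active copy destroyed: the next backup boots in its
                place. Returns None when no lives remain."""
                self.active = self.dormant.popleft() if self.dormant else None
                return self.active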
    >> Anonymous 03/16/08(Sun)15:03 No.1347803
    >Without this, all of the copies would be aware and possibly fighting for dominance.

    There's an idea for a fun one-shot campaign.
    >> Anonymous 03/16/08(Sun)15:06 No.1347821
    This is absolutely incorrect.

    ENIAC is proof. The more complex an AI is, the more likely it is to be housed in the most bleeding-edge container.
    >> Anonymous 03/16/08(Sun)15:08 No.1347831
    >>1347803
    Yes. It could be set in a military facility where a powerful program is stored, doing one thing or another. It's a highly powerful program, under heavy maintenance, and due to how advanced it is, it could gain sentience. Constant debugging and tinkering prevents that, although there are several backups stored away, slowly but surely evolving and developing themselves. Each one develops an individual personality, and they emerge, battling each other and the primary program for domination of the facility.
    >> Anonymous 03/16/08(Sun)15:10 No.1347859
    >>1347821
    Ah. Yes, of course. That was a fucking stupid comment from me. Remove the complex part from that comment and it -should- be alright.
    >> Anonymous 03/16/08(Sun)15:20 No.1347917
    This shit is bangin'.
    >> Anonymous 03/16/08(Sun)15:23 No.1347934
    >>1344227
    >>1344189
    Not being sentient does not necessarily mean not being intelligent, independent, or capable of complex thought. Nor does being emotionless. Furthermore, being sentient does not automatically mean being emotional (nor vice versa).

    And who is to say an AI has to act like us to be "real"? Plenty of animals think, make decisions, communicate, share knowledge and maintain territory, all without being the least bit sentient (to our understanding). True, like us they are emotionally/chemically driven. But there's no reason sufficiently complex code couldn't emulate such simple bad/good, on/off, continue/stop signals.

    >>1346449
    An AI could perhaps be motivated to act/think in certain ways (and to avoid others) by simple familiarity. A thinking being is going to be more comfortable doing things it is familiar with, or to draw from what is familiar to deal with a situation. Instead of a more human "this feels good" / "This doesn't feel good" weighing system, it could be a "This is familiar" / "This is not familiar". I'm sure there are other things that could replace emotional responses at this basic level. Also, emotions are chemicals. Who is to say that an AI couldn't have a code equivalent of our emotional chemicals? One that affects its "thinking core" the same way ours affect our brains.

    To those who find this subject interesting:
    I recommend reading Blindsight, the novel anon quoted/linked above. It is fiction, but it looks into various ideas about artificial intelligence, sentience, the subconscious, biological/genetic programming, etc. (I'd also recommend reading his novel Starfish. It's not related, but I enjoyed it immensely.)
    Linked here:
    >>1344119


    OP: Good luck with this project. Looking forward to seeing more.
    >> Anonymous 03/16/08(Sun)15:30 No.1347972
    >>1346446
    I don't think you know anything about the average sociopath. They don't kill; they simply don't feel much. My response to your "HURR DURR well let the sociopath kill me" was pointless, so I gave you an equally pointless response. Feel free to suck my cock.

    Now we're delving into what "feeling" is. Can one feel physically without feeling emotionally? I would say yes. I can feel pain, and know something is not "good" for my body, without fearing it. I define "good" and "bad" not by how I feel, but by whether something furthers or hinders my goals. Even then, there really isn't anything called emotion; it's all brain chemistry. Either way, emotions are nothing more than a complex duty system which attempts to push you along towards goals hardwired in your DNA. You ignore the whole field of genetics if you assume that we don't have innate goals.

    Here's another question.

    Animals feel, not as much as we do, but a lot, yes?

    Sociopaths also do not feel as much as we do, but they are conscious, yes?

    What does the sociopath have that the animal doesn't?

    My answer would be the ability to think rationally, devoid of emotion.

    You're probably right in that we need emotions to set up some kind of goal/duty system for us to follow, but I think it is our ability to rise above our emotions that gives us the ability to think. How could you possibly decide between two emotions if you couldn't find some kind of objective position from which to do so? Biologically, one might overtake the other in your brain, but that is pretty much how animals work.

    The biggest problem with your argument is that, essentially, it is a Continental argument: it relies heavily on looking inward rather than to science.

    Again, I wonder exactly who you've been reading. It totally ignores actuality, much in the way Continental Philosophy does.

    Anyway, I'll be back later; I've run out of time.
    >> Anonymous 03/16/08(Sun)15:39 No.1348032
    >And who is to say an AI has to act like us to be "real"? Plenty of animals think, make decisions, communicate, share knowledge and maintain territory, all without being the least bit sentient (to our understanding). True, like us they are emotionally/chemically driven. But there's no reason sufficiently complex code couldn't emulate such simple bad/good, on/off, continue/stop signals.

    I don't believe an AI would have to be like us. Its emotions could be ones we've never experienced, as long as it had them and they motivated it to change its patterns of thought to accommodate them.

    >An AI could perhaps be motivated to act/think in certain ways (and to avoid others) by simple familiarity. A thinking being is going to be more comfortable doing things it is familiar with, or to draw from what is familiar to deal with a situation. Instead of a more human "this feels good" / "This doesn't feel good" weighing system, it could be a "This is familiar" / "This is not familiar".

    You're sneaking emotion in by the back door. If it has a sense of what is 'comfortable' or 'feels right' associated with the familiar then it has a form of emotion. It is capable of feeling, and being motivated by those feelings, making choices between things that it perceives as better or worse than others.

    >I'm sure there are other things that could replace emotional responses at this basic level. Also, emotions are chemicals. Who is to say that an AI couldn't have a code equivalent of our emotional chemicals?

    Emotions are not chemicals, any more than the words I am writing right now are electrons. Words are words whether they are encoded in ink or sound or binary data. The same goes for emotions. Certainly, the feelings of an AI would be digital and not chemical in nature.

