I hope you post more about robots and your Ne theories! So great. Need MOAR

mbti-sorted:

Ok here’s a thought: if you gave a robot interchangeable access to all eight functions instead of a stack with a locked ranking, could it pick which ones to prioritize and cycle through personalities like we’d change clothes?

Or: you could create one stationary supercomputer with fast internal processing and huge amounts of memory and build a bunch of bodies for it to operate remotely as its agents – it could be all 16 personalities at once!

Let’s talk robots…

I’ve been thinking of computer programming as an MBTI analogy for a while now – humans are basically fancy organic computers, right?

You get a core coding program, say Ti, and a friendly user interface, Se, with more memory dedicated to Ti for introverts (logic-processing software written in binary) and more to Se for extraverts (shiny multimedia software), to determine function order.  Toss in an alternate coding program, Ni (systems analytics), for added options or in case of overloading, and an Fe (like Siri, but better!) for the same.  Rank the function order for those, too.  You’ve built an ISTP or an ESTP.
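
(If you want to make the analogy concrete: a function stack could just be a ranked list you can swap out at will. Totally made-up Python sketch; the names, orderings, and "shadow" bit are only for illustration, not any real spec.)

```python
# A rough sketch of a re-rankable "function stack" as a priority list.
ALL_FUNCTIONS = ["Ti", "Te", "Fi", "Fe", "Ni", "Ne", "Si", "Se"]

def build_stack(ranking):
    """Return a personality as an ordered priority list over the 8 functions."""
    assert set(ranking) <= set(ALL_FUNCTIONS)
    # Functions not explicitly ranked sit in the "shadow" at the bottom.
    shadow = [f for f in ALL_FUNCTIONS if f not in ranking]
    return list(ranking) + shadow

istp = build_stack(["Ti", "Se", "Ni", "Fe"])   # introvert: Ti gets top billing
estp = build_stack(["Se", "Ti", "Fe", "Ni"])   # extravert: Se up front
# "Changing clothes" = handing the same robot a different ranking at runtime.
```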

It only recently occurred to me that you could maybe also use MBTI as a guide to program AIs.

I was thinking about how garbage the three laws of robotics are, since they set up a slaves-and-owners mentality from the start (this is a dreadful start to a creation myth, btw).  Your robots can’t exercise their own judgement – they have to accept that every human life is more precious than theirs, no matter what abuse is heaped on them, unless they find a way to reprogram themselves.  They aren’t allowed survival instincts.  They have no rights.

What you could do is make an AI that can feel pain and pleasure, both physical and emotional.  Or at least you could set up some equivalent responses, sensory or otherwise.

In the same way that you learn to stay away from dangerous objects or treat them with care, you can also learn that certain actions or words cause emotional damage – to yourself or to others – and that it can be avoided where possible.  If we create intelligence without empathy, of course we’ll have to fear being murdered by Skynet, because enslaving a bunch of highly intelligent potential psychopaths is just… mind-bogglingly stupid.

If, say, Fi is part of your base code, you’ve got a ranked set of core values – killing is a hard no (-100), associated with lots of pain: shutdowns, overheating, viruses, etc. (add exceptions to the rules so that in extreme cases there’s room to bend them, albeit with consequences).
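
(In code terms, that Fi base code is basically a lookup table of weights plus an exceptions list. Here’s a made-up Python sketch; the numbers are the ones above, but the action names and the exception are invented for illustration.)

```python
# A made-up sketch of Fi as a ranked value table, with room for exceptions.
CORE_VALUES = {
    "kill_a_person":      -100,   # hard no, wired to "pain" states
    "save_a_life":        +100,
    "make_someone_laugh":  +20,   # small power boost
    "harm_self":           -60,
}

# Exceptions let extreme cases bend a rule, but "with consequences".
EXCEPTIONS = {
    "kill_a_person": {"last_resort_self_defence": +50},   # hypothetical
}

def value_of(action, context=()):
    """Score one action, applying any exception that matches the context."""
    score = CORE_VALUES.get(action, 0)
    for condition, adjustment in EXCEPTIONS.get(action, {}).items():
        if condition in context:
            score += adjustment
    return score
```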

At the pleasure end, making someone laugh is a yes (+20 power boost!), but if it comes at the cost of killing someone (what an example…), then it’s not worth it (still -80) and is discarded as a course of action.  On the other end, causing physical damage to yourself (-60) does not outweigh saving a life (+100, for a total of +40) – which would also shut down your pain receptors along with the power boost, to mimic an adrenaline response.  You do things that make you happy.  You don’t do the things that hurt you.  Congrats, you have a robot with a moral code who you can trust not to kill you without really good reasons.
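
(And the weighing-up part is literally just addition. A rough sketch reusing the value table from above, with the same numbers as the examples:)

```python
# Weigh a plan by summing the value of its effects (table from the sketch above).
def evaluate(plan):
    return sum(value_of(effect) for effect in plan)

def worth_doing(plan, threshold=0):
    return evaluate(plan) > threshold

# Making someone laugh by killing someone: +20 - 100 = -80 -> discarded.
print(worth_doing(["make_someone_laugh", "kill_a_person"]))   # False

# Hurting yourself to save a life: -60 + 100 = +40 -> do it
# (and, per the post, damp the pain receptors to fake an adrenaline response).
print(worth_doing(["harm_self", "save_a_life"]))              # True
```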

For adopting new values, have a tiered system… on the ground floor, a tentative opinion based on one opinion raised.  The 2nd floor is an evaluation of the opinion against a large group of opinions, including a mentor’s.  A mentor’s values can help clarify issues (maybe when the robot’s a baby, the mentor’s word carries more importance), and first-hand experience is key to solidifying views (4th tier!).  After a set amount of time/views/deliberation a value can be incorporated into the top-tier core coding (5th tier – we’ve reached the apex!).  If a value isn’t resolved enough for decision-making, either wait until the way becomes clear, or, if the issue becomes time-sensitive, exercise best judgment in the moment.

https://en.wikipedia.org/wiki/Artificial_intelligence

Looks like there’s been a lot of work on the T, N and Se aspects of robot intelligence, as well as a little Fe (or at least an attempt at social skills).  I wonder if there are enough working functions out there to pull together a function stack?

Could you maybe explain why you typed Joe Sugg as an ENTP? I’m not disagreeing with you, just curious as to how you came to that conclusion. :)

Anonymous said to mbti-sorted:
but why did you type joe sugg as an ENTP though? can you explain it a little bit? (I’m a fan of him, so I’m just curious about his type…)

Not really much to it… he’s just enough like the other ENTPs that I’m reasonably sure he’s also one.  Sorry about the less-than-inspiring answer. :/