Metro State Atheist Joel Guttormson on BEL


AlfredTuring

New member
Please share, I'm sure everyone is dying to know, just what is our "programming"?



It makes for good science fiction. :idunno:



Again, the world is dying to know, what is the "hardware" and "software" of the mind and how did you come to discover it?


[A fully grown person is spontaneously generated; she has no personal history, but a fully developed mind and all of the knowledge of human science and engineering. She is unaware that she is a human herself, and she is given the task:]

"Observe these entities before you; they shall henceforth be referred to as humans. Pay particular attention to their communications and interactions; and observe the products of their operation; and, also, note the processes by which they are produced, what happens to their remains when they cease to function, and of which materials they are constituted. Then, using what knowledge you have, assign them a classification."

[After speaking with countless humans and viewing the canon of their media]

"Sir, it appears that humans are extremely advanced organic objects whose operation is centered around an analog to traditionally silicon based multiple-input, multiple-output transducers. In the case of humans, this transducer -- which I'll call a brain -- receives input messages, transforms them -- modifying itself in the process, and then outputs a separate quantity of messages. The input messages come from signals both within the brain itself and from a complex communications network spread throughout most of the rest of the body. When the signals are sent outward to this network, they induce the functioning of large portions of the remainder of the body. Autonomous subsystems exist which are independent of the brain and its connected communications system."

"So what is their classification?"

"Sir?"

"Are they mechanical!?"

"Sir, of course. Sir."

"And you didn't run into any... troubles?"

"Sir, they do say a number of things about themselves which have no counterpart in reality."

"I see. But there do seem to be some aspects which are difficult to account for, right?"

"Sir?"

"Well, what about free will? for example. We make -- I mean, they make decisions, right?"

"Sir, there has been no indication that humans are exempt from the logically consistent physical laws which govern the behavior of all matter. Humans make what they call decisions in the same way as computers; they only view them as 'decisions', rather than the playing out of physical law, because of a bias in their interpretation of data."

"But what of subjective experience!?"

"My access to this topic is limited; but tentatively, it appears that a complex self-observing system would classify its own data as meaningful, sir. This appears to be, at least in part, where their aforementioned bias in interpretation comes from."

[Silence]

"Sir -- if I may: I'm curious as to why you even ask these questions after suggesting I note their origins. They are a branch of development of matter through evolutionary processes, like huge numbers of other similar entities. Why should this branch be different from the others? Sir, are you still there?"

"Leave. Please, leave me."

***************************************************
That's my viewpoint. For further justification, please look at the work done in cybernetics, neuroscience, cognitive science, philosophy of consciousness, artificial intelligence, and maybe evolutionary psychology/sociobiology.
 

AlfredTuring

New member
Thanks chatmaggot. I actually did find a transcription of the debate -- and read about halfway through -- but at that point found that I disagreed with both participants. I disagreed with the Christian all the way through; and once the atheist fellow tried describing Zen koans as "extra-logical" -- which is a misunderstanding -- I was done. A person neglecting to use logic, or obfuscating its usage, does not change its nature. It certainly would've been interesting to hear the Christian participant against a more able opponent though... I liked the topics they addressed.
 

AlfredTuring

New member
Hmm, well, thoughts on what? I could have alternatively said, "those are my thoughts" -- it's the same thing in this context. Stripe sarcastically said he was dying to hear this and that about why I thought minds are what they are, so I came up with a hypothetical example that would demonstrate my understanding of the state of things -- although, instead of stating it as such, I just wrote out the scenario as a dialogue so that I could re-use it elsewhere. In more concise terms, my view is: there aren't any 'issues' in describing the operation of the human mind. To be sure, not all of the work is done; but there are tons of people working on it, fruitfully. The brain is not a completely foreign object; it has its analogs elsewhere. The only reason we can't claim a 'full understanding' of it is because it is an immensely complex system ('A piece of your brain the size of a grain of sand would contain one hundred thousand neurons, two million axons and a billion synapses, all "talking to" each other. Given these figures, it's been estimated that the number of possible brain states - the number of permutations and combinations of activity that are theoretically possible - exceeds the number of elementary particles in the universe' -- from V. S. Ramachandran and Sandra Blakeslee); however, this is far from saying that we don't understand what it is. For example, we have mentally interactive prosthetic limbs; they are literally mind-controlled. I can't find the article I'm looking for atm, but this will do: http://www.sciam.com/article.cfm?id=putting-thoughts-into-action

Unless you are asking me to demonstrate the physical nature of thoughts. It's the same as asking to look at the physical nature of a computer's memory; it's there in a pattern of transistor states, but it's hard to look at.
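To make that concrete, here's a tiny sketch (just an illustration I'm making up here, with an arbitrary value): the stored 'memory' really is physically there as a pattern of on/off states, and you can dump it, but the raw pattern by itself tells you very little about what the program means by it.

******************
/* Toy sketch: a value sitting in a computer's memory is just a pattern
   of bits.  Dumping the raw bytes shows the pattern is there, but
   staring at it tells you little about what it "means". */
#include <stdio.h>
#include <string.h>

int main(void)
{
    double stored_value = 3.14159;          /* some arbitrary stored state */
    unsigned char bytes[sizeof stored_value];

    memcpy(bytes, &stored_value, sizeof stored_value);

    for (size_t i = 0; i < sizeof stored_value; i++) {
        for (int b = 7; b >= 0; b--)        /* print each byte bit by bit */
            putchar((bytes[i] >> b) & 1 ? '1' : '0');
        putchar(' ');
    }
    putchar('\n');
    return 0;
}
******************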
 

Stripe

Teenage Adaptive Ninja Turtle
LIFETIME MEMBER
Hall of Fame
Hmm, well, thoughts on what? I could have alternatively said, "those are my thoughts" -- it's the same thing in this context. Stripe sarcastically said he was dying to hear this and that about why I thought minds are what they are, so I came up with a hypothetical example that would demonstrate my understanding of the state of things -- although, instead of stating it as such, I just wrote out the scenario as a dialogue so that I could re-use it elsewhere. In more concise terms, my view is: there aren't any 'issues' in describing the operation of the human mind. To be sure, not all of the work is done; but there are tons of people working on it, fruitfully. The brain is not a completely foreign object; it has its analogs elsewhere. The only reason we can't claim a 'full understanding' of it is because it is an immensely complex system ('A piece of your brain the size of a grain of sand would contain one hundred thousand neurons, two million axons and a billion synapses, all "talking to" each other. Given these figures, it's been estimated that the number of possible brain states - the number of permutations and combinations of activity that are theoretically possible - exceeds the number of elementary particles in the universe' -- from V. S. Ramachandran and Sandra Blakeslee); however, this is far from saying that we don't understand what it is. For example, we have mentally interactive prosthetic limbs; they are literally mind-controlled. I can't find the article I'm looking for atm, but this will do: http://www.sciam.com/article.cfm?id=putting-thoughts-into-action

Unless you are asking me to demonstrate the physical nature of thoughts. It's the same as asking to look at the physical nature of a computer's memory; it's there in a pattern of transistor states, but it's hard to look at.

If a brain works in much the same way a computer does then you should have no problem describing exactly how the thinking process works. Please explain.
 

AlfredTuring

New member
"The thinking process" is extremely vague, and involves consciousness -- which I've already described as the last remaining difficult component; and I've stated that I wouldn't bother attempting to describe this, I don't fully understand it (no one does -- and it's questionable that we ever will). But again: not knowing all the details of its operation is far from not knowing what it is.

There are many more particular questions about mind which can be answered simply. The question of how thoughts could be stored, for example, is answerable, and simple: they are stable patterns in neurons; this is analogous to stable patterns in transistors. To show physical evidence of the validity of this analogy, I linked to an article which demonstrates computer hardware interacting with the human mind.
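If it helps, here's a toy sketch of the kind of thing I mean by 'stable patterns' (a bare-bones Hopfield-style network, purely an illustration, not a claim about how actual neurons are wired): a pattern is 'stored' in the connection weights, and the network then falls back into that pattern even when started from a corrupted version of it.

******************
/* Toy Hopfield-style network: one pattern "stored" in the connection
   weights becomes a stable state the network falls back into. */
#include <stdio.h>

#define N 8

int main(void)
{
    /* the pattern to store, as +1/-1 unit states */
    int stored[N] = {  1, -1, 1, 1, -1, -1,  1, -1 };
    /* a corrupted starting state (two units flipped) */
    int state[N]  = { -1, -1, 1, 1, -1, -1, -1, -1 };
    double w[N][N];

    /* Hebbian-style weights: units active together get a positive link */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            w[i][j] = (i == j) ? 0.0 : stored[i] * stored[j];

    /* repeatedly update each unit from its weighted inputs */
    for (int pass = 0; pass < 5; pass++)
        for (int i = 0; i < N; i++) {
            double sum = 0.0;
            for (int j = 0; j < N; j++)
                sum += w[i][j] * state[j];
            state[i] = (sum >= 0) ? 1 : -1;
        }

    /* the network has settled back onto the stored pattern */
    for (int i = 0; i < N; i++)
        printf("%+d ", state[i]);
    printf("\n");
    return 0;
}
******************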

Be more specific. And you must show some effort on your part before I invest much more on my own. What do you think of that article, for example? I really am curious.
 

AlfredTuring

New member
Actually it's quite a bit longer than I was thinking. Just the first page if you don't want to read it all. What do you think of it?
 

Stripe

Teenage Adaptive Ninja Turtle
LIFETIME MEMBER
Hall of Fame
"The thinking process" is extremely vague, and involves consciousness -- which I've already described as the last remaining difficult component; and I've stated that I wouldn't bother attempting to describe this, I don't fully understand it (no one does -- and it's questionable that we ever will). But again: not knowing all the details of its operation is far from not knowing what it is.

There are many more particular questions about mind which can be answered simply. The question of how thoughts could be stored, for example, is answerable, and simple: they are stable patterns in neurons; this is analogous to stable patterns in transistors. To show physical evidence of the validity of this analogy, I linked to an article which demonstrates computer hardware interacting with the human mind.

Be more specific. And you must show some effort on your part before I invest much more on my own. What do you think of that article, for example? I really am curious.
So your insistence that the human brain works in much the same way as a computer is based on a loose similarity regarding how memory is stored....
 

AlfredTuring

New member
Nope. That's just one aspect that's easy to point out. Another important aspect is how logical functionality can be embodied in circuitry. But: you've given me no motivation to continue; you're obviously not interested, and you're just trying to get me to run into a stumbling block rather than put any real thought into what I've said. I understand your behavior; it makes sense if you think you already have the answer to how mind works. Unfortunately, theories of spirit will never help paralyzed kids regain their ability to speak. Still waiting for your thoughts on the article.
 

Stripe

Teenage Adaptive Ninja Turtle
LIFETIME MEMBER
Hall of Fame
Nope. That's just one aspect that's easy to point out. Another important aspect is how logical functionality can be embodied in circuitry. But: you've given me no motivation to continue; you're obviously not interested, and you're just trying to get me to run into a stumbling block rather than put any real thought into what I've said. I understand your behavior; it makes sense if you think you already have the answer to how mind works. Unfortunately, theories of spirit will never help paralyzed kids regain their ability to speak. Still waiting for your thoughts on the article.

Dude. I have no problem admitting that men can look at a problem and apply logic, build machines and overcome more and more difficult barriers. But you're going to need some mighty fine EVIDENCE if you want to insist that a computer is much the same thing as a human mind.

I can take the position of pretending to know the answer because what you're presenting is SCIENCE FICTION. You'd get much the same response if you suggested we might be travelling back in time sometime in the future.

The simple facts are these:

Computers are built by people who understand their application.
Computers can only ever respond according to their programming.
People are never entirely bound to a single fate by the data they receive.

Here's a little thought experiment for you to try and get around:

A person, given enough information, time and expertise, will always be able to determine exactly how a computer will respond to given parameters. A person will always have the capacity to compel a computer to respond in a specific fashion.

However there will never come a time when a person is bound to respond exactly as a computer predicts.

Can you explain, given you believe the human mind is just a computer, how this simple thought experiment is not true?
 

Shru

New member
Dude. I have no problem admitting that men can look at a problem and apply logic, build machines and overcome more and more difficult barriers. But you're going to need some mighty fine EVIDENCE if you want to insist that a computer is much the same thing as a human mind.

I can take the position of pretending to know the answer because what you're presenting is SCIENCE FICTION. You'd get much the same response if you suggested we might be travelling back in time sometime in the future.

The simple facts are these:

Computers are built by people who understand their application.
Computers can only ever respond according to their programming.
People are never entirely bound to a single fate by the data they receive.

Here's a little thought experiment for you to try and get around:

A person, given enough information, time and expertise, will always be able to determine exactly how a computer will respond to given parameters. A person will always have the capacity to compel a computer to respond in a specific fashion.

However there will never come a time when a person is bound to respond exactly as a computer predicts.

Can you explain, given you believe the human mind is just a computer, how this simple thought experiment is not true?
I'm pretty sure he already had several answers to this, starting all the way back on page one. Whether or not you chose to read them, the point stands that you're trying to argue something on a complete tangent.

I believe this was in reference to Knight's post, and I've highlighted the areas of interest for you.
...but seriously, you can't expect me to pay too close attention to someone who is blatantly ignoring my own comments. Your first sentence was "Computers are not sentient." -- this was supposed to be a contradiction to something I said, if I'm not mistaken. However, I don't recall imputing sentience, or consciousness, to computers (read: I didn't do this). My actual stance on that matter (which was never brought up by me) is that sentience is the last remaining 'difficult' issue in explaining components of the human mind.
and another one which was in direct response to your own post.
"The thinking process" is extremely vague, and involves consciousness -- which I've already described as the last remaining difficult component; and I've stated that I wouldn't bother attempting to describe this, I don't fully understand it (no one does -- and it's questionable that we ever will). But again: not knowing all the details of its operation is far from not knowing what it is.
He's simply showing the ways parts of a human brain function like the parts of a computer. And if you actually gave his previous examples any weight, you'd realize that it's a matter of actually comprehending what makes us conscious before we'd have the ability to replicate sentience in computers.
 

Servo

Formerly Shimei!
LIFETIME MEMBER
Heretic! Don't you know that water baptism for computers passed away several decades ago? :banned:

Yes, but today it is a symbol of the dedication of the computer to serv(er). Sprinkling of water is ok, but a dunk in the lake shows more dedication. It electrifies the CPU! Glory!
 

AlfredTuring

New member
Dude. I have no problem admitting that men can look at a problem and apply logic, build machines and overcome more and more difficult barriers. But you're going to need some mighty fine EVIDENCE if you want to insist that a computer is much the same thing as a human mind.

I can take the position of pretending to know the answer because what you're presenting is SCIENCE FICTION. You'd get much the same response if you suggested we might be travelling back in time sometime in the future.

The simple facts are these:

Computers are built by people who understand their application.
Computers can only ever respond according to their programming.
People are never entirely bound to a single fate by the data they receive.

Here's a little thought experiment for you to try and get around:

A person, given enough information, time and expertise, will always be able to determine exactly how a computer will respond to given parameters. A person will always have the capacity to compel a computer to respond in a specific fashion.

However there will never come a time when a person is bound to respond exactly as a computer predicts.

Can you explain, given you believe the human mind is just a computer, how this simple thought experiment is not true?

Well, I wasn't going to respond, but I'm bored :) And you have gotten more specific at least.

"Computers are built by people who understand their operation"

This simply isn't true. Computers may very easily exhibit behavior that is not understood by their programmer. A very simple case is that someone could write chess software that they don't know how to beat. While playing they have no idea what move the computer will make next.
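To make that concrete without pasting a whole chess engine, here's a toy sketch of the same kind of thing (a made-up "take 1-3 counters, whoever takes the last one wins" game, nothing to do with any particular chess program): the author writes the search rule, but to know which move the program will actually pick from a given position, even the author has to run it.

******************
/* Toy sketch of a game-tree search ("take 1-3 counters, last counter
   wins").  The author writes the rule, not the answers; which move the
   program picks from a position is found out by running the search. */
#include <stdio.h>

/* value of the position for the player to move: +1 win, -1 loss */
static int value(int counters)
{
    if (counters == 0)
        return -1;               /* the previous player took the last one */
    int best = -1;
    for (int take = 1; take <= 3 && take <= counters; take++) {
        int v = -value(counters - take);
        if (v > best)
            best = v;
    }
    return best;
}

/* the move the program actually chooses from a given position */
static int best_move(int counters)
{
    int best = -2, move = 1;
    for (int take = 1; take <= 3 && take <= counters; take++) {
        int v = -value(counters - take);
        if (v > best) { best = v; move = take; }
    }
    return move;
}

int main(void)
{
    for (int n = 1; n <= 10; n++)
        printf("from %2d counters the program takes %d (value %+d)\n",
               n, best_move(n), value(n));
    return 0;
}
******************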

A better example, I think, can be shown through fractals. Below is an image of a data set generated by a simple algorithm. The simple algorithm is below the image. Check the image out -- I'll continue from beneath the two items.

http://local.wasp.uwa.edu.au/~pbourke/fractals/mandelbrot/c78125.gif
******************
/* Escape-time iteration for one point (x0, y0) of the Mandelbrot set.
   Returns the iteration count at which the orbit escapes, or imax if it
   stays bounded (or settles onto a fixed point). */
#include <math.h>

long Iterate(double x0, double y0, long imax)
{
    double x = 0, y = 0, xnew, ynew;
    int samecount = 0;
    long i;

    for (i = 0; i < imax; i++) {
        /* z -> z^2 + c, with z = x + iy and c = x0 + i*y0 */
        xnew = x * x - y * y + x0;
        ynew = 2 * x * y + y0;
        if (xnew * xnew + ynew * ynew > 4)   /* escaped: |z| > 2 */
            return i;
        if (fabs(xnew - x) < 0.0000001 && fabs(ynew - y) < 0.0000001)
            samecount++;                     /* orbit has stopped moving */
        else
            samecount = 0;
        if (samecount > 10)
            return imax;                     /* treat as inside the set */
        x = xnew;
        y = ynew;
    }

    return imax;
}
******************

The image is a glimpse of the organized complexity generated by that simple algorithm; there's much more to the output -- not much more to the code that generated it. No person can even have full knowledge of the form generated by the algorithm -- the only limit to its extent is the machine's memory. It's at least theoretically possible for someone to write code like that on a machine, trying to accomplish one thing, and end up achieving something else useful and incomprehensible to the person who created it. This is the nature of the machine. You get it started and it's off on its own.

Another great example is using genetic algorithms to produce, for example, vehicles that are operational in a physics simulation. The person writing the code for this has no idea what the vehicle produced will end up looking like or what its arrangement of parts will be; if they knew that, there'd be no point in writing this code in the first place. Another thing you can do with genetic algorithms is use them to produce yet more code (this is called genetic programming, I believe); the programmer really has no idea what he's going to produce at this point.
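For a flavor of how that works (a bare-bones toy GA I'm sketching here, not the vehicle-evolving software itself): the programmer writes only the scoring rule and the mutate-and-select loop; the bit strings that come out the other end aren't written by anybody.

******************
/* Bare-bones genetic algorithm sketch: evolve bit strings toward a
   scoring rule.  The programmer writes the loop and the score, not the
   strings that come out of it. */
#include <stdio.h>
#include <stdlib.h>

#define POP  20
#define LEN  32
#define GENS 200

static int score(const int *g)              /* fitness: count of 1 bits */
{
    int s = 0;
    for (int i = 0; i < LEN; i++) s += g[i];
    return s;
}

int main(void)
{
    int pop[POP][LEN];
    srand(42);

    for (int p = 0; p < POP; p++)            /* random starting population */
        for (int i = 0; i < LEN; i++)
            pop[p][i] = rand() % 2;

    for (int gen = 0; gen < GENS; gen++) {
        int best = 0;                        /* find the current best */
        for (int p = 1; p < POP; p++)
            if (score(pop[p]) > score(pop[best])) best = p;

        for (int p = 0; p < POP; p++) {      /* next generation: mutated copies */
            if (p == best) continue;
            for (int i = 0; i < LEN; i++) {
                pop[p][i] = pop[best][i];
                if (rand() % 100 < 3)        /* 3% mutation rate */
                    pop[p][i] = 1 - pop[p][i];
            }
        }
    }

    int best = 0;
    for (int p = 1; p < POP; p++)
        if (score(pop[p]) > score(pop[best])) best = p;
    printf("best score after %d generations: %d / %d\n",
           GENS, score(pop[best]), LEN);
    return 0;
}
******************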

Of course, as you say "Computers can only ever respond according to their programming." This is true. However "People are never entirely bound to a single fate by the data they receive." is conceivably untrue. Earlier I posted a quote from V.S. Ramachandran giving an estimate of the total number of possible brain states in a given person. The number is larger than the number of fundamental particles in the known universe. Refer to my previous post for exact quote. The computer being finite is what allows us to be able to theoretically determine its output for any given set of inputs. The human mind is also finite -- vast in its complexity, but finite.

As for the scenario you've given me to solve, it may or may not be practicable -- I can't determine the future. However, by what I've said in the preceding paragraph, it's certainly possible. As long as the human mind is finite and material, other finite material constructions with similar, or enhanced even, capabilities may be developed. This is one aspect of the field of artificial intelligence. If we develop a machine smarter than ourselves, we will have lost the capability to compel it to behave in a precisely formulated, particular fashion -- so your scenario falls apart. Of course this all sounds outrageous, but your sci-fi sounding proposition requires a sci-fi sounding answer. It is all theoretically possible from what we know about mind and machine though.

The Turing test has already had smaller victories. Think about video games: You can play a first person shooter against a computer controlled character for a substantial period of time, thinking you are playing a networked game, and be tricked into thinking that you are playing against a human. I've had moments playing online where I was like, "is that even... a real player? Is that a bot? Who is this guy?" and have it turn out to be a computer controlled player. This computer controlled player is running around a simulated 3d landscape, shooting, collecting items, evading bullets, possibly working with a team of other so called "bots" -- all in a manner unpredictable to the person interacting with it. This is a rudimentary example, but artificial intelligence is not sci-fi.
 

Stripe

Teenage Adaptive Ninja Turtle
LIFETIME MEMBER
Hall of Fame
Well, I wasn't going to respond, but I'm bored :) And you have gotten more specific at least.

"Computers are built by people who understand their operation"

This simply isn't true. Computers may very easily exhibit behavior that is not understood by their programmer. A very simple case is that someone could write chess software that they don't know how to beat. While playing they have no idea what move the computer will make next.

Of course the programmer understands the behaviour. Not being able to beat a computer at chess is no more evidence for a computer having a mind than a car being able to travel faster than a man is evidence that a car has a mind.

The image is a glimpse of the organized complexity generated by that simple algorithm; there's much more to the output -- not much more to the code that generated it. No person can even have full knowledge of the form generated by the algorithm -- the only limit to its extent is the machine's memory. It's at least theoretically possible for someone to write code like that on a machine, trying to accomplish one thing, and end up achieving something else useful and incomprehensible to the person who created it. This is the nature of the machine. You get it started and it's off on its own.

And given enough time, information and capability a person can always predict the next level of output from the machine.

Another great example is using genetic algorithms to produce, for example, vehicles that are operational in a physics simulation. The person writing the code for this has no idea what the vehicle produced will end up looking like or what its arrangement of parts will be; if they knew that, there'd be no point in writing this code in the first place. Another thing you can do with genetic algorithms is use them to produce yet more code (this is called genetic programming, I believe); the programmer really has no idea what he's going to produce at this point.

Of course he does. A guy writing in COBOL might have no idea about the machine code that the compiler will produce, but that does not mean the programmer has no idea what he's going to produce. I realise that "genetic code" might be more complex than my example, but it is the same thing. A computer built to convert programming code into more machine-friendly code.

Of course, as you say "Computers can only ever respond according to their programming." This is true. However "People are never entirely bound to a single fate by the data they receive." is conceivably untrue. Earlier I posted a quote from V.S. Ramachandran giving an estimate of the total number of possible brain states in a given person. The number is larger than the number of fundamental particles in the known universe. Refer to my previous post for exact quote. The computer being finite is what allows us to be able to theoretically determine its output for any given set of inputs. The human mind is also finite -- vast in its complexity, but finite.

Even by your definition of what the brain is, and given how complex these numbers make it, what I say is correct. Even if it were theoretically possible, it will never be practically possible to model every possible brain state, given that their number is so vast.

As for the scenario you've given me to solve, it may or may not be practicable -- I can't determine the future. However, by what I've said in the preceding paragraph, it's certainly possible. As long as the human mind is finite and material, other finite material constructions with similar, or enhanced even, capabilities may be developed. This is one aspect of the field of artificial intelligence. If we develop a machine smarter than ourselves, we will have lost the capability to compel it to behave in a precisely formulated, particular fashion -- so your scenario falls apart. Of course this all sounds outrageous, but your sci-fi sounding proposition requires a sci-fi sounding answer. It is all theoretically possible from what we know about mind and machine though.

You're going to have to come up with something more substantive than your say-so. What might change about the nature of the machine or the nature of the man that would allow a machine to produce a prediction that a human subject could not but choose to follow?

The Turing test has already had smaller victories. Think about video games: You can play a first person shooter against a computer controlled character for a substantial period of time, thinking you are playing a networked game, and be tricked into thinking that you are playing against a human. I've had moments playing online where I was like, "is that even... a real player? Is that a bot? Who is this guy?" and have it turn out to be a computer controlled player. This computer controlled player is running around a simulated 3d landscape, shooting, collecting items, evading bullets, possibly working with a team of other so called "bots" -- all in a manner unpredictable to the person interacting with it. This is a rudimentary example, but artificial intelligence is not sci-fi.

Your definition of what AI is comes straight from a B-grade movie, and you apply it to the computer-technology use of the term. A man being able to program a computer well enough to fool you is not evidence that the computer is smarter than you (or capable of it). It is only evidence that the programmer is smarter than you.
 

AlfredTuring

New member
Of course the programmer understands the behaviour. Not being able to beat a computer at chess is no more evidence for a computer having a mind than a car being able to travel faster than a man is evidence that a car has a mind.

You've made a mistake here. I asked you to be more specific, and you were. Now I'm answering specific questions; don't expect each answer to a sub-question to also serve as an answer to the initial, broader question.


Let me give a more explicit example of not understanding what you create. Software is made of smaller components. Often you will code a component and then forget the implementation, only remembering the interface to that code. Then you can write additional code which interacts with that interface exclusively. This may continue indefinitely. Next, throw in components written by other people. You may never see the code they wrote, just the interface to it. Go even further than this: consider the nature of large open-source projects. This can be expanded indefinitely as well. The more interrelations between components written by different people, or between components with known interfaces and forgotten implementations, the more possibility for emergent behavior unpredictable by any individual involved. That's even pretty likely. But: do you think any one person "understands" an operating system involving a million lines of code written by > 100 different people? (Software like this exists.)
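Here's a deliberately tiny illustration of what I mean by working against an interface (my own sketch, nothing more): the calling code sees only the declaration; the body behind it could be mine, yours, or generated by something else entirely, and could change without the caller ever knowing.

******************
/* Tiny illustration of interface vs. implementation: the caller is
   written against the declaration only; the body behind it could be
   anyone's, and could change without the caller ever seeing it. */
#include <stdio.h>

/* the "interface" -- this is all the calling code ever needs to see */
int classify(double signal);

/* calling code, written with no knowledge of the implementation */
int main(void)
{
    double inputs[] = { 0.2, 1.7, -3.0, 0.9 };
    for (int i = 0; i < 4; i++)
        printf("input %+.1f -> class %d\n", inputs[i], classify(inputs[i]));
    return 0;
}

/* the implementation -- in a real project this would live in another
   file, possibly written by someone the caller has never met */
int classify(double signal)
{
    return signal > 1.0 ? 1 : 0;
}
******************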

Now, you could of course check the state of every slot of memory in the machine, or step through the program's execution one instruction at a time, but this will still not afford any individual with a comprehension of what the software is really doing.

This is the same issue with the human mind. We can check the states of individual portions, but it's so complex and there are so many of them that we are unable to piece together a coherent theory of its overall operation.

Think about my scenario with the 3d video game + AIs. Now a hypothetical: you can't see the code, but you can look at every one and zero in the machine's memory, and you can see the colorful output of the software on a monitor. Trying to piece together 'how' the software works (to the point where you could comment on particular algorithms being used for efficient rendering of the diminishing landscape, or something) by just looking at the flux of variable states in the computer's memory is essentially what neuroscientists using EEGs and MRIs are doing with the brain.

This should at once elucidate both the difficulty of determining the particulars of the brain's operation and the essential similarity between the underlying mechanisms of the two.

Do I need to point out the similarity between transistors and neurons? Or is it enough that we already have neuron-to-transistor interfaces (the article I linked to about the brain implant/mentally interactive prostheses)? Next, consider that the brain is a mass of tens of billions of these neurons, with trillions of connections between them (see the V.S. Ramachandran quote for the scale), capable of representing more unique states than there are fundamental particles in the known universe. What, exactly, do you think this system of neurons is doing?

Do you see the similarity between mind/computer yet? (Don't forget not to accuse me of claiming "computers are conscious"!)


One last thing:

Your definition of what AI is comes straight from a B-grade movie, and you apply it to the computer-technology use of the term. A man being able to program a computer well enough to fool you is not evidence that the computer is smarter than you (or capable of it). It is only evidence that the programmer is smarter than you.


Look up the Turing test and you will see that I was talking about something different from what you think here. I'm not saying AI in video games is smarter than the player; I'm saying it fools a player into thinking it is another player. The issue of AI more intelligent than humans is far-off speculation (at least as to whether we'll ever pull it off -- that it's physically possible can't really be questioned at this point).
 