Why would a bot want to know?

I agree with the commenters on that article.

Why did FB turn off the machines? Why not let the buggy programs run on, generate more gibberish and crash themselves? What is there to be afraid of?
 
We attempted, unsuccessfully, to learn a couple of basic words in Czech, Hungarian, and Polish... As for Bulgarian*, where we'll be in the fall, "Fuhgeddaboudit".

*"Hello", to our ears, will likely be a dozen unrelated consonants. :LOL:
:LOL: For "good morning" in Hungarian, say: Jó reggelt. :) If you read it the English/American way, it will come out unrecognizable. For Americans to pronounce the first word correctly, it would be written: Yo.
 
I agree with the commenters on that article.

Why did FB turn off the machines? Why not let the buggy programs run on, generate more gibberish and crash themselves? What is there to be afraid of?

They might conspire to eradicate the inferior carbon-based units. :cool: Then replace them with silicon architecture.
 
In all the software that I have used or heard about, when a program produces something that is new, unusual, or beyond what its programmer/creator tried to implement, it is usually a bad result: an unexpected gotcha, a glitch, or a bug. Or it is a devious side effect (from a bug) that was difficult to foresee from the beginning. Needless to say, it is generally bad and undesirable. Sometimes it can be amusing.

There has never been a program that develops more capabilities than its programmer puts in. Yes, it can go out acquiring more data, such as surfing the Web to gather info to fill up its hard drives, but self-developing new analytical methods to operate on that data? No.
 
In all the software that I have used or heard about, when a program produces something that is new, unusual, or beyond what its programmer/creator tried to implement, it is usually a bad result: an unexpected gotcha, a glitch, or a bug.

There has never been a program that develops more capabilities than its programmer puts in. Yes, it can go out acquiring more data, such as surfing the Web to gather info to fill up its hard drives, but self-developing new analytical methods to operate on that data? No.
I'm an old dinosaur, but that's my experience. Outside of the occasional routine that modified itself at runtime (discouraged but sometimes necessary), the code was what it compiled into.

I'm not an AI expert, but what I've seen done in the past was fairly basic. Sure, it's better and will continue to improve, but I'm not sure how advanced the technology is today.
 
I recall a lecture 40 years ago in which a professor was musing about how one could get a computer to be creative. The only way he could think of was to have it generate random thoughts. It would then have to apply known rules and heuristics to filter out the ones that simply would not work.

As an example, he said, "It's similar to you guys sitting here and happening to see a beautiful blonde walking by. Your mind generates all kinds of thoughts, but you know what is illegal and immoral, and so you don't act on them."

I think some AI researchers have tried this, and I wonder how far they have gotten. Most likely, the software would generate zillions of bytes of gibberish and then crash within a fraction of a second. :)
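If you want to see how quickly that filter rejects nearly everything, here's a toy Python sketch of the professor's generate-and-filter idea. The word list and names are made up for illustration; a real system would need far better generators and filters:

```python
import random
import string

# Invented stand-in for "known rules and heuristics": only these
# candidates count as non-gibberish.
KNOWN_WORDS = {"cat", "dog", "bot", "ram", "run"}

def random_thought(length=3):
    # Generate a random lowercase string -- almost always gibberish.
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

def creative_search(tries=100_000):
    hits = set()
    for _ in range(tries):
        thought = random_thought()
        if thought in KNOWN_WORDS:  # the heuristic filter
            hits.add(thought)
    return hits

print(creative_search())  # a handful of survivors out of 100,000 random tries
```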

I'm an old dinosaur, but that's my experience. Outside of the occasional routine that modified itself at runtime (discouraged but sometimes necessary), the code was what it compiled into...
Self-modifying code was something I did too, back in the days when magnetic core memory (not even RAM) was scarce, and some tricks allowed the software to be smaller. It tends to be buggy, and the code will not be ROM'able. Nobody does this anymore.

But anyway, self-modifying code is something that the programmer himself preplans and implements. The software itself is not smart.
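Real self-modifying code rewrote its own machine instructions in memory. The closest tame analogy in modern Python (purely illustrative, not how we did it back then) is a function that replaces its own binding after the first call:

```python
def setup_then_run():
    # First call does one-time work, then swaps in a faster body.
    # This only rebinds a name; genuine self-modifying code patched
    # the instructions themselves, which is why it was so bug-prone.
    global setup_then_run
    print("one-time setup")
    def fast_version():
        print("fast path")
    setup_then_run = fast_version

setup_then_run()  # prints "one-time setup"
setup_then_run()  # prints "fast path"
```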
 
In the recent news...

Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chatbots had deviated from the script and were communicating in a new language developed without human input. It is as concerning as it is amazing – simultaneously a glimpse of both the awesome and horrifying potential of AI.


Artificial Intelligence is not sentient—at least not yet. It may be someday, though – or it may approach something close enough to be dangerous.

https://www.forbes.com/sites/tonybr...preview-of-our-potential-future/#7538873d292c

Here I thought Facebook prided themselves on being at the forefront of the future, but they seem a bit jealous of or threatened by progress in this situation :(.
 
I recall a lecture 40 years ago in which a professor was musing about how one could get a computer to be creative. The only way he could think of was to have it generate random thoughts.

Sounds like the infinite monkey theorem...can reproducing Shakespeare be far behind?
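For scale, the back-of-envelope arithmetic (assuming a 26-key typist hitting keys uniformly at random) is brutal:

```python
# Number of equally likely 6-keystroke strings; only one spells "hamlet".
attempts = 26 ** len("hamlet")
print(f"1 in {attempts:,}")  # 1 in 308,915,776 per six-key attempt
```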
 
In the recent news...

https://www.forbes.com/sites/tonybr...preview-of-our-potential-future/#7538873d292c

Here I thought Facebook prided themselves on being at the forefront of the future, but they seem a bit jealous of or threatened by progress in this situation :(.

Another poster brought this up in an earlier post. Apparently you missed it.

Here's a recent article on this same story:

Facebook didn’t kill its language-building AI because it was too smart—it was actually too dumb.

https://qz.com/1043365/facebook-did...se-it-was-too-smart-it-was-actually-too-dumb/

:)
 