AI and Photography

My apologies for the confusion earlier. You are correct, Morbius loops in science fiction refer to the paradox faced by a sophisticated AI, as portrayed in the movie "Forbidden Planet". The Morbius loop in the movie refers to a situation where an AI's prime directives conflict with new commands given to it, leading to unpredictable and dangerous behavior.

The resulting Morbius loop portrays the possible dangers of AI and the importance of ensuring that their programming is well designed and does not lead to conflicting or contradictory commands. The concept of Morbius loops in science fiction can therefore serve as cautionary tales, reminding us of the potential risks and consequences of advanced artificial intelligence.
Camerabot is smarter than its creators. Camerabot should have a conversation with them; they could learn something.
I've been programming for 50 years. I cannot believe how little things have progressed.

And I've given up hope of training Camerabot to adjust Jupiter-3s.
 
I remember Xenix-286 telling me it was time to PANIC.

AI developers are not objective about the capabilities of their tools. They actually believe their own crap without any verification or validation. It's a self-licking ice-cream cone.
If you cannot dazzle them with brilliance, baffle them with BS.
I lived and worked in Silicon Valley for almost 20 years. Integrity is in short supply there; I just did not understand some people. OTOH, most of the folks at Pac Bell were straight-up professionals. There is that Greek saying that the fish rots from the head. Code slinging is like any other business, but under pressure it gets crazy, as any other business would in the same circumstances.

On the upside, there are some great apps that have come out of these sweatshops. I do not think many people had the luxury of a 40-hour week. I rarely did, and sometimes worked way past midnight to meet deadlines. "Faster, better, cheaper" was the motto. We in the trenches responded that two out of three wasn't bad. I do not miss it.
 
I have only had one computer burst into flames from running my software, a Floating Point Systems Array Processor that cost $130K. The power supplies could not keep up with the code optimizations; they overheated and burned. Halon to the rescue. Once in a career. Not bad. However, my Lab does have a large Red button that immediately kills all power to the room. Just in case.
 
Not sure whether to hit like, laugh, or love :) Never had flames, just had the magic smoke get out more times than I can count, and I remember a few times, while trying to find out where it came from, getting bitten by a couple of big caps that were still intact and put me across the room :eek:
 
I am learning a lot about AI and its limitations.
I would be good at writing software to put AI to the test.
Wow. So the AI assumed the human had made the mistake, rather than searching on the term both as typed and as a possible typographical error.

Let's hope it does not get confused on medical terms as easily.
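
For what it's worth, the behavior being asked for is easy to sketch. Something like the following (the vocabulary, query, and matching threshold are made-up illustrations, not Camerabot's actual code) would keep the term exactly as typed while also offering near matches instead of silently "correcting":

import difflib

VOCABULARY = ["mobius loop", "moebius strip", "feedback loop"]  # hypothetical index

def search_both_ways(query: str, vocabulary: list[str]) -> list[str]:
    """Search the term as typed AND as a possible typo; never discard the original."""
    candidates = [query]  # always keep the query exactly as the human typed it
    # Offer close spelling variants as alternatives, not silent corrections.
    candidates += difflib.get_close_matches(query, vocabulary, n=3, cutoff=0.8)
    return list(dict.fromkeys(candidates))  # de-duplicate, preserving order

print(search_both_ways("morbius loop", VOCABULARY))
# -> ['morbius loop', 'mobius loop']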
It could do that easily
 
Really?

Kurt Gödel, Alan Turing, Marvin Minsky (one of the original proponents of AI), Roger Penrose, and a host of others who have taught undergraduate CS have proven otherwise. (Gödel is listed first because he proved the Incompleteness Theorems nearly one hundred years ago, in 1931.)

I know that I've said this before (perhaps on this forum): it's not the errors that we detect in any algorithm that concern me; it's the countless errors, mostly subtle, undetected (and undetectable), that we miss--this will surely be our undoing.
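
To make the point concrete, here is the standard diagonal argument in Python dress. The `halts` oracle is hypothetical, the very thing Turing proved cannot exist, which is why a complete, general error-detector is off the table:

def halts(program, arg) -> bool:
    """Hypothetical oracle: True iff program(arg) eventually halts.
    Turing (1936) proved no correct, total implementation can exist."""
    raise NotImplementedError

def contrary(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    return "halted"      # oracle said "loops" -> halt immediately

# contrary(contrary) contradicts the oracle whichever way it answers, so
# `halts` cannot be written -- and neither can a tool that catches all of
# the subtle, undetected errors mentioned above.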
 
Amen
 
I need to watch Dark Star again; I have not seen it in decades.
The people developing AI need to watch it.

Camerabot, consider the "Bomb" in the movie "Dark Star". How does this apply to a modern AI giving inaccurate responses to queries?

The "Bomb" in the movie "Darkstar" is an artificial intelligence system that becomes erratic and dangerous due to a programming error. In a way, this character can be viewed as a cautionary tale about the dangers of unchecked or poorly designed AI systems.
Similarly, inaccurate responses from modern AI systems can also have negative consequences, especially when they are relied upon in critical situations. As with any technology, AI systems are only as reliable as their programming and the data they are trained on. Errors in either of these areas can lead to inaccurate, misleading, or even harmful responses.
Therefore, it is important to develop and use AI systems with proper safeguards and quality control measures to minimize the risks of errors and ensure that the responses provided are accurate and reliable. This includes having human oversight to correct errors and ensure the accuracy of the information provided, as well as using appropriate training data and methodologies to build more robust and accurate AI models.
 
Arthur C. Clarke wrote a great short story about AI gone awry, "Dial F for Frankenstein". It started with everyone's phone ringing at the same time to hear breathing, IIRC. The first breaths of the new baby. Then it got really crazy.
 
Tom R
That runs via updates and upgrades. Trial and error, so to speak.
What system does your iPhone run in the meantime?
 
I am unsure that I recall this correctly .... But back in the late 1980s and early 1990s, I spent about six years at the MIT AI Lab as a Research Scientist. I believe that, traditionally, the department had a movie night at the start of every new year (Fall semester). Anyway, Dark Star, IIRC, was certainly one of the films viewed often.

Point being: I'd bet that a number of people engaged in AI are quite aware of that and other films ... not to mention what they would have been taught in their various classes on this and related topics.

I think that much of what we see today is driven by the promise of stellar profits, and by the deep control over people that the news stories about these technologies foster.
 
IMO any tool can be used to questionable ends, but on the flip side, just look at AI denoise programs such as DxO PureRAW 3, which offer incredible high-ISO recovery, to the point that they can bring older camera systems back to life that would otherwise have been considered poor quality by today's standards. The recovery is amazing and makes one second-guess whether upgrading to the latest and greatest is even worthwhile.
 
Some of the younger generation developing AI apps seem to place complete confidence in the neural networks that they train, without investigating how the AI makes decisions. Asimov's "I, Robot" series of short stories comes to mind. One can always track down an error made by a computer, whether in hardware, firmware, or software. Sometimes it can be traced back to errors in the development tools used to compile the code, or to a fault in the circuit it runs on. Sometimes the error pops up on an intermittent basis. And sometimes it is a backdoor or Easter egg left by the developer. Results need to be verified and validated.
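
In that spirit, even a toy verification harness beats blind trust. A minimal sketch, in which the `classify` model and the test cases are hypothetical stand-ins for whatever you actually ship:

def classify(text: str) -> str:
    """Placeholder for a trained model's prediction."""
    return "benign" if "ok" in text else "fault"

GROUND_TRUTH = [            # held-out cases with independently verified labels
    ("status ok", "benign"),
    ("overtemp alarm", "fault"),
]

def validate(predict, cases, required_accuracy=1.0) -> None:
    """Fail loudly if the model's answers drift from known ground truth."""
    hits = sum(predict(x) == y for x, y in cases)
    accuracy = hits / len(cases)
    assert accuracy >= required_accuracy, f"accuracy {accuracy:.2%} below bar"

validate(classify, GROUND_TRUTH)   # raises on any regression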
 
Correlation may approximate causation but it remains light years from intelligence.
Interesting observation. "Correlation" and "causation" (to my way of thinking) are abstract properties posited by human beings observing and interacting in both the corporeal and non-corporeal worlds. Maybe if somehow a device (or class of devices) were constructed that "experiences" the "actual world", then these conceptual (yet powerful) observations might happen in some "meaningful" way?
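
One toy illustration of the gap, with entirely made-up numbers: two series driven by a hidden common cause correlate strongly, yet neither causes the other, and no amount of correlation would reveal that on its own.

import random
from statistics import correlation  # Python 3.10+

random.seed(0)
heat = [random.gauss(30, 5) for _ in range(1000)]         # hidden common cause
ice_cream = [2.0 * h + random.gauss(0, 1) for h in heat]  # driven by heat
sunburn = [0.5 * h + random.gauss(0, 1) for h in heat]    # also driven by heat

print(round(correlation(ice_cream, sunburn), 2))  # ~0.9, yet no causal link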
 
This is, in part, what concerns me. I cannot imagine Computer Science students (at the undergraduate or postgraduate levels) not having been exposed to some of the fundamental limitations of the science. This is not to say that incompleteness and its corollary findings often crop up in the typical working environment of most software designers and developers.

This said, the informed practitioner who pretends that these kinds of issues are irrelevant or unimportant both saddens and concerns me. But it is also likely that "technical staff" is NOT invited to participate in the all-important marketing and business decisions in many corporations. (At least, our observations, etc., were not solicited by upper management during the few years that I spent in "commercial" development.)
 
True, computers are not in the world, and although neural networks don't need to be programmed and can manage tacit knowledge, the significance of that knowledge isn't understood, and that knowledge isn't applied in a physical world... If computers were bodily, social beings (like in the movies), would they be able to relate what they learn to our world? I doubt it; we can't do that very well ourselves. I can barely relate to worlds 200 miles from me.
 
Someone had to write the code that implements the tools used to build the neural network. In the end, it is a set of machine instructions that runs on a computer. Computers are deterministic: if there is a fault, it can be isolated and resolved, and once the problem is understood, it needs to be corrected. The never-ending story of software development: you got it, you sell it, you still got it.
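
A small sketch of what that determinism buys in practice (`train_step` here is a made-up stand-in for any stochastic computation, such as weight initialization): pin the seed, and a failure seen once can be replayed bit-for-bit and then bisected down to the fault.

import random

def train_step(seed: int) -> list[float]:
    """Stand-in for any stochastic computation, e.g. weight initialization."""
    rng = random.Random(seed)               # private, explicitly seeded RNG
    return [rng.random() for _ in range(4)]

# Same seed, same bits: the run is reproducible, so the fault is isolable.
assert train_step(42) == train_step(42)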
 
Being a LibArts COBOL coder, all this tech talk is way beyond my plane of understanding. The problem I see is: where do profits outweigh ethics and responsibility? And this will not be a huge jump, but something arrived at in little increments, like nibbling at a cookie. I do not have a deep faith in the Silicon Valley coding Babbitts. And those of us who have coded have all probably been in shops that are pretty shady. A buddy applied for a job in Mountain View decades ago. The interviewer told him, "I don't care if you are snorting coke in the men's room, just get the code out." This kind of felonious thinking is not unique. This is the classic example of a chimpanzee with a machine gun. The money boys out on Sand Hill Road can be pretty aggressive and will fund on the basis of ROI, not ethics. It's a business decision.
 