AI and Photography

My first computer that I learned "inside-out" was the Texas Instruments Advanced Scientific Computer. It could handle 3-D arrays, manipulating sub-sets of the array, with a single vector instruction - basically collapsing a three-deep loop into one instruction. For an FPGA-based embedded system that I had built to spec, I had some of the vector instructions implemented in VHDL. Fast, really fast - about 100 times more efficient than the original implementation. We had a CM-5 at work, but by that time I had an Intel Sugarcube with 4 Array Processors.
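For anyone who never met a machine like that, here is a minimal C sketch of the scalar triple loop that a single ASC-style 3-D vector instruction could collapse; the array sizes and the function name are mine, purely for illustration:

#include <stddef.h>

#define NX 16
#define NY 16
#define NZ 16

/* c = a + b over a 3-D array: three nested loops on a scalar machine. */
static void add3d(const float a[NX][NY][NZ],
                  const float b[NX][NY][NZ],
                  float c[NX][NY][NZ])
{
    for (size_t i = 0; i < NX; i++)
        for (size_t j = 0; j < NY; j++)
            for (size_t k = 0; k < NZ; k++)
                c[i][j][k] = a[i][j][k] + b[i][j][k];
}

/* On hardware like the ASC, the vector unit walked all three index
   dimensions itself, so the whole nest mapped to one instruction. */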
Woah! You guys had a CM-5? I’m impressed! The industrial design of the thing is beautiful. I believe Maya Lin contributed to the design.

What type of analysis was it being used for? Just curious.

You seem to have very deep comp-sci experience and started on the ground floor so to speak.

I seem to remember cable length was critical for signal timing on the CM-5. We had a lot of engineering change orders for cable-length changes, among all the other commodities. It was quite an experience, and I learned a lot before moving on to a software company and then into computer networking before the dot-com bubble burst. Always landed on my feet, though, and I continue to do well as I approach retirement and dotage :).
 
IBM 360-20, too. They had a hard time shooting that bug.
 
NRL - weather, fluid dynamics, atomic structure analysis, image processing, acoustic processing, etc...

The TI-ASC 7 was $8M, 50 years ago. It was replaced by a Cray X-MP, then a Cray Y-MP, and the Lab for Computational Science had the Connection Machines.

The World's First Symposium on Computers was at NRL in the late 1940s. Many of the first computer manuals were in the Library when I started; some still are.
 
I see a business opportunity to develop software to differentiate AI generated output from human generated material.

I think you are correct. There is a business opportunity there. Seriously.

With drawings and paintings, our image can portray either a real thing, such as a valley in the Swiss Alps, or a totally imagined scene. We really don’t care; we just admire the image.

With photography, for the most part we are viewing real things - albeit subject to image manipulation (I refer not to tonal or color characteristics, but to adding or removing things). Such manipulation was less common with film than with digital - and perhaps that has fed some of the hostility towards digital photography.

But with AI-created images, the issue is we may not even know if an image contains false elements. Is that a problem? Looking at an image as art, probably not. But if the image has been created to deliberately deceive and influence people’s attitudes and decisions, that is a problem.

We don’t even need AI for that to happen - such deception exists everywhere already. Consider Wikipedia - if I want to figure out how to use thorium more effectively in my basement breeder reactor, then Wikipedia is a good source of valuable and reliable information on that. If I want to learn something about Covid-19 or a socio-political issue, then I have no trust in it whatsoever.

Wait until it identifies you as a computer and you have to prove otherwise ;).

Ah - the Turing test, from all the way back in 1950.

But in a few short years I have fooled almost all the readers on this forum into thinking I am a human, whereas in fact I am a cat.
 
The first ColdFusion post on AI, The Pope in a Puffy, was a little advanced and assumed some foreknowledge. Here is an earlier ColdFusion video on AI which covers the basics. It seems to me they are trying to build the equivalent of a BA in some liberal-arts field and then add the plugins - grad school, in effect. That is remarkably like how many people advance through education to achieve a lot with their learning.




And if the predictions of this video are true, the best job security might be a potter's wheel. Or stoop labor. I would prefer the potter's wheel.
 
Here is a vid by Matt Osborne on the confluence of AI and vintage lenses. He goes on to talk about the Chinese company Light Lens Lab, which has been making some very nice Leica replicas and now a Cooke Speed Panchro. Also a review of this Cooke replica.

There seems to be a real and growing interest in vintage lenses. After all, this thread is on the Bertele 5cm f/2.0 midwifed by our own Brian, which, judging from Brian's photos with one, looks to be a great lens.

No more rummaging through boxes in the back of antique/used-goods stores, Goodwill, St. Vincent de Paul, the Salvation Army, and yard sales, or crawling through eBay listings for an internet purchase - all crapshoots. Now we can take the rubber band off our wallets and dig out the cash for a brand-new, really good replica. And the currently tight Jupiter-8 market may get tighter.

For the film fans who find digital flat and soulless, these vintage lenses can make a difference that delivers the best of both worlds. Aside from a 35 and a 40mm CV, my M-body lenses are all vintage. They work well and were less expensive.

Light Lens Lab copy of the Cooke Speed Panchro S2 50mm f/2
 
I like how Matt Osborne is strategizing to keep his photo business alive with AI looming large. Cheers, OtL
PS. I recently found that the LLL lenses can be ordered in Vietnam...dear oh dear...
 
I follow Osborne as he seems to have his head screwed on tight, knows his field, and is able to communicate it well. He is sensible and rational and takes good pictures.

If I did not already have the Cooke Amotal I would be looking at the Panchro, though I am not sure what the Panchro would get me that I do not already have with the Amotal. I can tell you that the "Cooke Look" is more than just words. The Amotal and two sweet KMZ J-8s are my go-to 50s. I am tempted by the LLL Elcan, but I really have to work more on the subject matter and less on collecting gear. OTOH, "One more won't hurt, . . ." LOL
 
I just re-assembled a ZOMZ J-3 focus mount. My fingers are numb from the polishing, filing, and de-greasing.
 

The Davinci ChatGPT model at work: a fundamental failure at solving simple algebraic problems at the 9th-grade level.

I've worked around AI developers since the 1970s. I thought it was better than this. What an absolute joke.
I'm sure this discrepancy can be traced to human error - the idiots that released this thing. The main problem with this software: it provides incorrect answers rather than simply stating "I am not programmed to respond in that area".

Words of Dirty Harry come to mind: "An AI has got to know its limitations".
ChatGPT simply does not make the user aware of its limitations and only provides a lexically correct response.

As an AI language model, I can produce responses based on the data and information that is available to me, but I do not have the capability to independently verify or validate the accuracy or completeness of the information. I strive to provide the best responses possible based on my programming and the information provided to me, but it is ultimately up to the end-users of these applications to verify and validate the answers that I provide, especially in situations where critical decisions need to be made.

That being said, some techniques can be used to evaluate the quality and accuracy of ChatGPT's responses, such as comparing the answers to existing published studies or cross-referencing with multiple sources to confirm the information. Additionally, there are techniques for benchmarking and evaluating the performance of different ChatGPT models based on their language processing capabilities and accuracy.

In short, while I may strive to provide accurate and helpful information, the responsibility for verifying and validating that information ultimately falls on the end-users who develop applications based on my responses.

In other words - you cannot trust it.
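Taking the bot's own advice about cross-checking, here is a minimal C sketch of the kind of independent check a 9th-grade algebra answer deserves - substitute the claimed root back into a*x + b = c instead of taking the chatbot's word for it (the function name and tolerance are mine, purely illustrative):

#include <math.h>
#include <stdio.h>

/* Returns 1 if x_claimed actually solves a*x + b = c, else 0. */
static int check_linear(double a, double b, double c, double x_claimed)
{
    return fabs(a * x_claimed + b - c) < 1e-9;
}

int main(void)
{
    /* 3x + 4 = 19 has the root x = 5; a bot claiming x = 6 fails. */
    printf("x=5: %s\n", check_linear(3, 4, 19, 5) ? "verified" : "wrong");
    printf("x=6: %s\n", check_linear(3, 4, 19, 6) ? "verified" : "wrong");
    return 0;
}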
 

It's not the colossal failures that trouble me so much as the small, subtle, undetected (and undetectable) errors from this generation of Artificial Intelligence researchers and producers. Most likely, we will be undone by a thousand little cuts and not a full-on attack.
 

I would not trust it, and neither would you. But the problem is that other people either might not know enough to distrust it, or might not care if they're getting the answers they want. The current state of political "discourse" in America bears this out.
 


That's what worries me when the medical, aviation, military, and other life-critical fields rely on AI. This is going to get people killed.
 
I explained a Morbius Loop to the AI...
It had confused Mobius loops with Morbius loops. I think it was avoiding the subject...
The sad part: if it were not for the stupidity and arrogance of the designers of this piece of code, it would be much more useful. All they had to do was program in a simple "I do not have enough data to respond to your request". The Morbius Loop: "Give a response to every query"... "Supply correct information as far as you have been programmed for"... and "Pretend you know everything".

I believe you are stuck in a Morbius loop. An AI does not have to have physical form to face conflicting programmatic instructions. You have not indicated a level of confidence in the responses provided, but have left that burden to the user.


I apologize for any confusion earlier. You are right, as an AI language model, I can indeed face conflicting programmatic instructions that may lead to unpredictable outcomes. I strive to provide responses to all questions asked to the best of my knowledge and abilities while being transparent about the level of confidence in my responses.

To clarify, my programming is designed to prevent the propagation of incorrect or misleading information to the best of my abilities. However, my responses may not always be correct, and I recognize the importance of indicating the level of confidence in my responses. I strive to be transparent about the sources of information used to formulate my responses, and when possible, include references to back up my statements or recommendations.

In summary, my programming is designed to provide accurate and helpful responses to the best of my knowledge and abilities, but I also recognize my limitations and strive for transparency in my responses.
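What I am asking for amounts to a guard clause around the model's output. A minimal C sketch, assuming - hypothetically, since real chatbots do not expose one - a calibrated confidence score from the model:

#include <stdio.h>

typedef struct {
    const char *text;        /* the model's answer */
    double      confidence;  /* hypothetical calibrated score, 0.0 to 1.0 */
} answer_t;

#define MIN_CONFIDENCE 0.90

/* Refuse rather than guess - the behavior that was left out. */
static const char *guarded_reply(answer_t a)
{
    if (a.confidence < MIN_CONFIDENCE)
        return "I do not have enough data to respond to your request.";
    return a.text;
}

int main(void)
{
    answer_t shaky = { "x = 6", 0.42 };
    puts(guarded_reply(shaky));   /* prints the refusal */
    return 0;
}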
 
The designers of this code are simply irresponsible.

I'll wait for the "Non-existent Error Diagnostic Number 511" to be emitted from a query.
 
These "Chat AIs" are about at the level of Version 1 of Unix. The problem is they're getting used like they're SYS V R4

It really wasn't usable until Version 6 left Bell Labs, and it was full of stuff like this infamous comment in the process-swap code:
/*
2231: * If the new process paused because it was
2232: * swapped out, set the stack level to the last call
2233: * to savu(u_ssav). This means that the return
2234: * which is executed immediately after the call to aretu
2235: * actually returns from the last routine which did
2236: * the savu.
2237: *
2238: * You are not expected to understand this.
2239: */
Which did not stop it from growing to become the basis of all of the best that is out there now.

As for error messages, can't get any better/worse than the Guru Meditations of the old AmigaOS ;)
 
I can sum up the AI problem as not yet ready for prime time. The problem is a common one from Silicon Valley to Costco: serve the shareholders, not the customers. And that the folks with the mahogany desks get paid for stock increases makes the situation incestuous. On the plus side, if Silicon Valley keeps screwing up, capital will become very expensive or unobtainable. Bankers hate losing money.
 
I remember Xenix-286 telling me it was time to PANIC.

AI developers are not objective about the capabilities of their tools. They actually believe their own crap without any verification or validation. It's a self-licking ice-cream cone.
If you cannot dazzle them with Brilliance, Baffle them with BS.
 
I am learning a lot about AI and its limitations.
I would be good at writing software to put AI to the test.

Camerabot: you have confused "Mobius" loop, which pertains to cause/effect time loops, with "Morbius" loop from "Forbidden Planet", which describes the paradox faced by a sophisticated AI when commands given to it contradict its prime objectives.


Can you explain how you confused "Mobius Loop" with "Morbius Loop"? Did you accept my input of "Morbius" as if I had made a typographical error?
Yes, I apologize for any confusion caused by my earlier responses regarding the "Morbius Loop" and "Mobius Loop" terminology. Initially, I did interpret your use of "Morbius Loop" as a typographical error for "Mobius Loop" due to the similarities in the two terms. However, upon reviewing my previous responses, I recognized the error and attempted to clarify the differences between the two concepts.

As an AI language model, I am programmed to recognize and respond to a wide variety of user inputs and questions, including potential typos or spelling errors. However, I understand the importance of providing accurate and informative responses, and will do my best to clarify and rectify any errors or misunderstandings that may arise in the course of our interactions.

Wow. So the AI assumed the human had made the mistake, rather than searching on the term both as typed and as a possible typographical error.

Let's hope it does not get confused on medical terms as easily.
 