Paragraphica (context-to-image camera) by Bjørn Karmann


Yay! Cameras! 🙈🙉🙊┌( ಠ_ಠ)┘ [◉"]
Pretty wild concept.

Paragraphica is a context-to-image camera that uses location data and artificial intelligence to visualize a "photo" of a specific place and moment. The camera exists both as a physical prototype and a virtual camera that you can try.


"It probably looks like this where you are"?

Is the squashed spider for decoration, or does it do something, I wonder? Is the camera doing the processing, or is that back-end cloud computing on a server somewhere?

A standalone device would be cool, though I guess that standalone thinking shows my inability to shake off old-school "independence" ideals.
I originally thought it might be a GPS antenna, but then I realized my iPhone has GPS without an external antenna. So maybe it is just serendipity. Or maybe he is planning to sell the cameras out of the back of comic books, and that design feature resonates with that market segment.
I am wondering how he is accomplishing this.
So I looked up "scintigraphic" because, well, I didn't have any idea what the word meant. It turns out it is the adjective form of "scintigraphy", which is defined in the Oxford Dictionary as follows:

Scintigraphy (noun) - The radiological technique or procedure of administering a radioactive substance (typically one that is tissue- or organ-specific) to a patient and recording its distribution in the body by means of a stationary or moving gamma camera; an instance of using this technique.

Make of it what you will.
Yes, I had to look it up also; hence my confusion about his usage. I was thinking he is using it metaphorically rather than literally, but I dunno…
Just maybe... press the "copy prompt" button so it copies the prompt to the clipboard, go to the Stable Diffusion API link raydm6 linked, paste it there, edit as you see fit (e.g., add a town name and something or other "nearby"), press generate, and be AMAZED at the probably-looks-like-this-kinda picture you get.

This might well be all it's really doing if and when it does work, or not far off.

I'm kinda warming to this probabilistic photography. Self portrait? You probably look like this. What will I get for Christmas? Probably looks like this. Where will I go today? Probably...
For AI generated imagery, I’ve been having better results with the main website.

Not sure what the difference is between it and the Stable Diffusion API. I am assuming he is using the API (application programming interface) to let the camera and the software communicate, i.e., the two systems passing/sharing data.
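For what it's worth, "systems passing/sharing data" through an API usually just means sending a structured payload over HTTP. Here is a minimal sketch of the kind of JSON the camera-side software might send to an image-generation endpoint; the field names and values are made up for illustration, not taken from Karmann's project or from any particular service:

```python
# Hypothetical JSON payload for an image-generation API call.
# Field names and values are illustrative guesses, not a real API's schema.
import json

payload = {
    "prompt": "A photo of a rainy evening street in a small Danish town",
    "width": 512,       # output image dimensions in pixels
    "height": 512,
    "steps": 30,        # diffusion sampling steps
    "guidance": 7.5,    # how strongly the image follows the prompt
}

# Serialize to JSON, as it would travel in an HTTP request body
body = json.dumps(payload)
print(body)
```

The camera would fill in the `prompt` field from its sensors and context data, and the server would return the generated image.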

Yeah, it's a cool idea that there's this physical camera with the spider-sense/3D-radiation-symbol thing instead of a lens... but it seems likely it's basically a web page that pulls together some text by reading your location and getting the weather/time of day, and then sends that to the API.
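If that guess is right, the whole "camera" logic could be sketched in a few lines: gather context, fill in a prompt template, hand the result to the image model. Here's a rough sketch of that guess; the function name, template wording, and time-of-day buckets are my own invention, not anything from the actual project:

```python
# Hypothetical Paragraphica-style prompt builder: turn contextual data
# (place, weather, time) into a text prompt for an image model.
from datetime import datetime

def build_prompt(place: str, weather: str, when: datetime) -> str:
    """Assemble a descriptive image prompt from location, weather, and time."""
    hour = when.hour
    if 5 <= hour < 12:
        time_of_day = "morning"
    elif 12 <= hour < 18:
        time_of_day = "afternoon"
    elif 18 <= hour < 22:
        time_of_day = "evening"
    else:
        time_of_day = "night"
    return (f"A photo taken at {place} on a {weather} {time_of_day}, "
            f"street-level view, realistic lighting")

print(build_prompt("Aarhus, Denmark", "rainy", datetime(2023, 6, 1, 21, 30)))
```

The real device presumably pulls `place` from GPS and `weather` from a weather API, but the basic shape, context in, prompt out, would be the same.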

That's a cool page of stuff he's put together though 😆
So these are the photos it gives me:


Definitely close to what it should look like, perhaps on a great spring day when there aren’t buckets of rain coming down.

It seems to me these could be generated by going through several dozen Google Images results for the area and synthesizing them into these photos.