How the Italian Renaissance can save the smartphone camera
Feb 20, 2023

Something very special was happening in 16th-century Italy: the Renaissance. The Renaissance was a difficult shift from the Dark Ages to a more modern era of humanity. The Black Plague caused great migrations across Europe, technologies like the printing press spread ideas faster than almost anything before them, and the rediscovery of ancient Roman texts led to a refocus on philosophies of the humanities based on logic and reason. But probably the most recognizable aspect of the Renaissance is its art.

It is impossible to think about the era without first considering some of the paintings and sculptures created during it. Da Vinci, Michelangelo, Caravaggio: the list of artists who fundamentally changed the way art was made during that period goes on for pages and pages. And much of that rapid rise in the quality of art is rooted in one fundamental concept: light.

Before the Renaissance, much of the art being produced was flat, built from line drawings with minimal depth. That's not to say there wasn't light in these paintings; it just clearly wasn't a key focus. Many of them used a kind of global light that gave the whole canvas an even exposure. It wasn't until the Renaissance that artists began to understand how light could actually shape a 2D image.

The roughly twenty paintings da Vinci left us helped define the concept of sfumato. Sfumato removed the harsh lines that defined a figure on a canvas; instead, da Vinci used smooth transitions between colors to shape a 2D image. That soft gradation better describes how skin actually looks and curves: skin is smooth and plump, and its hue moves gradually from color to color in an almost invisible gradient based entirely on the source and direction of the light falling on it. Similarly, Caravaggio used revolutionary, precise representations of light and shadow to shape a 2D image. Perhaps his most famous painting, The Calling of Saint Matthew, directs your gaze around the canvas using the light from a single window. The light tells a story in itself and points you toward the most important characters in the painting, but most importantly it shapes the scene as it bounces around the room, fading from light into total darkness. It's this accurate depiction of light and shadow, this chiaroscuro, that Caravaggio really pioneered: a sense of realism that is both artistic and intentional. Caravaggio is painting the light as much as he is painting the rest of the canvas.

The importance of controlling the properties of light has remained central to the art world since the 15th century. It helped move animation from 2D into 3D space, and it helped create the art of photography, not only through the devices that capture light but through the art of what to do with that projected light. Early photography was almost entirely based on this Renaissance idea of chiaroscuro. In black-and-white photography especially, the artist's control lives almost entirely in the tonal range. That variation between light and dark is what adds dimension to the image, and whether in the darkroom or in a modern editing program, the artist's control comes down to the intensity of the highlights and the depth of the shadows.

In the early 20th century, a more modern version of Caravaggio, Ansel Adams, knew this better than almost anyone. I specifically remember his landscapes hanging in my house when I was a child. You could look at them for hours and notice the little tonal shifts between light and dark, all that beauty carried in the shade. They were monochromatic, but so rich and colorful in depth.
Now, Adams did not paint every single detail of a scene like the Renaissance painters did, but his understanding of the importance of tone added so much richness to his work. And although his camera gave him control over the depth of field of an image, he mostly shot at an aperture of f/64. The f-number is the ratio between the focal length of the lens and the diameter of the opening that actually lets the light in, and at f/64 nearly everything in the frame is in focus, which made his images almost totally flat in terms of depth of field. Part of Adams's reason for doing this was to force your eye to travel through every part of the image: the beauty in the flowing river, the flowers blooming in the meadow, the towering mountains looming over the rest of the frame. For Adams, all of nature was important. But without depth of field to lean on, tonal shifts between highlights and shadows were all he had, so he practiced chiaroscuro as much as the Renaissance painters had hundreds of years before him.

And although most of Adams's work was shot almost a century ago, much of it still looks modern. Hell, it looks better than modern; it looks better than what most photographers could shoot today. The reason for that, beyond his fundamental understanding of the characteristics of light, is that there was a long stretch after Adams's work where cameras got fundamentally worse.
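As a quick aside, here is roughly what shooting at f/64 means in practice. This is a back-of-the-envelope sketch, not Adams's actual workflow; the 300mm focal length and the circle-of-confusion value are illustrative assumptions for a large-format camera.

```python
# Back-of-the-envelope: why f/64 puts nearly everything in focus.
# The numbers below are illustrative assumptions, not historical data.

def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """The f-number is the focal length divided by the physical aperture diameter."""
    return focal_length_mm / f_number

def hyperfocal_distance_m(focal_length_mm: float, f_number: float,
                          circle_of_confusion_mm: float) -> float:
    """Standard hyperfocal formula H = f^2 / (N * c) + f.
    Focus at H and everything from H/2 to infinity is acceptably sharp."""
    h_mm = focal_length_mm ** 2 / (f_number * circle_of_confusion_mm) + focal_length_mm
    return h_mm / 1000.0

focal = 300.0  # assumed normal-ish lens for an 8x10 camera
coc = 0.2      # assumed circle of confusion for a big contact print, in mm

for n in (5.6, 64):
    print(f"f/{n}: aperture ~{aperture_diameter_mm(focal, n):.1f} mm, "
          f"hyperfocal ~{hyperfocal_distance_m(focal, n, coc):.1f} m")

# At f/5.6 you need to place focus ~80 m out to hold the horizon; at f/64 the
# hyperfocal distance drops to ~7 m, so a single focus setting keeps the
# meadow, the river, and the mountains all sharp: "flat" depth of field.
```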
Adams was working with large-format photography: huge wooden boxes that caught the light and projected it onto enormous canvases of film, four by five or eight by ten inches. And while these cameras seem pretty old-fashioned by today's standards, the quality the format can capture is amazing. That huge canvas of film the lens projects onto inside the box allows for the smallest changes in tone and color. Large format has this incredibly granular sfumato, or in Ansel Adams's case, a much more granular chiaroscuro. That granular tonality is what creates realism and depth in an image.

But one of the fundamental cornerstones of technology is miniaturization. We build something big, then figure out how to make it smaller, to do the same amount of work or even more in a much smaller, more portable package. We've seen it in radios, music players, data storage, and computers: computers shrank from a mainframe to a desktop PC, to a laptop, to a smartphone, and hopefully one day to an ambient system that exists around us without us ever seeing it. So of course the same thing was bound to happen with cameras. Large format became medium format, medium format became 35mm and even half-frame, and as the canvas of film we were projecting these scenes onto shrank, so did the room for subtlety. Even if we could project the same scene onto the smaller canvas, there's just less physical space for that granular tonality from color to color, from highlight to shadow. There's less chiaroscuro, there's less sfumato. Then film gave way to digital photography, and finally to mobile phones with built-in cameras. Those sensors were so small because, just like computers and dedicated cameras, mobile phones kept getting smaller too.

It turns out humans are quite addicted to communication, and phones offered faster communication than almost any medium before them, so new ones were made every year: better every year, faster every year. Batteries got better, displays got better, chipsets got better. But unfortunately for cameras, we just can't change physics. The obsession with thinness and miniaturization stuck, and it became very difficult to put any kind of sensor bigger than your pinky fingernail into a phone, because to increase the size of the sensor you would have to increase the z-depth, the distance between the optical center of the lens and the sensor, or focal plane. Think back to those large-format cameras: they're huge, with a ton of space between the lens and the film they're projecting onto. If you hold a magnifying glass up to the sun and move it away from the concrete, the circle of the image gets bigger. The same thing would have to happen inside your phone.

Now, there were some phones that tried to buck the trend of super-slim devices. The Nokia 808 PureView is a very good example: almost half the back of that phone was an image sensor, and that allowed for a couple of things. You can make the pixels larger to let more light onto the sensor, or you can subdivide it further to get a higher-resolution photo from the same field of view. But those devices never really caught on, and they kept getting outpaced by phones that were thinner and lighter. Maybe part of the reason is that the mobile internet didn't really exist like it does today, and social media wasn't yet an integral part of our society.
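To make that trade-off concrete, here is a rough sketch of the geometry. The sensor dimensions and pixel counts below are ballpark assumptions for illustration, not exact specs, but they show why a bigger sensor lets you choose between larger photosites that gather more light and a higher pixel count at the same pitch.

```python
# Rough geometry of sensor size vs. pixel size vs. light gathering.
# All dimensions below are ballpark assumptions for illustration only.

def pixel_pitch_um(sensor_width_mm: float, horizontal_pixels: int) -> float:
    """Approximate pixel pitch, ignoring gaps between photosites."""
    return sensor_width_mm * 1000.0 / horizontal_pixels

sensors = {
    # name: (width_mm, height_mm, horizontal_pixels)
    "slim-phone sensor (about 1/2.55-inch)": (5.6, 4.2, 4000),
    "Nokia 808-class sensor (about 1/1.2-inch)": (10.8, 8.1, 7700),
    "full-frame camera": (36.0, 24.0, 6000),
}

base_area = 5.6 * 4.2
for name, (w, h, px) in sensors.items():
    area = w * h
    print(f"{name}: pitch ~{pixel_pitch_um(w, px):.1f} um, "
          f"area ~{area:.0f} mm^2 ({area / base_area:.1f}x the slim sensor's light)")

# Same field of view, bigger sensor: either keep the resolution and get larger,
# less noisy photosites, or keep the pitch and get far more pixels (the 808's
# choice). Either way the lens needs more z-depth behind it to cover the chip.
```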
Those phones were just too early. But computational speed was advancing much faster than physics, so engineers decided they had to try to fool physics with chipsets: process the photos on the computer in your hand to make them look as good as possible within the limitations they had. Most of the little sensors in your phone just can't capture a ton of light, so you have to amplify that signal with something called electronic gain; you're digitally brightening those pixels. But almost every digital photo has noise, blemishes caused by digital interference, electron hopping, a bunch of different random things, so by digitally amplifying the signal you're also digitally amplifying the noise. If you can saturate those pixels with real light and skip the gain, you'll see less noise, but these little sensors just can't gather that much light, so you really do have to add a lot of gain. Then you have to try to minimize that noise, so you apply noise reduction, which effectively works by taking an area of the photo, computing the average pixel color of that area, and applying it to all of the pixels in it. But that essentially smears away the fine texture of the image, the granular tone that gives it contour and depth; that's the reason phone photos so often look waxy and flat. And after you've applied all that noise reduction, the image starts to look soft, so you have to add sharpening, which fundamentally increases the contrast between the bright and dark parts of the image, which again flattens the chiaroscuro between those parts, and you end up with an even flatter image. That's how we got the "phone photo look," and why you can usually tell the difference between a photo taken on a phone and one taken with a good dedicated camera.
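Here is a minimal sketch of that processing chain, assuming numpy and a made-up noisy frame: gain amplifies the noise along with the signal, averaging-based noise reduction smears away the fine tonality, and sharpening re-draws edge contrast on top. It's a caricature to show the trade-offs, not any vendor's actual image pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up dim "sensor readout": a soft tonal gradient (think skin or sky)
# plus sensor noise, in linear values from 0 to 1.
h, w = 256, 256
clean = np.linspace(0.02, 0.10, w)[None, :].repeat(h, axis=0)
raw = clean + rng.normal(0.0, 0.01, size=(h, w))

def box_blur(img: np.ndarray, k: int = 5) -> np.ndarray:
    """Average each pixel with its k*k neighborhood (naive noise reduction)."""
    padded = np.pad(img, k // 2, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# 1) Electronic gain: brighten the dim capture. Signal and noise scale together.
gained = np.clip(raw * 8.0, 0.0, 1.0)

# 2) Noise reduction: local averaging kills the noise, but also the subtle
#    pixel-to-pixel gradations (the sfumato).
denoised = box_blur(gained)

# 3) Unsharp-mask sharpening: add back exaggerated contrast around edges,
#    which widens the gap between light and dark instead of restoring tone.
sharpened = np.clip(denoised + 1.5 * (denoised - box_blur(denoised)), 0.0, 1.0)

print(f"noise after gain:            {gained.std(axis=0).mean():.4f}")
print(f"noise after noise reduction: {denoised.std(axis=0).mean():.4f}")
# The gradient survives, but the fine texture that carried it is mostly gone,
# and the sharpening step only re-draws edges: the flat "phone photo look".
```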
Dedicated cameras, meanwhile, haven't really advanced that fast; the technology has mostly stayed the same. Phones, on the other hand, have the processing power that led to something called computational photography. Technically, all the processing your phone does to make that little sensor's photo look better is computational, but there are things your phone does to trick physics that a normal camera simply cannot do right now. Google's Pixel phones can take great photos of the stars with astrophotography mode and can basically see in the dark with Night Sight. They can separate the foreground from the background and pretend a big sensor is looking at the scene with portrait mode. But one of the fundamental technologies of computational photography, and perhaps one of the most important things to happen to the world of photography, is something called HDR+. HDR in photography means high dynamic range, which is effectively the process of getting as much detail as possible out of the highlights, midtones, and shadows. In a non-HDR photo from a small sensor, when you increased the gain to bring back all the shadow detail, it was very easy to blow out the highlights; you were pretty much either going to expose for the highlights or expose for the shadows, but not really both.

Around 2016, Marc Levoy, who was then leading Google's computational imaging work, came up with a solution called HDR+. A few years ago, when I was at Android Authority, I made a whole video on this, so I'll link it below if you want to see it. But here's the short version of how it works: when you have the camera app open, your phone is technically already taking a bunch of really short exposures, because it has to keep refreshing the preview on your screen anyway. So it simply stores a pile of those super-short exposures in RAM, aligns the frames, and averages the color of each pixel across the stack. Then it can increase the gain without introducing as much noise, because every individual pixel value has already been averaged. This feature cemented Google as one of the best camera phones for a long time, and only recently has there been serious competition. Mobile phones had never captured so much data or shown so much detail in light and shadow; it was life-changing.

Now, the danger here is that just because you're capturing all that information doesn't mean you should always display all of it. Remember sfumato and chiaroscuro: it's that granular tonality between highlights and shadows, from one color to the next, that adds contour and depth to an image. If everything is a halftone, you get a really flat-looking image, and that flatness is amplified by the processing your phone is already doing. Thankfully, Marc Levoy knew this all too well. He's actually on record saying he took a lot of inspiration from Renaissance artists, especially Caravaggio, when he tuned the HDR+ algorithm, and in the first few Pixels you can definitely see it: there's a lot of depth in those shadows, and they blend really well with the highlights. I've always been a fan of Caravaggio; this is his Supper at Emmaus. The shadows are dark, there's a lot of depth and a lot of contrast, and that has been the signature look of HDR+.
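To make the align-and-average idea concrete, here is a heavily simplified sketch, assuming numpy, a static scene, and frames that are already perfectly aligned; the real pipeline's alignment, merging, and tone mapping are far more sophisticated. It just shows why averaging a burst of short exposures before applying gain brightens the shot with much less noise than gaining up a single frame.

```python
import numpy as np

rng = np.random.default_rng(1)

# A made-up static scene with deep shadows on the left and bright highlights
# on the right, in linear values from 0 to 1.
scene = np.linspace(0.01, 0.9, 512)[None, :].repeat(64, axis=0)

def short_exposure(read_noise: float = 0.02) -> np.ndarray:
    """One underexposed frame: the scene scaled down, plus per-frame noise."""
    return scene * 0.25 + rng.normal(0.0, read_noise, size=scene.shape)

n_frames = 9
burst = np.stack([short_exposure() for _ in range(n_frames)])
gain = 4.0  # bring the underexposed frames back up to normal brightness

single = np.clip(burst[0] * gain, 0.0, 1.0)            # one frame, then gain
merged = np.clip(burst.mean(axis=0) * gain, 0.0, 1.0)  # average first, then gain

def shadow_noise(img: np.ndarray) -> float:
    """Pixel spread in the darkest columns, where noise shows up first."""
    return float(img[:, :32].std())

print(f"shadow noise, single gained frame: {shadow_noise(single):.4f}")
print(f"shadow noise, {n_frames}-frame merge:       {shadow_noise(merged):.4f}")
# Averaging N independent frames cuts the random noise by roughly sqrt(N),
# so the merge can afford the gain (and aggressive tone mapping) without
# the shadows dissolving into grain.
```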
But as the years went by and other manufacturers got into computational imaging, an obsession with showing as much detail as possible took hold, because it let you say: hey, we show more detail in the shadows than the competition, or we retain more detail in the highlights than the competition, or we capture more stops of dynamic range than the competition. It turned into a trend where the shadows kept getting brighter and the highlights kept getting darker, and you got a flatter picture. Here are some photos I took with Marc Levoy at the Pixel 4 launch event, and one I took at the Pixel 3 launch event. Now, I'll admit I messed up a bit here, because the Pixel 3 baseline photo was taken at the Pixel 3 event and the lighting is totally different. But if you look at the Pixel 1, 2, and 4, you can see an increasingly aggressive push toward preserving highlights and lifting shadows in this almost linear fashion. The Pixel 1 had this great balance of shadow and highlight detail. It was still a little too sharp, again an artifact of needing to compensate for that noise reduction, but there was color in our skin and there was contour in our faces. As the devices got newer, that intense focus on keeping detail in shadows and highlights started to remove the contrast from our faces, and we started to look more like zombies than humans.

And here's the problem: the algorithms that process the images on our phones are trying to get around the small-sensor issue. Some of the computational features, like portrait mode and, to a degree, even HDR, are stopgaps, algorithms created at a particular moment in time when thinness and portability were being prized above everything else. But in the last couple of years that sentiment has completely changed. We've seen so many companies increase the thickness of their smartphones, and the size of the camera bump, because the camera has pretty much become the fundamental selling feature of most smartphones; we're simply sharing a lot more than we used to. And with a much bigger camera bump, you can put a much bigger sensor in there. Those bigger sensors just capture a lot more information naturally, from gathering far more light to having much better dynamic range and natural depth of field. When you point your phone at a subject, there's a really nice transition between that subject and the background, to the point where portrait mode seems almost unnecessary. I'm not saying we should get rid of things like HDR+; those computational features are awesome and can really lend themselves to these newer, larger sensors. But we should also use the natural characteristics of the larger sensors we now have in our phones. Right now, newer phones with much larger sensors still render as if the sensors were small. Take this photo I shot with the Pixel 5 on a 1/2.55-inch sensor compared to this photo I shot on the Pixel 6 with a much larger 1/1.3-inch sensor: yes, the Pixel 6 has a more natural depth of field thanks to that larger sensor, but the look of the processing is almost exactly the same.

So let's look at a couple of examples of what's possible on these new, larger sensors. Here are a couple of JPEGs straight out of camera that I took with the Pixel 6, compared with the raw files I captured alongside them in RAW+JPEG mode. If we bring the exposure up to put color back into the photo, you'll see what's really possible with this sensor. Remember, raw is the untouched information the sensor captured; what you usually get when you press the shutter button is the photo after the phone has processed that raw data. The processed JPEG has super-sharp fine edges that compensate for the noise reduction.
Each individual hair is too sharp, the eyes start to break apart when you zoom in, and the nice natural halo of light is completely flattened by that HDR. But the edited raw shows the natural gradient of focus we get from these larger sensors. The highlights and shadows on the face, especially under the eyes and in parts of the cheeks, are sucked flat in the straight-out-of-camera image, which has been heavily HDR'd and so loses the natural contour of the face. The raw version of this photo almost brings them back to life; it looks like it was taken on a camera with a much larger sensor, and it retains many of those natural characteristics that make a large sensor great.
Here's another photo I took with the Pixel 6 in Wetzlar just as the sun was going down. It looks good at first glance, but look what's possible when we compare it to the raw. The shadow on the ground is lifted too far in the baked JPEG; it should be darker, to help show what time of day it is, and the highlights should be allowed to peak. Maybe the biggest giveaway in this photo is the foliage: every time you take a phone photo with plants in it, they come out too sharp, with too much forced detail. The raw looks so much more natural. You get real contrast between the shadows and the highlights while still keeping the detail, and, most obviously in this shot, it isn't over-sharpened, so you can feel the softness of the foliage far more naturally than in the out-of-camera shot.
The time of day reads so much better because there's color actually coming out of those shadows; they're not just lifted until the tree looks dead. It's simply a photo that looks better overall, and it stays true to that contouring that was started by da Vinci and Caravaggio. These raw photos really highlight what's possible with these new, larger sensors. If I can edit a photo to look like this, the phone should be able to do it automatically. The phones are still processing as if we weren't capturing enough light, and yes, these larger sensors are still much smaller than traditional large camera sensors, but they're so close to the point where these raws look only slightly different from a photo I'd take with a dedicated DSLR. And I get that most people just want to point, shoot, and be done with it; most people don't want to shoot raw on their phone and edit it to this granular degree before sharing anything.
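For what it's worth, the kind of edit I'm describing isn't exotic. Here is a minimal sketch of developing a phone DNG into a linear image, lifting the exposure, and applying a gentle highlight rolloff, assuming the rawpy and Pillow libraries and a hypothetical file name; it's a stand-in for what I do by hand in a raw editor, not the phone's actual pipeline.

```python
import numpy as np
import rawpy                      # pip install rawpy
from PIL import Image

# "pixel6_portrait.dng" is a hypothetical file name for a RAW+JPEG capture.
with rawpy.imread("pixel6_portrait.dng") as raw:
    # Demosaic to linear 16-bit RGB using the camera's white balance,
    # with no auto-brightening and no tone curve applied.
    linear = raw.postprocess(
        use_camera_wb=True,
        no_auto_bright=True,
        gamma=(1, 1),
        output_bps=16,
    ).astype(np.float64) / 65535.0

exposure_stops = 1.0  # lift the whole image by about one stop (assumed taste)
lifted = np.clip(linear * (2.0 ** exposure_stops), 0.0, 1.0)

def tone_curve(x: np.ndarray) -> np.ndarray:
    """A gentle Reinhard-style rolloff: shadows stay close to linear (keeping
    the chiaroscuro) while highlights compress softly instead of clipping."""
    return x * (1.0 + x / 0.8) / (1.0 + x)

toned = np.clip(tone_curve(lifted), 0.0, 1.0)

# Encode to approximate display gamma and save an 8-bit preview.
preview = (np.power(toned, 1.0 / 2.2) * 255.0).round().astype(np.uint8)
Image.fromarray(preview).save("pixel6_portrait_edited.png")
```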
I totally get it. What I'm saying is that, at this point, phones should be able to do this automatically, because there's so much more information in these photos now. Interestingly enough, Apple announced something with the iPhone 13 this year that I don't think gets enough press: they call it Photographic Styles. On the surface it looks like a filter you'd put on an Instagram story to make it look a little different, but it's very different; it actually changes the way the photo is processed at the point of capture rather than sitting on top as a filter. Right now, Photographic Styles only gives you a couple of settings to play with, tone and warmth, which basically means you can make the photo more or less contrasty, or warmer or cooler. There are a few presets to choose from, and you set them when you first open the camera app, which means every photo you take is processed with that preset, or with whatever individual adjustments you made yourself. It's a super simple way to get a more individualized, more modern rendering without having to edit the raw yourself, although they do let you edit raws if you wish.
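I have no idea what Apple's actual math looks like, but conceptually a "style" is just a small set of rendering parameters that gets baked into processing on every capture. Here is a toy sketch of that idea, assuming numpy and an already-decoded RGB image; the Style class, its parameter names, and the constants are all invented for illustration.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Style:
    """A toy 'photographic style': invented parameters, not Apple's."""
    tone: float    # -1.0 (flat) .. +1.0 (contrasty)
    warmth: float  # -1.0 (cool) .. +1.0 (warm)

def apply_style(img: np.ndarray, style: Style) -> np.ndarray:
    """img: float RGB in 0..1. Applies a contrast curve and a warm/cool tint."""
    out = img.astype(np.float64)

    # Tone: blend toward a smoothstep S-curve for positive values,
    # or toward a flattened midtone ramp for negative values.
    s_curve = out * out * (3.0 - 2.0 * out)
    amount = abs(style.tone)
    target = s_curve if style.tone >= 0 else 0.5 + 0.6 * (out - 0.5)
    out = (1.0 - amount) * out + amount * target

    # Warmth: nudge the red channel up and the blue channel down, or vice versa.
    tint = 1.0 + 0.08 * style.warmth
    out[..., 0] *= tint
    out[..., 2] /= tint
    return np.clip(out, 0.0, 1.0)

# The point is that the preset is chosen once and then applied to every
# capture, e.g. a "rich contrast"-ish look:
rich_contrast = Style(tone=+0.7, warmth=+0.1)
frame = np.random.default_rng(2).random((4, 4, 3))  # stand-in for a photo
print(apply_style(frame, rich_contrast).shape)
```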
Ironically, the Rich Contrast style has been the most popular since the iPhone 13 came out, and it gives a look very similar to what the original Pixel 1 had when it first launched: this really nice chiaroscuro across the whole image. The iPhone 13's photos aren't perfect either; it still processes a lot of your shots as if those sensors were smaller ones, like the iPhone 12's. But at least Photographic Styles gives you a small taste of what's possible with these larger sensors, with a lot more dynamic range and a lot more tonal granularity, once you're allowed to adjust the rendering itself. And even then, I still love using a dedicated camera, if only because it changes the way I feel when I take a picture; it inspires me in a whole different way. But we're getting so close to the point where what these phone cameras can do is almost on par with dedicated DSLRs and mirrorless cameras, and I think that's really exciting.

Anyway, if you stayed this long, thanks. I know this was a longer video, but the original version of this script was about twice as long, because it had a detailed explanation of how cameras work at a very fundamental level. I'm thinking about turning that into a separate video, so if that's something you'd want to see, let me know. Also, thanks to Nerdwriter, who inspired this video in many ways; he has some really amazing videos on Caravaggio and da Vinci that you should definitely go check out, so I'll add them in the description below. I'll make another video at some point, once I figure out what I want to say, but I'm trying not to rush it; I only post when something comes to mind that I really want to talk about. Otherwise it's not a big deal. I have a full-time job and all that kind of stuff, so sorry if the content isn't super consistent.