[I’m sorry for the blank few days: there was some marking, I was ill, then there was a man wanting something written fast, then another, then a lecture to plan and write and oh yeah, paid work too. However, I am briefly caught up with blogs and this one has already had to be updated once in its draft state, so, have ye at it and more will shortly follow…]
Obviously, with my main job, I do a lot of squinting at inscriptions. We love digital images because they can be enlarged, but the problem with them is that you're stuck with the same image and lighting unless you reshoot. The surfaces are always revealed or shadowed in the same way per image, even if you rotate it. As often as not, the first thing I do when trying to read a coin is to take it over to the window of our room and look at it under natural light, turning and tipping it to catch different angles. You can't do that artificially. Until now.
A technique called Polynomial Texture Mapping, originally developed at Hewlett-Packard Laboratories, is being used by the Oriental Institute of the University of Chicago and the West Semitic Research Project of the University of Southern California to examine an under-exploited cache of Aramaic tablets from about 500 B.C.E., found at the Persian fortress of Persepolis in 1933. They're using a variety of techniques to look at these things, including UV and IR imaging, and learning a great deal, as you can see in this article on the University of Chicago website to which I was directed by this post at Michelle Moran's History Buff, but I was most struck by the possibilities of this scanning technique, which they are justly proud of:
The Polynomial Texture Mapping apparatus looks a bit like a small astronomical observatory, with a cylindrical base topped by a hemispherical dome. The camera takes a set of 32 pictures of each side of the tablet, with each shot lit with a different combination of 32 lights set in the dome. After post-processing, the PTM software application knits these images together to allow a viewer sitting at a computer to manipulate the apparent direction, angle and intensity of the light on the object, and to introduce various effects to help with visualization of the surface.
“This means that the scholar isn’t completely dependent on the photographer for what he sees anymore,” said Bruce Zuckerman, Director of the West Semitic Research Project and its online presence, InscriptiFact. “The scholar can pull up an image on the screen and relight an object exactly as he wants to see it. He can look at different parts of the image with different lighting, to cast light and shadow across even the faintest, shallowest marks of a stylus or pen on the surface, and across every detail of a seal impression.”
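For the technically curious, the trick behind that relighting is easier to see in miniature. The classic PTM model fits, for every pixel, a small biquadratic polynomial predicting brightness from the light direction, using the stack of differently-lit captures; relighting is then just evaluating that polynomial for whatever light position the viewer chooses. What follows is a minimal sketch of that idea in Python with NumPy, not the project's actual software; the function names and array layout are my own invention, and I assume the light directions are known (lu, lv) projections of each unit light vector:

```python
# Minimal per-pixel Polynomial Texture Mapping sketch.
# Model: L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5,
# fitted independently for each pixel by least squares. With 32 lights
# there are far more observations than the six unknowns per pixel.
import numpy as np

def fit_ptm(intensities, light_dirs):
    """intensities: (N, H, W) stack of grayscale captures.
    light_dirs: (N, 2) array of (lu, lv) per capture.
    Returns an (H, W, 6) array of polynomial coefficients."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Design matrix: one row per capture, six polynomial basis terms.
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    n, h, w = intensities.shape
    obs = intensities.reshape(n, -1)            # (N, H*W)
    coeffs, *_ = np.linalg.lstsq(A, obs, rcond=None)
    return coeffs.T.reshape(h, w, 6)

def relight(coeffs, lu, lv):
    """Evaluate the fitted polynomials for a new light direction,
    producing an (H, W) relit image."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return coeffs @ basis
```

Once fitted, `relight(coeffs, 0.3, -0.2)` gives the object as if lit from one oblique angle and `relight(coeffs, -0.5, 0.1)` from another, which is exactly the turning-it-in-the-window trick done in software.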
I mean, obviously, if you've actually got the object, there will still be some things best done by the human eye, but if you haven't, this might reduce that set of things to a very small number. I guess the files would be huge and the software rare, but I hope they try to tackle that, as well as using the technique to deepen readings of things on site, however important that may be. This is a tool to make sources more accessible, as well as everything else, if they want it to be. And it looks as if they do:
By 2010, the collaborating teams expect to have high-quality images of 5,000 to 6,000 Persepolis tablets and fragments, and to supplement these with conventional digital images of another 7,000 to 8,000 tablets and fragments. The images will be distributed online as they are processed, along with cataloging and editorial information.
“Thanks to electronic media, we don’t have to cut the parts of the archive up and distribute the pieces among academic specialties,” said Matthew Stolper, the Oriental Institute scholar directing the project. “We can combine the work of specialists in a way that lets us see the archive as it really was, in its original complexity, as one big thing with many distinct parts.”
Addendum: Michelle also now links to this article in the San Francisco Chronicle’s SFGate talking about a few of the actual things that have been learnt by applying this technique to obscure inscriptions. Some of it sounds marvellous material…