The latest meeting of ISO Technical Committee 42, Working Group 18 Digital Photography began today in Tokyo. I’m not travelling there this year, so I’m attending remotely, using the Webex conference app. Thankfully Tokyo is only two hours west of my time zone, so the 9-5 meeting is from 11am to 7pm here.
The first half of the first day is administrative stuff. I introduced a new attendee from Australia, who is actually the professor teaching the Data Engineering and Image Processing courses that I tutor for at the university. He is of course an imaging expert too, and he's taking over from me as chair of the Standards Australia national committee on photography, since I've reached the term limit as chair. I'm still going to be attending the international meetings and writing reports, but he'll be chairing our national meetings.
One interesting thing from the admin session was a liaison report from CIPA, the Japanese camera manufacturers' association. They are the body that defines the Exif standard for tagging photo image files with metadata: the time and date the photo was taken, latitude and longitude (if the camera has GPS, such as a phone camera), photographic data such as exposure time, aperture, focal length, and so on. There are many other tags for things like the photographer's name and copyright information, and more technical things like the colour space. Anyway, they are working on new tags in a revision, including tags to specify image processing methods and the intent of the person doing the processing – that is, whether the image is being processed for HDR display, SDR display, printing, projection, or something else, since you may want a different version for each of those.
And a second new tag is very interesting. They are adding a tag to indicate whether the creator of the photo wants to allow it to be used to train machine learning/AI systems or not. The idea is that future AI training systems would (or should) check for this tag in all images they are fed, reject any images tagged by the creator as "not to be used for training AI", and only use those images where the creator has given permission. This does require the AI developers to implement and respect the tag, but at least from our side, the photograph creation side, we've provided a method for them to actually check. The idea would be that you'd configure this in your camera and it would write it to all your photos, or after uploading to your computer you could adjust the tag with software on a case-by-case basis. It also adds a piece of evidence that you can use to say "I explicitly did not give permission for this photo to be used to train AI".
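To make the idea concrete, here's a minimal sketch of how a training pipeline might honour such a tag. The tag name and values below are hypothetical placeholders, since the CIPA revision is still in progress and no official tag ID or value set has been published; the Exif metadata is represented as a plain dictionary rather than parsed from a real file.

```python
# Hypothetical sketch: an ingestion filter that only accepts images
# whose creator has explicitly granted AI-training permission.
# "AITrainingPermission" and its values are invented for illustration;
# the real tag in the forthcoming Exif revision may differ.

AI_TRAINING_TAG = "AITrainingPermission"  # hypothetical tag name

def may_use_for_training(exif: dict) -> bool:
    """Return True only if the creator has explicitly allowed training.

    An absent tag is treated as no permission, matching the idea that
    systems should only use images where permission was given.
    """
    return exif.get(AI_TRAINING_TAG) == "allowed"

photos = [
    {"DateTimeOriginal": "2024:02:05 09:12:00", AI_TRAINING_TAG: "allowed"},
    {"DateTimeOriginal": "2024:02:05 09:15:00", AI_TRAINING_TAG: "denied"},
    {"DateTimeOriginal": "2024:02:05 09:20:00"},  # tag absent
]

usable = [p for p in photos if may_use_for_training(p)]
print(len(usable))  # prints 1
```

The strict default here (no tag means no permission) is one possible policy; a standard could equally define the absence of the tag as "unspecified", which is exactly the sort of detail the committee work would need to pin down.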
After the lunch break (2pm to 3:30 for me), we started on the technical sessions. We talked about angle-dependent image flare, low light performance with hand shake (i.e. an unsteady hand-held camera with long exposure times), vocabulary (a standard defining technical terms), and machine vision performance.
The low light presentation had a comparison between performance thresholds for artistic photography and security camera purposes. For the former, any perceivable drop in image quality is important, whereas for the security application, a drop in image quality doesn’t matter until it starts to make it difficult to identify people, car licence plates, etc. So they were quantifying the differences.
And in the vocabulary discussion we talked about defining the term "photography". One comment on the draft document suggested that everyone knows what photography is, so we don't need to define it. But then a person in the meeting pointed out that we are the international standards committee on photography, so if anyone has a mandate to define what it is, it should be us.
The meeting ended a little early for the day, after which I made dinner for my wife and myself, and then we took Scully out for a walk. The evening air with the sun down was still warm and humid, but a breeze had picked up, which made it bearable.