Event

New Year's Wishes from the Director and Lecture by Gérard Berry of the Collège de France

The Inria Lille - Nord Europe center welcomes Gérard Berry of the Collège de France for a lecture on digital photography. The lecture will be followed by a seminar by Stéphane Huot, head of the Mjolnir team, and then by a cocktail reception during which Isabelle Herlin, director of the Inria Lille - Nord Europe center, will present her New Year's wishes. The event will be held in French. Registration is free but mandatory.

  • Date: 31/01/2018
  • Venue: Lilliad, campus of the University of Lille - Sciences and Technologies, 2 avenue Jean Perrin, Villeneuve d'Ascq

Program

15:30: Welcome

15:45: Introduction by Isabelle Herlin

16:00 - 17:30: Lecture by Gérard Berry, "Digital photography, a perfect example of the power of computer science"

17:30 - 18:30: Seminar by Stéphane Huot, "Human-computer interaction: past tense and simple future... or the reverse"

18:30 - 18:45: Questions

19:00 - 20:30: Cocktail reception


Lecture by Gérard Berry

Short bio: Gérard Berry is a computer scientist and professor at the Collège de France, where he holds the chair in Algorithms, Machines and Languages.



  • Abstract

The digital camera is an excellent example of the current evolution of cyber-physical systems, i.e. systems closely combining mechanics, physics, electronics and software. It is also a marvelous example - and one that is accessible to all - of the power of computer science methods compared to those of physics and mechanics alone. The lecture will present the wide array of algorithms embedded in modern cameras and in post-production software programs, and will then discuss the significant impact they have on the design of the cameras and lenses, which is currently changing drastically, and their impact on professional or amateur photographers.

Silver halide photography, a very old technique, progressed only slowly during the 20th century: it took decades to achieve gradual improvements in films and papers, the introduction of automatic exposure computed analogically from photoelectric cells, and the rangefinder or reflex viewfinder. Conversely, beginning with the commercialization of the first digital camera in 1990, digital photography has evolved extremely quickly. By 2003, decent semi-professional cameras could already be found and, from 2009 onwards, there were high-quality reflex cameras at an affordable price. A wide variety of cameras of various sizes now exists, all capable of producing high-quality images. Even phones have become excellent photo and video cameras, mainly thanks to the algorithms they apply. Since they are also capable of doing many other things, for example immediately sending pictures over the Internet, they are taking the place of the old compact cameras, serving as the sole piece of equipment for occasional photos - and for all photos in countries where silver halide photography was prohibitively expensive for most inhabitants. The logic of digital photography has, as a result, become very different from that of silver halide photography; this does not mean, however, that the latter is not still favored by certain artists.

What made this revolution possible, and why was it so fast? There are three main reasons: the design by physicists and the large-scale industrial manufacturing of high-quality sensors; the considerable increase in the power and reduction in the energy consumption of embedded computers, thanks to the famous Moore's law; and finally, above all, the continual improvement of photography algorithms, which in fact play a more important role than the sensors. Over the last 15 years we have gained at least four stops of sensitivity, three-quarters of which are due to algorithms. Even cameras with relatively small sensors can take very high-quality photos at ISO 3200, which was completely impossible with silver halide photography.
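To make the order of magnitude concrete, here is a hedged back-of-the-envelope reading of that claim, assuming one "stop" of sensitivity means a doubling of usable ISO and taking a film-era ISO 200 baseline purely for illustration:

    % A gain of four stops corresponds to a factor of 2^4 = 16
    % in usable sensitivity. From a hypothetical ISO 200 baseline:
    % 200 x 2^4 = 3200, matching the ISO 3200 figure quoted above.
    \[
      2^{4} = 16, \qquad 200 \times 2^{4} = 3200 .
    \]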

Firstly, the lecture will explain the succession of subtle algorithmic transformations that develop the sensor's raw digital data into the final image, managing light, sharpness and noise as well as possible. It will then study the algorithms dedicated to the automatic correction of the various optical flaws of lenses; it will show that the power of these algorithms means that lenses are no longer designed as before: henceforth their design fully combines physics and algorithmics, yielding better-quality optics that are smaller, lighter and cheaper. It will pay particular attention to the growing importance of new processes based on merging successive shots to improve quality according to various objectives (light, noise, depth of field, etc.), in particular in phones. It will show why building new cameras directly on algorithms is increasingly reshaping the core of their design, so that many more surprising innovations can be expected. Similar developments are just as drastically changing medical and astronomical imaging.
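As a rough illustration of the multi-shot merging mentioned above (a minimal sketch, not the pipeline discussed in the lecture), the simplest such process averages a burst of already-aligned exposures of a static scene: with N frames carrying independent zero-mean noise, the average cuts the noise by a factor of the square root of N. The function name and the pre-aligned-frames assumption are illustrative.

    import numpy as np

    def fuse_burst(frames):
        """Average a burst of aligned exposures to reduce noise.

        With N frames carrying independent zero-mean noise of
        standard deviation sigma, the average has noise
        sigma / sqrt(N). Frames are assumed already registered;
        real pipelines must align them first.
        """
        stack = np.stack([f.astype(np.float64) for f in frames])
        return stack.mean(axis=0)

    # Demo: a flat grey scene observed through Gaussian sensor noise.
    rng = np.random.default_rng(0)
    scene = np.full((64, 64), 128.0)
    burst = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(16)]

    print(np.std(burst[0] - scene))           # ~10: one noisy frame
    print(np.std(fuse_burst(burst) - scene))  # ~2.5: 10 / sqrt(16)

Real burst pipelines must first register the frames and handle motion, which is where most of the algorithmic subtlety lies; the averaging step itself is this simple.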

Finally, the lecture will underline the importance of the new algorithms dedicated to improving the ergonomics of shooting, which make the photographer's technical life much easier in almost every respect: well-designed human-computer interaction, stabilization of the sensor and the lens to eliminate camera shake, sophisticated management of light and focus, numerous shooting aids in the now-electronic viewfinder, and a direct link with computers and phones.

Seminar by Stéphane Huot

  • Abstract

Long before the arrival of personal computers, the Internet and smartphones, human-computer interaction (HCI) was already a central concern of some of the visions that shaped modern information technology, whether personal or professional. That said, the design and study of interaction is still often considered secondary in systems design, with priority given to developing functionality rather than the means of using it.
This situation has gradually improved, notably with the arrival of touch devices (smartphones and tablets) and entertainment devices (game consoles), for which ease of use has ousted raw power as the main selling point. This has obviously helped generalize and democratize access to technology. One consequence, however, is in our opinion a relative reduction in the possibilities offered by these technologies which, paradoxically, are more powerful than ever. By concealing complexity rather than helping to master it, and by perpetuating the myth that with these devices everyone can do a lot without any effort, the trend is to sacrifice the potential of the computing tool and user performance for quicker pick-up, without enabling more advanced, higher-performance - and perhaps more gratifying - use.
This balance between ease of use and the power of the tool is a difficult compromise to strike and, in our opinion, one of the major challenges and difficulties of HCI: observing and understanding the sensory, psychomotor, cognitive, social and technological phenomena at play in the interaction between people and systems, in order to improve that interaction and to guide its design towards empowering users. The ultimate aim is to enable them to do what they could not do without the tool, even if this requires some learning effort on their part.
In this seminar, we will start by presenting human-computer interaction as a research field, with its goals, methods and practices. Then, through a brief history of computer science seen from the angle of interaction, we will discuss several current innovations that stem from pioneering visions in the field, in particular with regard to this simplicity/power trade-off. We will also see, through examples and counter-examples from our current digital environments, as well as through recent research work, that these visions still carry many of HCI's present and future challenges. We will conclude by discussing the need to adopt an approach centered on users and interaction at a time of major scientific, technological and societal challenges in digital technology, such as the design of autonomous systems or the automatic processing and exploitation of data.

Keywords: ESTEREL language, human-computer interaction
