Information Retrieval for Handwritten Documents
The project consisted of developing a system supporting the search for information in handwritten documents.
A description of the project is here.
Report on the CATCH (Continuous Access To Cultural Heritage) meeting organised by SCRATCH - published in the BNVKI (Belgium-Netherlands Association for Artificial Intelligence) Newsletter.
Script Analysis Tools for the Cultural Heritage: statistics on queries and line matching - poster presented at SIREN 2006, Scientific Information and communication technology Research Event Netherlands.
Content-based text line comparison for historical document retrieval - presentation of an article at the Computational Phonology workshop of the Recent Advances in Natural Language Processing conference (RANLP-2007).
Script Analysis Tools for the Cultural Heritage: text line matching for historical handwritten document retrieval - poster presented at SIREN 2007, Scientific Information and communication technology Research Event Netherlands.
User study report - study of the information retrieval requests from the users of the Nationaal Archief.
SCRATCH: Script Analysis Tools for the Cultural Heritage: information retrieval for handwritten documents - poster presented at the NWO Midterm event for the CATCH (Continuous Access To Cultural Heritage) project.
Text-image alignment for historical handwritten documents - paper presented at the IS&T / SPIE 21st Annual Symposium on Electronic Imaging.
Creation of large-scale image ontology
An unresolved, general problem is recognizing objects in images. We propose exploiting written language resources and web-based image mining for building a large-scale visual dictionary.
The project involves using text analysis and lexical resources to identify objects that might be found in a picture, and then building a large visual dictionary of those objects by trawling image repositories on the Web. The postdoc would be expected to produce a system that constructs an image ontology covering tens of thousands of objects and comprising millions of images.
For example, from lexical resources or text mining, this system might identify that an "English Toy Spaniel" is a type of dog. This fact would be automatically included in the ontology and then the system would automatically gather images of that animal. In a further step, these images would be used to recognize an image of an "English Toy Spaniel". In such a way, a very large image ontology would be created.
The research involves identifying portrayable objects in text and extracting image signatures for each collection of objects. The benefits for multimedia understanding are vast, since we currently have no list of the objects that can be found in an image, and no large representative sets of images for those objects.
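As a toy illustration of the first step of this pipeline (the "is-a" facts, names, and structure below are invented for the example; the project itself would draw on real lexical resources and text mining):

```python
from collections import defaultdict

# Toy "is-a" facts, as text mining or a lexical resource might yield them.
ISA_FACTS = [
    ("English Toy Spaniel", "dog"),
    ("Beagle", "dog"),
    ("dog", "animal"),
    ("cat", "animal"),
]

def build_ontology(facts):
    """Index the more specific concepts under each more general one."""
    children = defaultdict(list)
    for hyponym, hypernym in facts:
        children[hypernym].append(hyponym)
    return children

def descendants(ontology, concept):
    """All portrayable objects below a concept, e.g. every kind of animal."""
    found, stack = [], list(ontology.get(concept, []))
    while stack:
        node = stack.pop()
        found.append(node)
        stack.extend(ontology.get(node, []))
    return found

ontology = build_ontology(ISA_FACTS)
print(sorted(descendants(ontology, "animal")))
# Each term collected this way would then seed a Web image search,
# populating the visual dictionary with example images of that object.
```

At the scale envisaged here, the same traversal would run over tens of thousands of concepts rather than a handful.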
Here is a presentation of the project.
The subject of my thesis:
Interpolation and resampling of three-dimensional data and its applications to urban cartography and to the determination of the cosmic microwave background.
Interpolation methods for data irregularly distributed in space have attracted a lot of interest during the last few years. In many problems, well-established methods based on Shannon's theorem, or on its generalized form, have been replaced by techniques that implicitly or explicitly use models of the data. These methods yield a better quality of interpolation, which is often accompanied by better resolution and by a more accurate solution.
We review these new interpolation techniques in two applications.
Firstly, three-dimensional modeling of urban areas, for which complex but bounded models are used: flat roofs, stacked superstructures (lift cages, chimneys, dormer windows, etc.). The objective of resampling is to convert data expressed as reference points provided by a sensor (e.g. a high-resolution image, a digital terrain model, etc.) into georeferenced points (for example, cartographic reference points).
Secondly, the determination of the primordial cosmological background. The data come from a sensor embedded in a satellite with a complex movement; the sensor provides a celestial map sampled in a random manner.
The wide range of interpolation techniques available at present narrows considerably when one wants to interpolate values that are initially distributed on an irregular grid. This is the topic we consider in the thesis. Moreover, interpolating a signal that is nonbandlimited and whose measurements are scattered in space is not a trivial problem and requires further study. We address this problem while working with airborne laser scanning data acquired over urban areas. Since urban areas typically consist of streets and buildings, the edges of the buildings form discontinuities, so the signal to interpolate is nonbandlimited. The simple techniques that preserve discontinuities (nearest neighbor, linear interpolation) lead to distortions at the edges of buildings. The proposed approach is therefore based on cost-function minimization and allows regularization at the edges. We adapt the approach to scattered data, and we also study the role of edge-preserving potential functions for surfaces representing urban areas. The tests are done on a synthetic model as well as on two real data sets: airborne laser scanning data over the city of Brussels (Belgium) and over Amiens (France).
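A rough one-dimensional sketch of the cost-function idea (not the thesis implementation; the Huber-like potential, the parameters lam and delta, and the plain gradient descent are illustrative choices): fit a signal to scattered samples with a data-fidelity term plus an edge-preserving prior, so that a building edge survives instead of being smoothed away.

```python
import numpy as np

def huber_grad(t, delta):
    """Gradient of a Huber-like potential: quadratic near 0, linear beyond
    delta, so large jumps (edges) are penalized less than by a quadratic."""
    return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

def reconstruct(n, sample_idx, z, lam=0.1, delta=0.5, steps=2000, lr=0.2):
    """Minimize  sum_i (u_i - z_i)^2 / 2  +  lam * sum Huber(u_{i+1} - u_i)
    over a regular grid of n points by plain gradient descent."""
    z = np.asarray(z, dtype=float)
    u = np.zeros(n)
    u[sample_idx] = z                       # warm start at the data
    mask = np.zeros(n, dtype=bool)
    mask[sample_idx] = True
    for _ in range(steps):
        grad = np.zeros(n)
        grad[mask] = u[mask] - z            # data-fidelity term
        g = huber_grad(np.diff(u), delta)   # edge-preserving prior term
        grad[:-1] -= lam * g
        grad[1:] += lam * g
        u -= lr * grad
    return u

# Scattered samples from a step profile (a building edge between indices 4 and 5):
sample_idx = np.array([0, 2, 4, 5, 7, 9])
z = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
u = reconstruct(10, sample_idx, z)
# The jump between indices 4 and 5 is preserved by the robust potential.
```

A purely quadratic prior on the differences would instead blur the step across several grid points, which is exactly the distortion at building edges that the edge-preserving potentials are meant to avoid.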
The second part of the thesis is devoted to the interpolation of cosmic microwave background (CMB) anisotropies. Acquiring and processing this kind of data has been an area of great scientific interest during the last decades. The CMB data lie on an irregular grid, and the surface to interpolate can be considered smooth as long as no deconvolution is made. Since these data have some well-established statistical properties, we apply kriging - a geostatistical method - to interpolate them. Other methods tried on the CMB data are binning, linear interpolation, and nearest-neighbor interpolation. The performance of these methods is evaluated on simulated data, with and without noise. These experiments are devoted to the preparation of the Planck satellite mission, to be launched in 2007 by the European Space Agency. This mission will give full-sky coverage with the highest accuracy ever achieved.
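For illustration, here is a minimal ordinary-kriging sketch in one dimension; the Gaussian covariance model, its parameters, and the sample values are assumptions made for the example, not the model fitted to the CMB data.

```python
import numpy as np

def cov(h, sill=1.0, rng=1.0):
    """Assumed Gaussian covariance model as a function of the distance h."""
    return sill * np.exp(-(h / rng) ** 2)

def ordinary_kriging(xs, zs, x0):
    """Predict the field at x0 from scattered samples (xs, zs) by solving
    the ordinary-kriging system: sample covariances plus the unbiasedness
    constraint that the weights sum to one (Lagrange multiplier row)."""
    n = len(xs)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(np.abs(xs[:, None] - xs[None, :]))
    K[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = cov(np.abs(xs - x0))
    weights = np.linalg.solve(K, rhs)[:n]
    return weights @ zs

xs = np.array([0.0, 0.7, 2.0])   # irregular sample positions (illustrative)
zs = np.array([0.0, 1.0, 4.0])   # observed values
# Without a nugget effect, kriging interpolates the samples exactly:
print(ordinary_kriging(xs, zs, 0.7))
```

Binning and nearest-neighbor interpolation ignore the covariance structure entirely, which is why a field with well-established statistical properties, like the CMB, is a natural candidate for kriging.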
The thesis can be downloaded here.
It is presented here.
Optimization of digital watermarking performance using different coding strategies
With the ever-increasing development and use of digital technologies and digital data, protecting the intellectual property of digital data has become more and more important, and digital watermarking, which embeds copyright information into the data itself, has become indispensable. One of the problems in digital watermarking for fixed images is deciding how to hide as many bits of information (the signature) as possible in an image while ensuring that the signature can be correctly retrieved at the detection stage, even after various image manipulations, including attacks. Error-correcting codes and repetition are the natural choices for correcting the errors that occur when extracting the signature. We have investigated different ways of applying error-correcting codes, repetition, and combinations of the two, given different image capacities and different error rates of the watermarking channel, in order to obtain the optimal selection for a given signature length. We present both qualitative and quantitative results. The goal of this work is to explore the application of coding methods to watermarking.
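The repetition strategy mentioned above can be sketched as follows; the signature, the repetition factor, and the flipped bit positions are all illustrative.

```python
def repeat(bits, r):
    """r-fold repetition: each signature bit is embedded r times."""
    return [b for b in bits for _ in range(r)]

def majority_decode(bits, r):
    """Recover each signature bit by majority vote over its r copies."""
    return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

signature = [1, 0, 1, 1, 0, 0, 1, 0]   # an illustrative 8-bit signature
codeword = repeat(signature, 5)
# The watermarking channel (manipulations, attacks) flips some embedded
# bits; here two flips land inside the first 5-bit block.
codeword[0] ^= 1
codeword[3] ^= 1
decoded = majority_decode(codeword, 5)
# Up to 2 errors per 5-bit block are corrected, so decoded == signature.
```

Error-correcting codes play the same role with a better rate, at the cost of less graceful failure when the channel error rate exceeds the code's correction capability.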
The case in which some bits of the signature are more important than others is also considered, and the corresponding theoretical results are presented.
An important question is to find a connection between the bit error rate, which is used in the theoretical calculations, and the JPEG compression rate. We present experimental results based on tests with 500 images.
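As a back-of-the-envelope illustration of the kind of theoretical calculation involved (the capacity, signature length, and channel bit error rate below are made-up numbers, not the internship's results), one can compare plain repetition against a Hamming(7,4) code combined with repetition at the same image capacity:

```python
from math import comb

def p_majority_error(p, r):
    """Residual bit error after r-fold repetition with majority vote
    (a tie, possible for even r, is pessimistically counted as an error)."""
    return sum(comb(r, k) * p**k * (1 - p)**(r - k)
               for k in range((r + 1) // 2, r + 1))

def p_hamming_block_error(p):
    """Hamming(7,4) corrects one error per 7-bit block; two or more lose it."""
    return 1 - (1 - p)**7 - 7 * p * (1 - p)**6

p = 0.05                 # assumed raw bit error rate of the watermarking channel
cap, sig = 7000, 1000    # assumed image capacity and signature length, in bits

# Option A: repetition only (7 copies of each of the 1000 signature bits).
ok_rep = (1 - p_majority_error(p, cap // sig)) ** sig
# Option B: Hamming(7,4) on the signature (250 blocks, 1750 coded bits),
# then 4-fold repetition of each coded bit, again filling 7000 bits.
ok_ham = (1 - p_hamming_block_error(p_majority_error(p, 4))) ** (sig // 4)

print(f"P(signature fully recovered): repetition {ok_rep:.3f}, "
      f"Hamming+repetition {ok_ham:.3f}")
```

Which combination wins depends on the capacity, the signature length, and the channel error rate, which is exactly the selection problem studied here.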
The results of the internship are presented here.