Publications iCoSys

Entries: 356

  • [PDF] A. L. Frei, A. Khan, P. Zens, A. Lugli, I. Zlobec, and A. Fischer, "GammaFocus: An image augmentation method to focus model attention for classification," in Medical Imaging with Deep Learning, short paper track, 2023.
    [Bibtex] [Abstract]
    @inproceedings{frei2023gammafocus,
    title = {GammaFocus: An image augmentation method to focus model attention for classification},
    author = {Ana Leni Frei and Amjad Khan and Philipp Zens and Alessandro Lugli and Inti Zlobec and Andreas Fischer},
    booktitle = {Medical Imaging with Deep Learning, short paper track},
    year = {2023},
    url = {https://openreview.net/forum?id=MCAgRjgh6v},
    abstract = {In histopathology, histologic elements are not randomly located across an image but organize into structured patterns. In this regard, classification tasks or feature extraction from histology images may require context information to increase performance. In this work, we explore the importance of keeping context information for a cell classification task on Hematoxylin and Eosin (H&E) scanned whole slide images (WSI) in colorectal cancer. We show that to differentiate normal from malignant epithelial cells, the environment around the cell plays a critical role. We propose here an image augmentation based on gamma variations to guide deep learning models to focus on the object of interest while keeping context information. This augmentation method yielded more specific models and helped to increase the model performance (weighted F1 score with/without gamma augmentation respectively, PanNuke: 99.49 vs 99.37 and TCGA: 91.38 vs. 89.12, p<0.05). }
    }

    In histopathology, histologic elements are not randomly located across an image but organize into structured patterns. In this regard, classification tasks or feature extraction from histology images may require context information to increase performance. In this work, we explore the importance of keeping context information for a cell classification task on Hematoxylin and Eosin (H&E) scanned whole slide images (WSI) in colorectal cancer. We show that to differentiate normal from malignant epithelial cells, the environment around the cell plays a critical role. We propose here an image augmentation based on gamma variations to guide deep learning models to focus on the object of interest while keeping context information. This augmentation method yielded more specific models and helped to increase the model performance (weighted F1 score with/without gamma augmentation respectively, PanNuke: 99.49 vs 99.37 and TCGA: 91.38 vs. 89.12, p<0.05).
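
    The core idea described above lends itself to a compact illustration. Below is a minimal numpy sketch of a gamma-based focus augmentation, assuming the object of interest is given as a binary mask (the function name and parameters are ours, not the paper's): the surroundings are gamma-shifted while the object keeps its original intensities, so context is preserved but attention is drawn to the unmodified region.

    import numpy as np

    def gamma_focus(image, mask, gamma_range=(0.5, 1.5), rng=None):
        """Randomly gamma-shift everything outside the object mask.

        image: float array in [0, 1], shape (H, W) or (H, W, C)
        mask:  bool array (H, W), True on the object of interest
        """
        rng = rng or np.random.default_rng()
        gamma = rng.uniform(*gamma_range)
        shifted = np.power(image, gamma)               # gamma-shifted copy
        m = mask[..., None] if image.ndim == 3 else mask
        return np.where(m, image, shifted)             # object stays untouched

    # toy usage: a 64x64 grayscale patch with a circular "cell" mask
    img = np.random.rand(64, 64)
    yy, xx = np.mgrid[:64, :64]
    cell = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
    augmented = gamma_focus(img, cell)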

  • [PDF] A. L. Frei, A. Khan, L. Studer, P. Zens, A. Lugli, A. Fischer, and I. Zlobec, "Local and global feature aggregation for accurate epithelial cell classification using graph attention mechanisms in histopathology images," in Medical Imaging with Deep Learning, short paper track, 2023.
    [Bibtex] [Abstract]
    @inproceedings{frei2023local,
    url = {https://openreview.net/forum?id=HlkroJOY-J},
    title = {Local and global feature aggregation for accurate epithelial cell classification using graph attention mechanisms in histopathology images},
    author = {Frei, Ana Leni and Khan, Amjad and Studer, Linda and Zens, Philipp and Lugli, Alessandro and Fischer, Andreas and Zlobec, Inti},
    booktitle = {Medical Imaging with Deep Learning, short paper track},
    year = {2023},
    abstract = {In digital pathology, cell-level tissue analyses are widely used to better understand tissue composition and structure. Publicly available datasets and models for cell detection and classification in colorectal cancer exist but lack the differentiation of normal and malignant epithelial cells that are important to perform prior to any downstream cell-based analysis. This classification task is particularly difficult due to the high intra-class variability of neoplastic cells. To tackle this, we present here a new method that uses graph-based node classification to take advantage of both local cell features and global tissue architecture to perform accurate epithelial cell classification. The proposed method demonstrated excellent performance on F1 score (PanNuke: 1.0, TCGA: 0.98) and performed significantly better than conventional computer vision methods (PanNuke: 0.99, TCGA: 0.92).}
    }

    In digital pathology, cell-level tissue analyses are widely used to better understand tissue composition and structure. Publicly available datasets and models for cell detection and classification in colorectal cancer exist but lack the differentiation of normal and malignant epithelial cells, which is important prior to any downstream cell-based analysis. This classification task is particularly difficult due to the high intra-class variability of neoplastic cells. To tackle this, we present a new method that uses graph-based node classification to take advantage of both local cell features and global tissue architecture to perform accurate epithelial cell classification. The proposed method demonstrated excellent performance on F1 score (PanNuke: 1.0, TCGA: 0.98) and performed significantly better than conventional computer vision methods (PanNuke: 0.99, TCGA: 0.92).
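
    As an illustration of graph-based node classification with attention (a generic sketch using PyTorch Geometric, not the authors' published architecture), each cell is a node whose features describe its local appearance, edges connect neighbouring cells, and attention layers aggregate tissue context:

    import torch
    from torch_geometric.nn import GATConv

    class CellGraphGAT(torch.nn.Module):
        """Two-layer graph attention network for node (cell) classification."""
        def __init__(self, in_dim, hidden=64, classes=2, heads=4):
            super().__init__()
            self.g1 = GATConv(in_dim, hidden, heads=heads)
            self.g2 = GATConv(hidden * heads, classes, heads=1)

        def forward(self, x, edge_index):
            h = torch.relu(self.g1(x, edge_index))
            return self.g2(h, edge_index)      # per-node class logits

    # toy usage: 5 cells, 16 features each, a small neighbourhood graph
    x = torch.randn(5, 16)
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
    logits = CellGraphGAT(16)(x, edge_index)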

  • [PDF] [DOI] M. Jungo, B. Wolf, A. Maksai, C. Musat, and A. Fischer, "Character Queries: A Transformer-Based Approach to On-line Handwritten Character Segmentation," in Document Analysis and Recognition - ICDAR 2023, Cham, 2023, p. 98–114.
    [Bibtex] [Abstract]
    @InProceedings{jungo23character,
    url = {https://doi.org/10.48550/arXiv.2309.03072},
    doi = {10.48550/arXiv.2309.03072},
    author = {Jungo, Michael and Wolf, Beat and Maksai, Andrii and Musat, Claudiu and Fischer, Andreas},
    editor = {Fink, Gernot A. and Jain, Rajiv and Kise, Koichi and Zanibbi, Richard},
    title = {Character Queries: A Transformer-Based Approach to On-line Handwritten Character Segmentation},
    booktitle = {Document Analysis and Recognition - ICDAR 2023},
    year = {2023},
    publisher = {Springer Nature Switzerland},
    address = {Cham},
    pages = {98--114},
    abstract = {On-line handwritten character segmentation is often associated with handwriting recognition and even though recognition models include mechanisms to locate relevant positions during the recognition process, it is typically insufficient to produce a precise segmentation. Decoupling the segmentation from the recognition unlocks the potential to further utilize the result of the recognition. We specifically focus on the scenario where the transcription is known beforehand, in which case the character segmentation becomes an assignment problem between sampling points of the stylus trajectory and characters in the text. Inspired by the k-means clustering algorithm, we view it from the perspective of cluster assignment and present a Transformer-based architecture where each cluster is formed based on a learned character query in the Transformer decoder block. In order to assess the quality of our approach, we create character segmentation ground truths for two popular on-line handwriting datasets, IAM-OnDB and HANDS-VNOnDB, and evaluate multiple methods on them, demonstrating that our approach achieves the overall best results.},
    isbn = {978-3-031-41676-7}
    }

    On-line handwritten character segmentation is often associated with handwriting recognition and even though recognition models include mechanisms to locate relevant positions during the recognition process, it is typically insufficient to produce a precise segmentation. Decoupling the segmentation from the recognition unlocks the potential to further utilize the result of the recognition. We specifically focus on the scenario where the transcription is known beforehand, in which case the character segmentation becomes an assignment problem between sampling points of the stylus trajectory and characters in the text. Inspired by the k-means clustering algorithm, we view it from the perspective of cluster assignment and present a Transformer-based architecture where each cluster is formed based on a learned character query in the Transformer decoder block. In order to assess the quality of our approach, we create character segmentation ground truths for two popular on-line handwriting datasets, IAM-OnDB and HANDS-VNOnDB, and evaluate multiple methods on them, demonstrating that our approach achieves the overall best results.
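
    A rough sketch of the cluster-assignment view, under our own simplifying assumptions (a generic PyTorch encoder/decoder, not the paper's exact model): one learned query per character of the known transcription is refined against the encoded stylus points, and each point joins the cluster of its best-matching query.

    import torch
    import torch.nn as nn

    class CharacterQuerySegmenter(nn.Module):
        def __init__(self, d_model=128, n_charset=100):
            super().__init__()
            self.point_enc = nn.Sequential(nn.Linear(3, d_model), nn.ReLU(),
                                           nn.Linear(d_model, d_model))
            self.char_queries = nn.Embedding(n_charset, d_model)
            layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
            self.decoder = nn.TransformerDecoder(layer, num_layers=2)

        def forward(self, points, transcription):
            # points: (B, N, 3) = (x, y, pen state); transcription: (B, T) char ids
            mem = self.point_enc(points)                   # encoded trajectory
            q = self.char_queries(transcription)           # one query per character
            q = self.decoder(q, mem)                       # refine queries on points
            scores = torch.einsum('bnd,btd->bnt', mem, q)  # point-to-character affinity
            return scores.argmax(-1)                       # cluster id per point

    seg = CharacterQuerySegmenter()
    assignment = seg(torch.randn(1, 50, 3), torch.randint(0, 100, (1, 4)))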

  • [PDF] [DOI] F. Montet, A. Pongelli, S. Schwab, M. Devaux, T. Jusselme, and J. Hennebert, "Energy Performance Certificate Estimation at Large Scale Based on Open Data," Journal of Physics: Conference Series, vol. 2600, iss. 3, p. 032009, 2023.
    [Bibtex] [Abstract]
    @article{montet23energy,
    doi = {10.1088/1742-6596/2600/3/032009},
    url = {https://dx.doi.org/10.1088/1742-6596/2600/3/032009},
    year = {2023},
    month = {nov},
    publisher = {IOP Publishing},
    volume = {2600},
    number = {3},
    pages = {032009},
    author = {Frédéric Montet and Alessandro Pongelli and Stefanie Schwab and Mylène Devaux and Thomas Jusselme and Jean Hennebert},
    title = {Energy Performance Certificate Estimation at Large Scale Based on Open Data},
    journal = {Journal of Physics: Conference Series},
    abstract = {This paper presents an innovative methodology for enhancing energy efficiency assessment procedures in the built environment, with a focus on the Switzerland's Energy Strategy 2050. The current methodology necessitates intensive expert surveys, leading to substantial time and cost implications. Also, such a process can't be scaled to a large number of buildings. Using machine learning techniques, the estimation process is augmented and exploit open data resources. Utilizing a robust dataset exceeding 70'000 energy performance certificates (CECB), the method devises a two-stage ML approach to forecast energy performance. The first phase involves data reconstruction from online repositories, while the second employs a regression algorithm to estimate the energy efficiency. The proposed approach addresses the limitations of existing machine learning methods by offering finer prediction granularity and incorporating readily available data. The results show a commendable degree of prediction accuracy, particularly for single-family residences. Despite this, the study reveals a demand for further granular data, and underlines privacy concerns associated with such data collection. In summary, this investigation provides a significant contribution to the enhancement of energy efficiency assessment methodologies and policy-making.}
    }

    This paper presents an innovative methodology for enhancing energy efficiency assessment procedures in the built environment, with a focus on Switzerland's Energy Strategy 2050. The current methodology necessitates intensive expert surveys, leading to substantial time and cost implications. Moreover, such a process cannot be scaled to a large number of buildings. Using machine learning techniques, the estimation process is augmented and exploits open data resources. Utilizing a robust dataset exceeding 70'000 energy performance certificates (CECB), the method devises a two-stage ML approach to forecast energy performance. The first phase involves data reconstruction from online repositories, while the second employs a regression algorithm to estimate the energy efficiency. The proposed approach addresses the limitations of existing machine learning methods by offering finer prediction granularity and incorporating readily available data. The results show a commendable degree of prediction accuracy, particularly for single-family residences. Despite this, the study reveals a demand for more granular data, and underlines privacy concerns associated with such data collection. In summary, this investigation provides a significant contribution to the enhancement of energy efficiency assessment methodologies and policy-making.
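
    The two-stage structure can be illustrated with a small scikit-learn pipeline on synthetic data (our sketch; the paper does not prescribe these estimators): stage one reconstructs missing attributes, stage two regresses the energy value.

    import numpy as np
    from sklearn.impute import KNNImputer
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    # synthetic building records (features are made up for illustration)
    X = rng.normal(size=(500, 3))
    y = X @ np.array([0.5, 1.5, -0.7]) + rng.normal(scale=0.1, size=500)
    X[rng.random(X.shape) < 0.2] = np.nan     # open data is incomplete

    model = make_pipeline(KNNImputer(n_neighbors=5),   # stage 1: reconstruction
                          GradientBoostingRegressor()) # stage 2: regression
    model.fit(X[:400], y[:400])
    print(model.score(X[400:], y[400:]))      # R^2 on held-out records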

  • [DOI] R. Plamondon, A. Bensalah, K. Lebel, R. Salameh, G. Séguin de Broin, C. O’Reilly, M. Begon, O. Desbiens, Y. Beloufa, A. Guy, D. Berio, F. F. Leymarie, S. Boyoguéno-Bidias, A. Fischer, Z. Zhang, M. Morin, D. Alamargot, C. Rémi, N. Faci, R. Fortin, M. Simard, and C. Bazinet, "Lognormality: An Open Window on Neuromotor Control," in Graphonomics in Human Body Movement. Bridging Research and Practice from Motor Control to Handwriting Analysis and Recognition, Cham, 2023, p. 205–258.
    [Bibtex] [Abstract]
    @inproceedings{plamondon2023lognormality,
    doi = {10.1007/978-3-031-45461-5_15},
    url = {https://doi.org/10.1007/978-3-031-45461-5_15},
    title = {Lognormality: An Open Window on Neuromotor Control},
    author = {Plamondon, R{\'e}jean and Bensalah, Asma and Lebel, Karina and Salameh, Romeo and S{\'e}guin de Broin, Guillaume and O’Reilly, Christian and Begon, Mickael and Desbiens, Olivier and Beloufa, Youssef and Guy, Aymeric and Berio, Daniel and Leymarie, Frederic Fol and Boyogu{\'e}no-Bidias, Simon-Pierre and Fischer, Andreas and Zhang, Zigeng and Morin, Marie-France and Alamargot, Denis and R{\'e}mi, C{\'e}line and Faci, Nadir and Fortin, Rapha{\"e}lle and Simard, Marie-No{\"e}lle and Bazinet, Caroline},
    editor = {Parziale, Antonio and Diaz, Moises and Melo, Filipe},
    booktitle = {Graphonomics in Human Body Movement. Bridging Research and Practice from Motor Control to Handwriting Analysis and Recognition},
    pages = {205--258},
    year = {2023},
    address = {Cham},
    publisher = {Springer Nature Switzerland},
    abstract = {This invited special session of IGS 2023 presents the works carried out at Laboratoire Scribens and some of its collaborating laboratories. It summarises the 17 talks presented in the colloquium {\#}611 entitled « La lognormalit{\'e}: une fen{\^e}tre ouverte sur le contr{\^o}le neuromoteur» (Lognormality: a window opened on neuromotor control), at the 2023 conference of the Association Francophone pour le Savoir (ACFAS) on May 10, 2023. These talks covered a wide range of subjects related to the Kinematic Theory, including key elements of the theory, some gesture analysis algorithms that have emerged from it, and its application to various fields, particularly in biomedical engineering and human-machine interaction.},
    isbn = {978-3-031-45461-5}
    }

    This invited special session of IGS 2023 presents the works carried out at Laboratoire Scribens and some of its collaborating laboratories. It summarises the 17 talks presented in the colloquium #611 entitled « La lognormalité : une fenêtre ouverte sur le contrôle neuromoteur » (Lognormality: a window opened on neuromotor control), at the 2023 conference of the Association Francophone pour le Savoir (ACFAS) on May 10, 2023. These talks covered a wide range of subjects related to the Kinematic Theory, including key elements of the theory, some gesture analysis algorithms that have emerged from it, and its application to various fields, particularly in biomedical engineering and human-machine interaction.

  • [PDF] [DOI] J. F. Rey, M. Cesari, M. Schoenenweid, F. Montet, M. Gandolla, L. Bonvin, V. Bourquin, C. L. Jacot, J. Roman, S. Duque Mahecha, S. Aguacil Moreno, J. Hennebert, and J. Goyette Pernot, "Autodigit-RAD: Towards an automation of the radon's concentration dataflow in a new and innovative building," Journal of Physics: Conference Series, vol. 2600, iss. 10, p. 102008, 2023.
    [Bibtex] [Abstract]
    @article{rey23autodigit,
    doi = {10.1088/1742-6596/2600/10/102008},
    url = {https://dx.doi.org/10.1088/1742-6596/2600/10/102008},
    year = {2023},
    month = {nov},
    publisher = {IOP Publishing},
    volume = {2600},
    number = {10},
    pages = {102008},
    author = {J F Rey and M Cesari and M Schoenenweid and F Montet and M Gandolla and L Bonvin and V Bourquin and C L Jacot and J Roman and S Duque Mahecha and S Aguacil Moreno and J Hennebert and J Goyette Pernot},
    title = {Autodigit-RAD: Towards an automation of the radon's concentration dataflow in a new and innovative building},
    journal = {Journal of Physics: Conference Series},
    abstract = {Radon is a noble, natural, and radioactive gas coming mainly from the ground which might accumulate indoors and lead each year to 200-300 deaths from lung cancer in Switzerland. A brand new and innovative living lab will be built as of 2023 in Fribourg (Switzerland) which will allow to tackle the built environment and the relationship with its occupants. Among a large panel of environmental parameters, radon gas will be continuously monitored under and around the building as well as in the building envelope. This paper aims to present the overall process of the radon dataflow: 1) design of the sensor probes, 2) implementation of the radon sensor probes in the ground and 3) go-live with the data sharing platform with the building users. Such an infrastructure will bring the opportunity to researchers to lead new and innovative radon-related research.}
    }

    Radon is a noble, natural, radioactive gas coming mainly from the ground; it can accumulate indoors and causes 200-300 lung cancer deaths each year in Switzerland. A brand new and innovative living lab will be built as of 2023 in Fribourg (Switzerland), which will make it possible to study the built environment and its relationship with its occupants. Among a large panel of environmental parameters, radon gas will be continuously monitored under and around the building as well as in the building envelope. This paper presents the overall process of the radon dataflow: 1) design of the sensor probes, 2) implementation of the radon sensor probes in the ground, and 3) go-live of the data sharing platform with the building users. Such an infrastructure will give researchers the opportunity to conduct new and innovative radon-related research.

  • [DOI] A. Scius-Bertrand, P. Ströbel, M. Volk, T. Hodel, and A. Fischer, "The Bullinger Dataset: A Writer Adaptation Challenge," in Document Analysis and Recognition - ICDAR 2023, Cham, 2023, p. 397–410.
    [Bibtex] [Abstract]
    @inproceedings{scius2023bullinger,
    doi = {10.1007/978-3-031-41676-7_23},
    url = {https://doi.org/10.1007/978-3-031-41676-7_23},
    title = {The Bullinger Dataset: A Writer Adaptation Challenge},
    author = {Scius-Bertrand, Anna and Str{\"o}bel, Phillip and Volk, Martin and Hodel, Tobias and Fischer, Andreas},
    editor = {Fink, Gernot A. and Jain, Rajiv and Kise, Koichi and Zanibbi, Richard},
    booktitle = {Document Analysis and Recognition - ICDAR 2023},
    pages = {397--410},
    year = {2023},
    address = {Cham},
    publisher = {Springer Nature Switzerland},
    abstract = {One of the main challenges of automatically transcribing large collections of handwritten letters is to cope with the high variability of writing styles present in the collection. In particular, the writing styles of non-frequent writers, who have contributed only few letters, are often missing in the annotated learning samples used for training handwriting recognition systems. In this paper, we introduce the Bullinger dataset for writer adaptation, which is based on the Heinrich Bullinger letter collection from the 16th century, using a subset of 3,622 annotated letters (about 1.2 million words) from 306 writers. We provide baseline results for handwriting recognition with modern recognizers, before and after the application of standard techniques for supervised adaptation of frequent writers and self-supervised adaptation of non-frequent writers.},
    isbn = {978-3-031-41676-7}
    }

    One of the main challenges of automatically transcribing large collections of handwritten letters is to cope with the high variability of writing styles present in the collection. In particular, the writing styles of non-frequent writers, who have contributed only a few letters, are often missing in the annotated learning samples used for training handwriting recognition systems. In this paper, we introduce the Bullinger dataset for writer adaptation, which is based on the Heinrich Bullinger letter collection from the 16th century, using a subset of 3,622 annotated letters (about 1.2 million words) from 306 writers. We provide baseline results for handwriting recognition with modern recognizers, before and after the application of standard techniques for supervised adaptation of frequent writers and self-supervised adaptation of non-frequent writers.

  • [PDF] [DOI] A. Scius-Bertrand, C. Rémi, E. Biabiany, J. Nagau, and A. Fischer, "Towards Visuo-Structural Handwriting Evaluation Based on Graph Matching," in Graphonomics in Human Body Movement. Bridging Research and Practice from Motor Control to Handwriting Analysis and Recognition, Cham, 2023, p. 75–88.
    [Bibtex] [Abstract]
    @inproceedings{scius2023towards,
    doi = {10.1007/978-3-031-45461-5_6},
    url = {https://doi.org/10.1007/978-3-031-45461-5_6},
    author= {Scius-Bertrand, Anna and R{\'e}mi, C{\'e}line and Biabiany, Emmanuel and Nagau, Jimmy and Fischer, Andreas},
    editor = {Parziale, Antonio and Diaz, Moises and Melo, Filipe},
    title = {Towards Visuo-Structural Handwriting Evaluation Based on Graph Matching},
    booktitle = {Graphonomics in Human Body Movement. Bridging Research and Practice from Motor Control to Handwriting Analysis and Recognition},
    year = {2023},
    publisher = {Springer Nature Switzerland},
    address = {Cham},
    pages = {75--88},
    abstract = {Judging the quality of handwriting based on visuo-structural criteria is fundamental for teachers when accompanying children who are learning to write. Automatic methods for quality assessment can support teachers when dealing with a large number of handwritings, in order to identify children who are having difficulties. In this paper, we investigate the potential of graph-based handwriting representation and graph matching to capture visuo-structural features and determine the legibility of cursive handwriting. On a comprehensive dataset of words written by children aged from 3 to 11 years, we compare the judgment of human experts with a graph-based analysis, both with respect to classification and clustering. The results are promising and highlight the potential of graph-based methods for handwriting evaluation.},
    isbn = {978-3-031-45461-5}
    }

    Judging the quality of handwriting based on visuo-structural criteria is fundamental for teachers when accompanying children who are learning to write. Automatic methods for quality assessment can support teachers when dealing with a large number of handwritings, in order to identify children who are having difficulties. In this paper, we investigate the potential of graph-based handwriting representation and graph matching to capture visuo-structural features and determine the legibility of cursive handwriting. On a comprehensive dataset of words written by children aged from 3 to 11 years, we compare the judgment of human experts with a graph-based analysis, both with respect to classification and clustering. The results are promising and highlight the potential of graph-based methods for handwriting evaluation.
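
    A toy illustration of the graph-based view (our construction, not the paper's exact graph extraction): words become skeleton-like graphs whose nodes are stroke keypoints and whose edges are stroke segments, and graph edit distance then acts as a visuo-structural dissimilarity, e.g. for nearest-neighbour legibility rating against prototype words.

    import networkx as nx

    def word_graph(n_points, strokes):
        # nodes are stroke keypoints, edges are stroke segments
        g = nx.Graph()
        g.add_nodes_from(range(n_points))
        g.add_edges_from(strokes)
        return g

    prototype = word_graph(3, [(0, 1), (1, 2)])          # a legible example
    query = word_graph(4, [(0, 1), (1, 2), (2, 3)])      # word to be rated

    # smaller edit distance = structurally closer to the legible prototype
    print(nx.graph_edit_distance(query, prototype))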

  • [PDF] [DOI] A. Scius-Bertrand, M. Bui, and A. Fischer, "A Hybrid Deep Learning Approach to Keyword Spotting in Vietnamese Stele Images," Informatica, vol. 47, iss. 3, 2023.
    [Bibtex] [Abstract]
    @article{scius2023hybrid,
    doi = {10.31449/inf.v47i3.4785},
    url = {https://doi.org/10.31449/inf.v47i3.4785},
    year = {2023},
    author = {Scius-Bertrand, Anna and Bui, Marc and Fischer, Andreas},
    title = {A Hybrid Deep Learning Approach to Keyword Spotting in Vietnamese Stele Images},
    journal = {Informatica},
    volume = {47},
    number = {3},
    abstract = {In order to access the rich cultural heritage conveyed in Vietnamese steles, automatic reading of stone engravings would be a great support for historians, who are analyzing tens of thousands of stele images. Approaching the challenging problem with deep learning alone is difficult because the data-driven models require large representative datasets with expert human annotations, which are not available for the steles and costly to obtain. In this article, we present a hybrid approach to spot keywords in stele images that combines data-driven deep learning with knowledge-based structural modeling and matching of Chu Nom characters. The main advantage of the proposed method is that it is annotation-free, i.e. no human data annotation is required. In an experimental evaluation, we demonstrate that keywords can be successfully spotted with a mean average precision of more than 70% when a single engraving style is considered.}
    }

    In order to access the rich cultural heritage conveyed in Vietnamese steles, automatic reading of stone engravings would be a great support for historians, who are analyzing tens of thousands of stele images. Approaching the challenging problem with deep learning alone is difficult because the data-driven models require large representative datasets with expert human annotations, which are not available for the steles and costly to obtain. In this article, we present a hybrid approach to spot keywords in stele images that combines data-driven deep learning with knowledge-based structural modeling and matching of Chu Nom characters. The main advantage of the proposed method is that it is annotation-free, i.e. no human data annotation is required. In an experimental evaluation, we demonstrate that keywords can be successfully spotted with a mean average precision of more than 70% when a single engraving style is considered.
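
    To give an idea of the annotation-free matching stage (a simplified sketch; the paper combines a deep detector with structural modeling and matching of Chu Nom characters), detected glyph crops can be scored against synthetic prototypes, here assumed to be rendered from fonts, with no human labels involved:

    import numpy as np

    def match_score(crop, prototype):
        # normalised cross-correlation between a glyph crop and a prototype
        a = (crop - crop.mean()) / (crop.std() + 1e-8)
        b = (prototype - prototype.mean()) / (prototype.std() + 1e-8)
        return float((a * b).mean())

    def spot_keyword(crops, prototypes, threshold=0.4):
        # a crop counts as a hit if it correlates with any query prototype
        return [i for i, c in enumerate(crops)
                if max(match_score(c, p) for p in prototypes) > threshold]

    crops = [np.random.rand(32, 32) for _ in range(10)]   # detector output (toy)
    prototypes = [np.random.rand(32, 32)]                 # rendered query glyphs (toy)
    print(spot_keyword(crops, prototypes))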

  • [PDF] [DOI] P. Ströbel, T. Hodel, A. Fischer, A. Scius-Bertrand, B. Wolf, A. Janka, J. Widmer, P. Scheurer, and M. Volk, "Bullingers Briefwechsel zugänglich machen: Stand der Handschriftenerkennung," 2023.
    [Bibtex] [Abstract]
    @article{strobel2023bullingers,
    doi = {10.5281/zenodo.7715357},
    url = {https://boris.unibe.ch/id/eprint/180287},
    title = {Bullingers Briefwechsel zug{\"a}nglich machen: Stand der Handschriftenerkennung},
    author = {Str{\"o}bel, Phillip and Hodel, Tobias and Fischer, Andreas and Scius-Bertrand, Anna and Wolf, Beat and Janka, Anna and Widmer, Jonas and Scheurer, Patricia and Volk, Martin},
    year = {2023},
    publisher = {University of Zurich},
    abstract = {Anhand des Briefwechsels Heinrich Bullingers (1504-1575), das rund 10'000 Briefe umfasst, demonstrieren wir den Stand der Forschung in automatisierter Handschriftenerkennung. Es finden sich mehr als hundert unterschiedliche Schreiberhände in den Briefen mit sehr unterschiedlicher Verteilung. Das Korpus ist zweisprachig (Latein/Deutsch) und teilweise findet der Sprachwechsel innerhalb von Abschnitten oder gar Sätzen statt. Auf Grund dieser Vielfalt eignet sich der Briefwechsel optimal als Testumgebung für entsprechende Algorithmen und ist aufschlussreiche für Forschungsprojekte und Erinnerungsinstitutionen mit ähnlichen Problemstellungen. Im Paper werden drei Verfahren gegeneinander gestellt und abgewogen. Im folgenden werde drei Ansätze an dem Korpus getestet, die Aufschlüsse zum Stand und möglichen Entwicklungen im Bereich der Handschriftenerkennung versprechen. Erstens wird mit Transkribus eine etablierte Plattform genutzt, die zwei Engines (HTR+ und PyLaia) anbietet. Zweitens wird mit Hilfe von Data Augmentation versucht die Erkennung mit der state-of-the-art Engine HTRFlor zu verbessern und drittens werden neue Transformer-basierte Modelle (TrOCR) eingesetzt.}
    }

    Using the correspondence of Heinrich Bullinger (1504-1575), which comprises around 10,000 letters, we demonstrate the state of research in automated handwriting recognition. The letters contain more than a hundred different writers' hands, very unevenly distributed. The corpus is bilingual (Latin/German), and the language sometimes switches within paragraphs or even within sentences. Owing to this diversity, the correspondence is an ideal test environment for such algorithms and is instructive for research projects and memory institutions facing similar problems. The paper compares and weighs three approaches on this corpus, which promise insights into the state and possible developments of handwriting recognition: first, Transkribus, an established platform offering two engines (HTR+ and PyLaia); second, data augmentation to improve recognition with the state-of-the-art engine HTR-Flor; and third, new Transformer-based models (TrOCR).

  • [PDF] L. Studer, J. Bokhorst, I. Nagtegaal, I. Zlobec, H. Dawson, and A. Fischer, "Tumor Budding T-cell Graphs: Assessing the Need for Resection in pT1 Colorectal Cancer Patients," in Medical Imaging with Deep Learning, 2023.
    [Bibtex] [Abstract]
    @inproceedings{studer2023tumor,
    title = {Tumor Budding T-cell Graphs: Assessing the Need for Resection in pT1 Colorectal Cancer Patients},
    author = {Linda Studer and JM Bokhorst and I Nagtegaal and Inti Zlobec and Heather Dawson and Andreas Fischer},
    booktitle = {Medical Imaging with Deep Learning},
    year = {2023},
    url = {https://openreview.net/forum?id=ruaXPgZCk6i},
    abstract = {Colon resection is often the treatment of choice for colorectal cancer (CRC) patients. However, especially for minimally invasive cancer, such as pT1, simply removing the polyps may be enough to stop cancer progression. Different histopathological risk factors such as tumor grade and invasion depth currently found the basis for the need for colon resection in pT1 CRC patients. Here, we investigate two additional risk factors, tumor budding and lymphocyte infiltration at the invasive front, which are known to be clinically relevant. We capture the spatial layout of tumor buds and T-cells and use graph-based deep learning to investigate them as potential risk predictors. Our pT1 Hotspot Tumor Budding T-cell Graph (pT1-HBTG) dataset consists of 626 tumor budding hotspots from 575 patients. We propose and compare three different graph structures, as well as combinations of the node labels. The best-performing Graph Neural Network architecture is able to increase specificity by 20% compared to the currently recommended risk stratification based on histopathological risk factors, without losing any sensitivity. We believe that using a graph-based analysis can help to assist pathologists in making risk assessments for pT1 CRC patients, and thus decrease the number of patients undergoing potentially unnecessary surgery. Both the code and dataset are made publicly available.}
    }

    Colon resection is often the treatment of choice for colorectal cancer (CRC) patients. However, especially for minimally invasive cancer, such as pT1, simply removing the polyps may be enough to stop cancer progression. Different histopathological risk factors, such as tumor grade and invasion depth, currently form the basis for deciding whether colon resection is needed in pT1 CRC patients. Here, we investigate two additional risk factors, tumor budding and lymphocyte infiltration at the invasive front, which are known to be clinically relevant. We capture the spatial layout of tumor buds and T-cells and use graph-based deep learning to investigate them as potential risk predictors. Our pT1 Hotspot Tumor Budding T-cell Graph (pT1-HBTG) dataset consists of 626 tumor budding hotspots from 575 patients. We propose and compare three different graph structures, as well as combinations of the node labels. The best-performing Graph Neural Network architecture is able to increase specificity by 20% compared to the currently recommended risk stratification based on histopathological risk factors, without losing any sensitivity. We believe that using a graph-based analysis can help to assist pathologists in making risk assessments for pT1 CRC patients, and thus decrease the number of patients undergoing potentially unnecessary surgery. Both the code and dataset are made publicly available.
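
    The graph-classification setup can be sketched in a few lines of PyTorch Geometric (a generic GNN, not the best-performing architecture reported in the paper): nodes are tumour buds and T-cells, edges encode spatial proximity, and a pooled graph embedding predicts the hotspot's risk class.

    import torch
    from torch_geometric.nn import GCNConv, global_mean_pool

    class HotspotGNN(torch.nn.Module):
        def __init__(self, in_dim=2, hidden=32, classes=2):
            super().__init__()
            self.c1 = GCNConv(in_dim, hidden)
            self.c2 = GCNConv(hidden, hidden)
            self.out = torch.nn.Linear(hidden, classes)

        def forward(self, x, edge_index, batch):
            h = torch.relu(self.c1(x, edge_index))
            h = torch.relu(self.c2(h, edge_index))
            return self.out(global_mean_pool(h, batch))  # one logit pair per graph

    # toy hotspot: 4 cells, one-hot type (tumour bud vs. T-cell), chain of edges
    x = torch.eye(2)[torch.tensor([0, 1, 1, 0])]
    edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
    batch = torch.zeros(4, dtype=torch.long)             # all nodes in graph 0
    logits = HotspotGNN()(x, edge_index, batch)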

  • [PDF] [DOI] L. Vögtlin, A. Scius-Bertrand, P. Maergner, A. Fischer, and R. Ingold, "DIVA-DAF: A Deep Learning Framework for Historical Document Image Analysis," in Proceedings of the 7th International Workshop on Historical Document Imaging and Processing, New York, NY, USA, 2023, p. 61–66.
    [Bibtex] [Abstract]
    @inproceedings{vogtlin23diva,
    author = {V\"{o}gtlin, Lars and Scius-Bertrand, Anna and Maergner, Paul and Fischer, Andreas and Ingold, Rolf},
    title = {DIVA-DAF: A Deep Learning Framework for Historical Document Image Analysis},
    year = {2023},
    isbn = {9798400708411},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3604951.3605511},
    doi = {10.1145/3604951.3605511},
    abstract = {Deep learning methods have shown strong performance in solving tasks for historical document image analysis. However, despite current libraries and frameworks, programming an experiment or a set of experiments and executing them can be time-consuming. This is why we propose an open-source deep learning framework, DIVA-DAF, which is based on PyTorch Lightning and specifically designed for historical document analysis. Pre-implemented tasks such as segmentation and classification can be easily used or customized. It is also easy to create one’s own tasks with the benefit of powerful modules for loading data, even large data sets, and different forms of ground truth. The applications conducted have demonstrated time savings for the programming of a document analysis task, as well as for different scenarios such as pre-training or changing the architecture. Thanks to its data module, the framework also allows to reduce the time of model training significantly.},
    booktitle = {Proceedings of the 7th International Workshop on Historical Document Imaging and Processing},
    pages = {61–66},
    numpages = {6},
    keywords = {historical documents, document image analysis, deep neural networks, deep learning framework},
    location = {San Jose, CA, USA},
    series = {HIP '23}
    }

    Deep learning methods have shown strong performance in solving tasks for historical document image analysis. However, despite current libraries and frameworks, programming an experiment or a set of experiments and executing them can be time-consuming. This is why we propose an open-source deep learning framework, DIVA-DAF, which is based on PyTorch Lightning and specifically designed for historical document analysis. Pre-implemented tasks such as segmentation and classification can be easily used or customized. It is also easy to create one’s own tasks with the benefit of powerful modules for loading data, even large data sets, and different forms of ground truth. The applications conducted have demonstrated time savings for the programming of a document analysis task, as well as for different scenarios such as pre-training or changing the architecture. Thanks to its data module, the framework also allows to reduce the time of model training significantly.
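
    DIVA-DAF itself is open source; purely as an illustration of the PyTorch Lightning pattern it builds on (generic Lightning code, not DIVA-DAF's actual API), a task is a LightningModule and the trainer runs the experiment loop:

    import torch
    import pytorch_lightning as pl
    from torch.utils.data import DataLoader, TensorDataset

    class ClassificationTask(pl.LightningModule):
        """A minimal classification task; data loading and ground-truth
        handling would come from dedicated, reusable modules."""
        def __init__(self, in_dim=16, classes=4):
            super().__init__()
            self.net = torch.nn.Linear(in_dim, classes)

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = torch.nn.functional.cross_entropy(self.net(x), y)
            self.log("train_loss", loss)
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters())

    data = DataLoader(TensorDataset(torch.randn(64, 16),
                                    torch.randint(0, 4, (64,))), batch_size=8)
    pl.Trainer(max_epochs=1, logger=False).fit(ClassificationTask(), data)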

  • [PDF] C. Abbet, L. Studer, I. Zlobec, and J.-P. Thiran, "Toward Automatic Tumor-Stroma Ratio Assessment for Survival Analysis in Colorectal Cancer," in Proc. 5th Int. Conf. on Medical Imaging with Deep Learning (MIDL), 2022, p. 1–3.
    [Bibtex]
    @inproceedings{abbet22toward,
    Author = {C. Abbet and L. Studer and I. Zlobec and J.-P. Thiran},
    Booktitle = {Proc. 5th Int. Conf. on Medical Imaging with Deep Learning (MIDL)},
    Date-Added = {2022-09-27 14:04:23 +0200},
    Date-Modified = {2022-09-27 14:06:19 +0200},
    Pages = {1--3},
    Title = {Toward Automatic Tumor-Stroma Ratio Assessment for Survival Analysis in Colorectal Cancer},
    Year = {2022}}
  • [PDF] C. Abbet, L. Studer, J.-P. Thiran, and I. Zlobec, "Self-Rule to Multi Adapt automates the tumor-stroma assessment in colorectal cancer," in Proc. 18th European Congress on Digital Pathology (ECDP), 2022.
    [Bibtex]
    @inproceedings{abbet22selfrule,
    Author = {Christian Abbet and Linda Studer and Jean-Philippe Thiran and Inti Zlobec},
    Booktitle = {Proc. 18th European Congress on Digital Pathology (ECDP)},
    Date-Added = {2022-09-27 14:08:32 +0200},
    Date-Modified = {2022-09-27 14:09:13 +0200},
    Title = {Self-Rule to Multi Adapt automates the tumor-stroma assessment in colorectal cancer},
    Year = {2022}}
  • [PDF] C. Abbet, L. Studer, A. Fischer, H. Dawson, I. Zlobec, B. Bozorgtabar, and J.-P. Thiran, "Self-rule to multi-adapt: Generalized multi-source feature learning using unsupervised domain adaptation for colorectal cancer tissue detection," Medical Image Analysis, vol. 79, p. 1–20, 2022.
    [Bibtex]
    @article{abbet22selfruletomultiadapt,
    Author = {C. Abbet and L. Studer and A. Fischer and H. Dawson and I. Zlobec and B. Bozorgtabar and J.-P. Thiran},
    Date-Added = {2022-09-27 14:00:13 +0200},
    Date-Modified = {2022-09-27 14:02:26 +0200},
    Journal = {Medical Image Analysis},
    Pages = {1--20},
    Title = {Self-rule to multi-adapt: Generalized multi-source feature learning using unsupervised domain adaptation for colorectal cancer tissue detection},
    Volume = {79},
    Year = {2022}}
  • [PDF] T. Briglevic, J. Hennebert, and J. Bacher, "Flexibility shares in a low-voltage distribution grid: Identification of dimensioning load peaks and characterization of impacted end-customers for flexibility activation as a solution for peak mitigation," in 6th European GRID SERVICE MARKET Symposium (GSM 2022), 2022, p. 1–10.
    [Bibtex]
    @inproceedings{brigljevic22flexibility,
    Author = {Teo Briglevic and Jean Hennebert and Jean-Philippe Bacher},
    Booktitle = {6th European GRID SERVICE MARKET Symposium (GSM 2022)},
    Date-Added = {2022-09-27 14:04:23 +0200},
    Date-Modified = {2022-09-27 14:06:19 +0200},
    Pages = {1--10},
    Title = {Flexibility shares in a low-voltage distribution grid: Identification of dimensioning load peaks and characterization of impacted end-customers for flexibility activation as a solution for peak mitigation},
    Year = {2022}}
  • [PDF] J. Diesbach, A. Fischer, M. Bui, and A. Scius-Bertrand, "Generating synthetic styled Chu Nom characters," in Proc. 18th Int. Conf on Frontiers in Handwriting Recognition (ICFHR), 2022.
    [Bibtex]
    @inproceedings{diesbach22generating,
    Author = {Jonas Diesbach and Andreas Fischer and Marc Bui and Anna Scius-Bertrand},
    Booktitle = {Proc. 18th Int. Conf on Frontiers in Handwriting Recognition (ICFHR)},
    Date-Added = {2022-09-27 14:14:18 +0200},
    Date-Modified = {2022-09-27 14:16:12 +0200},
    Title = {Generating synthetic styled Chu Nom characters},
    Year = {2022}}
  • [PDF] [DOI] A. Fornés, A. Bensalah, C. Carmona-Duarte, J. Chen, M. A. Ferrer, A. Fischer, J. Lladós, C. Martín, E. Opisso, R. Plamondon, A. Scius-Bertrand, and J. M. Tormos, "The RPM3D Project: 3D Kinematics for Remote Patient Monitoring," in Intertwining Graphonomics with Human Movements, Cham, 2022, p. 217–226.
    [Bibtex] [Abstract]
    @inproceedings{fornes22rpm3d,
    url = {https://doi.org/10.1007/978-3-031-19745-1_16},
    doi = {10.1007/978-3-031-19745-1_16},
    author = {Forn{\'e}s, Alicia and Bensalah, Asma and Carmona-Duarte, Cristina and Chen, Jialuo and Ferrer, Miguel A. and Fischer, Andreas and Llad{\'o}s, Josep and Mart{\'i}n, Cristina and Opisso, Eloy and Plamondon, R{\'e}jean and Scius-Bertrand, Anna and Tormos, Josep Maria},
    editor = {Carmona-Duarte, Cristina and Diaz, Moises and Ferrer, Miguel A. and Morales, Aythami},
    title = {The RPM3D Project: 3D Kinematics for Remote Patient Monitoring},
    booktitle = {Intertwining Graphonomics with Human Movements},
    year = {2022},
    publisher = {Springer International Publishing},
    address = {Cham},
    pages = {217--226},
    abstract = {This project explores the feasibility of remote patient monitoring based on the analysis of 3D movements captured with smartwatches. We base our analysis on the Kinematic Theory of Rapid Human Movement. We have validated our research in a real case scenario for stroke rehabilitation at the Guttmann Institute (https://www.guttmann.com/en/) (neurorehabilitation hospital), showing promising results. Our work could have a great impact in remote healthcare applications, improving the medical efficiency and reducing the healthcare costs. Future steps include more clinical validation, developing multi-modal analysis architectures (analysing data from sensors, images, audio, etc.), and exploring the application of our technology to monitor other neurodegenerative diseases.},
    isbn = {978-3-031-19745-1}
    }

    This project explores the feasibility of remote patient monitoring based on the analysis of 3D movements captured with smartwatches. We base our analysis on the Kinematic Theory of Rapid Human Movement. We have validated our research in a real case scenario for stroke rehabilitation at the Guttmann Institute (https://www.guttmann.com/en/) (neurorehabilitation hospital), showing promising results. Our work could have a great impact on remote healthcare applications, improving medical efficiency and reducing healthcare costs. Future steps include more clinical validation, developing multi-modal analysis architectures (analysing data from sensors, images, audio, etc.), and exploring the application of our technology to monitor other neurodegenerative diseases.
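
    The Kinematic Theory referenced here models the speed of a rapid stroke as a lognormal function of time; below is a minimal numpy sketch of a single-stroke speed profile (parameter values are illustrative, not fitted to any data):

    import numpy as np

    def lognormal_speed(t, D=1.0, t0=0.0, mu=-1.5, sigma=0.4):
        """Single-stroke speed profile under the Kinematic Theory:
        |v(t)| = D / (sigma*sqrt(2*pi)*(t-t0)) * exp(-(ln(t-t0)-mu)^2 / (2*sigma^2))
        D: stroke amplitude, t0: onset time, (mu, sigma): log time delay/response.
        """
        dt = np.where(t > t0, t - t0, np.nan)   # profile is defined for t > t0
        return D / (sigma * np.sqrt(2 * np.pi) * dt) * np.exp(
            -(np.log(dt) - mu) ** 2 / (2 * sigma ** 2))

    t = np.linspace(0.01, 1.0, 100)
    v = lognormal_speed(t)      # bell-shaped, positively skewed speed curve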

  • [PDF] A. Khan, A. Janowczyk, F. Mueller, A. Blank, H. G. Nguyen, C. Abbet, L. Studer, A. Lugli, H. Dawson, J.-P. Thiran, and I. Zlobec, "Impact of scanner variability on lymph node segmentation in computational pathology," Journal of Pathology Informatics, vol. 13, p. 1–16, 2022.
    [Bibtex]
    @article{khan22impact,
    Author = {A. Khan and A. Janowczyk and F. Mueller and A. Blank and H.G. Nguyen and C. Abbet and L. Studer and A. Lugli and H. Dawson and J.-P. Thiran and I. Zlobec},
    Date-Added = {2022-09-27 14:09:28 +0200},
    Date-Modified = {2022-09-27 14:11:37 +0200},
    Journal = {Journal of Pathology Informatics},
    Pages = {1--16},
    Title = {Impact of scanner variability on lymph node segmentation in computational pathology},
    Volume = {13},
    Year = {2022}}
  • [PDF] A. Scius-Bertrand, A. Fischer, and M. Bui, "Retrieving Keywords in Historical Vietnamese Stele Images Without Human Annotations," in Proc. 11th Int. Symposium on Information and Communication Technology (SoICT), 2022.
    [Bibtex]
    @inproceedings{scius22retrieving,
    Author = {A. Scius-Bertrand and A. Fischer and M. Bui},
    Booktitle = {Proc. 11th Int. Symposium on Information and Communication Technology (SoICT)},
    Date-Added = {2022-11-21 11:33:30 +0100},
    Date-Modified = {2022-11-21 11:34:59 +0100},
    Title = {Retrieving Keywords in Historical Vietnamese Stele Images Without Human Annotations},
    Year = {2022}}
  • [PDF] A. Scius-Bertrand, L. Studer, A. Fischer, and M. Bui, "Annotation-free keyword spotting in historical Vietnamese manuscripts using graph matching," in Proc. Int. Workshop on Structural and Syntactic Pattern Recognition (SSPR), 2022.
    [Bibtex]
    @inproceedings{scius22annotationfree,
    Author = {Anna Scius-Bertrand and Linda Studer and Andreas Fischer and Marc Bui},
    Booktitle = {Proc. Int. Workshop on Structural and Syntactic Pattern Recognition (SSPR)},
    Date-Added = {2022-09-27 14:11:54 +0200},
    Date-Modified = {2022-09-27 14:13:40 +0200},
    Title = {Annotation-free keyword spotting in historical Vietnamese manuscripts using graph matching},
    Year = {2022}}
  • [PDF] [DOI] M. Spoto, B. Wolf, A. Fischer, and A. Scius-Bertrand, "Improving Handwriting Recognition for Historical Documents Using Synthetic Text Lines," in Intertwining Graphonomics with Human Movements, Cham, 2022, p. 61–75.
    [Bibtex] [Abstract]
    @InProceedings{spoto22improving,
    url = {https://doi.org/10.1007/978-3-031-19745-1_5},
    doi = {10.1007/978-3-031-19745-1_5},
    author = {Spoto, Martin and Wolf, Beat and Fischer, Andreas and Scius-Bertrand, Anna},
    editor = {Carmona-Duarte, Cristina and Diaz, Moises and Ferrer, Miguel A. and Morales, Aythami},
    title = {Improving Handwriting Recognition for Historical Documents Using Synthetic Text Lines},
    booktitle = {Intertwining Graphonomics with Human Movements},
    year = {2022},
    publisher = {Springer International Publishing},
    address = {Cham},
    pages = {61--75},
    abstract = {Automatic handwriting recognition for historical documents is a key element for making our cultural heritage available to researchers and the general public. However, current approaches based on machine learning require a considerable amount of annotated learning samples to read ancient scripts and languages. Producing such ground truth is a laborious and time-consuming task that often requires human experts. In this paper, to cope with a limited amount of learning samples, we explore the impact of using synthetic text line images to support the training of handwriting recognition systems. For generating text lines, we consider lineGen, a recent GAN-based approach, and for handwriting recognition, we consider HTR-Flor, a state-of-the-art recognition system. Different meta-learning strategies are explored that schedule the addition of synthetic text line images to the existing real samples. In an experimental evaluation on the well-known Bentham dataset as well as the newly introduced Bullinger dataset, we demonstrate a significant improvement of the recognition performance when combining real and synthetic samples.},
    isbn = {978-3-031-19745-1}
    }

    Automatic handwriting recognition for historical documents is a key element for making our cultural heritage available to researchers and the general public. However, current approaches based on machine learning require a considerable amount of annotated learning samples to read ancient scripts and languages. Producing such ground truth is a laborious and time-consuming task that often requires human experts. In this paper, to cope with a limited amount of learning samples, we explore the impact of using synthetic text line images to support the training of handwriting recognition systems. For generating text lines, we consider lineGen, a recent GAN-based approach, and for handwriting recognition, we consider HTR-Flor, a state-of-the-art recognition system. Different meta-learning strategies are explored that schedule the addition of synthetic text line images to the existing real samples. In an experimental evaluation on the well-known Bentham dataset as well as the newly introduced Bullinger dataset, we demonstrate a significant improvement of the recognition performance when combining real and synthetic samples.
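
    One way to picture such a schedule (our illustrative strategy, not necessarily one of those compared in the paper): start training on real lines only, then let the share of GAN-generated lines grow per epoch up to a cap.

    import random

    def mixed_batch(real, synthetic, epoch, max_ratio=0.5, warmup=10, batch=32):
        # synthetic share grows linearly over the warmup epochs, then stays capped
        ratio = min(max_ratio, max_ratio * epoch / warmup)
        n_syn = int(batch * ratio)
        return random.sample(synthetic, n_syn) + random.sample(real, batch - n_syn)

    real_lines = [f"real_{i}" for i in range(1000)]
    syn_lines = [f"syn_{i}" for i in range(1000)]
    for epoch in range(3):
        b = mixed_batch(real_lines, syn_lines, epoch)   # feed to the recognizer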

  • [PDF] [DOI] C. Stammet, P. Dotti, U. Ultes-Nitsche, and A. Fischer, Analyzing Büchi Automata with Graph Neural Networks, 2022.
    [Bibtex] [Abstract]
    @misc{stammet2022analyzing,
    url = {https://doi.org/10.48550/arXiv.2206.09619},
    doi = {10.48550/arXiv.2206.09619},
    title={Analyzing B\"uchi Automata with Graph Neural Networks},
    author={Christophe Stammet and Prisca Dotti and Ulrich Ultes-Nitsche and Andreas Fischer},
    year={2022},
    eprint={2206.09619},
    archivePrefix={arXiv},
    primaryClass={cs.FL},
    abstract = {Büchi Automata on infinite words present many interesting problems and are used frequently in program verification and model checking. A lot of these problems on Büchi automata are computationally hard, raising the question if a learning-based data-driven analysis might be more efficient than using traditional algorithms. Since Büchi automata can be represented by graphs, graph neural networks are a natural choice for such a learning-based analysis. In this paper, we demonstrate how graph neural networks can be used to reliably predict basic properties of Büchi automata when trained on automatically generated random automata datasets.}
    }

    Büchi Automata on infinite words present many interesting problems and are used frequently in program verification and model checking. A lot of these problems on Büchi automata are computationally hard, raising the question if a learning-based data-driven analysis might be more efficient than using traditional algorithms. Since Büchi automata can be represented by graphs, graph neural networks are a natural choice for such a learning-based analysis. In this paper, we demonstrate how graph neural networks can be used to reliably predict basic properties of Büchi automata when trained on automatically generated random automata datasets.
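
    The encoding step is straightforward to sketch (plain numpy standing in for a learned GNN): states become nodes carrying an accepting-state flag, transitions become edges, and a few rounds of message passing yield a graph-level embedding that a property classifier could consume.

    import numpy as np

    def encode(n_states, transitions, accepting):
        # adjacency matrix over states plus simple node features
        A = np.zeros((n_states, n_states))
        for src, dst in transitions:
            A[src, dst] = 1.0
        X = np.zeros((n_states, 2))
        X[:, 0] = 1.0                      # bias feature
        X[list(accepting), 1] = 1.0        # accepting-state flag
        return A, X

    def message_pass(A, X, rounds=3):
        deg = np.maximum(A.sum(1, keepdims=True), 1)
        for _ in range(rounds):
            X = np.tanh(X + (A @ X) / deg)  # aggregate successor states
        return X.mean(0)                    # graph-level embedding

    A, X = encode(3, [(0, 1), (1, 2), (2, 0)], accepting={1})
    print(message_pass(A, X))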

  • [PDF] L. Studer, J. Bokhorst, F. Ciompi, A. Fischer, and H. Dawson, "Budding-T-cell score is a potential predictor for more aggressive treatment in pT1 colorectal cancers," in Proc. 18th European Congress on Digital Pathology (ECDP), 2022.
    [Bibtex]
    @inproceedings{studer22budding,
    Author = {Linda Studer and John-Melle Bokhorst and Francesco Ciompi and Andreas Fischer and Heather Dawson},
    Booktitle = {Proc. 18th European Congress on Digital Pathology (ECDP)},
    Date-Added = {2022-09-27 14:06:46 +0200},
    Date-Modified = {2022-09-27 14:08:07 +0200},
    Title = {Budding-T-cell score is a potential predictor for more aggressive treatment in pT1 colorectal cancers},
    Year = {2022}}
  • [PDF] C. Abbet, L. Studer, A. Fischer, B. Bozorgtabar, J.-P. Thiran, F. Müller, H. Dawson, and I. Zlobec, "Reducing the annotation workload: using self-supervised methods to learn from publicly available colorectal cancer datasets," in Proc. 87th Annual Congress of the Swiss Society of Pathology, 2021, p. 634–635.
    [Bibtex]
    @inproceedings{abbet21reducing,
    Author = {C. Abbet and L. Studer and A. Fischer and B. Bozorgtabar and J.-P. Thiran and F. M{\"u}ller and H. Dawson and I. Zlobec},
    Booktitle = {Proc. 87th Annual Congress of the Swiss Society of Pathology},
    Date-Added = {2022-09-27 13:57:32 +0200},
    Date-Modified = {2022-09-27 13:59:09 +0200},
    Pages = {634--635},
    Title = {Reducing the annotation workload: using self-supervised methods to learn from publicly available colorectal cancer datasets},
    Year = {2021}}
  • [PDF] C. Abbet, L. Studer, A. Fischer, H. Dawson, I. Zlobec, B. Bozorgtabar, and J.-P. Thiran, "Self-Rule to Adapt: Learning Generalized Features from Sparsely-Labeled Data Using Unsupervised Domain Adaptation for Colorectal Cancer Tissue Phenotyping," in Proc. 4th Int. Conf. on Medical Imaging with Deep Learning (MIDL), 2021, p. 1–16.
    [Bibtex]
    @inproceedings{abbet21selfrule,
    Author = {C. Abbet and L. Studer and A. Fischer and H. Dawson and I. Zlobec and B. Bozorgtabar and J.-P. Thiran},
    Booktitle = {Proc. 4th Int. Conf. on Medical Imaging with Deep Learning (MIDL)},
    Date-Added = {2022-09-27 13:55:40 +0200},
    Date-Modified = {2022-09-27 13:56:42 +0200},
    Pages = {1--16},
    Title = {Self-Rule to Adapt: Learning Generalized Features from Sparsely-Labeled Data Using Unsupervised Domain Adaptation for Colorectal Cancer Tissue Phenotyping},
    Year = {2021}}
  • [PDF] L. Linder, F. Montet, J. Hennebert, and J. Bacher, "Big Building Data 2.0 – a Big Data Platform for Smart Buildings," in Journal of Physics: Conference Series, 2021, p. 012016.
    [Bibtex]
    @inproceedings{linder2021big,
    title={Big Building Data 2.0-a Big Data Platform for Smart Buildings},
    author={Linder, Lucy and Montet, Fr{\'e}d{\'e}ric and Hennebert, Jean and Bacher, Jean-Philippe},
    booktitle={Journal of Physics: Conference Series},
    volume={2042},
    number={1},
    pages={012016},
    year={2021},
    organization={IOP Publishing}
    }
  • [PDF] P. Riba, A. Fischer, J. Lladós, and A. Fornés, "Learning Graph Edit Distance by Graph Neural Networks," Pattern Recognition, vol. 120, p. 1–11, 2021.
    [Bibtex]
    @article{riba21learning,
    Author = {P. Riba and A. Fischer and J. Llados and A. Fornes},
    Date-Added = {2022-09-27 13:38:19 +0200},
    Date-Modified = {2022-09-27 13:40:11 +0200},
    Journal = {Pattern Recognition},
    Pages = {1--11},
    Title = {Learning Graph Edit Distance by Graph Neural Networks},
    Volume = {120},
    Year = {2021}}
  • [PDF] A. Scius-Bertrand, M. Jungo, B. Wolf, A. Fischer, and M. Bui, "Annotation-Free Character Detection in Historical Vietnamese Stele Images," in Proc. 16th Int. Conf. on Document Analysis and Recognition (ICDAR), 2021, p. 432–447.
    [Bibtex]
    @inproceedings{scius21annotationfree,
    Author = {A. Scius-Bertrand and M. Jungo and B. Wolf and A. Fischer and M. Bui},
    Booktitle = {Proc. 16th Int. Conf. on Document Analysis and Recognition (ICDAR)},
    Date-Added = {2022-09-27 13:51:05 +0200},
    Date-Modified = {2022-09-27 13:52:23 +0200},
    Pages = {432--447},
    Title = {Annotation-Free Character Detection in Historical Vietnamese Stele Images},
    Year = {2021}}
  • [PDF] A. Scius-Bertrand, M. Jungo, B. Wolf, A. Fischer, and M. Bui, "Transcription Alignment of Historical Vietnamese Manuscripts without Human-Annotated Learning Samples," Applied Sciences, vol. 11, p. 1–18, 2021.
    [Bibtex]
    @article{scius21transcription,
    Author = {A. Scius-Bertrand and M. Jungo and B. Wolf and A. Fischer and M. Bui},
    Date-Added = {2022-09-27 13:42:23 +0200},
    Date-Modified = {2022-09-27 13:43:25 +0200},
    Journal = {Applied Sciences},
    Pages = {1--18},
    Title = {Transcription Alignment of Historical Vietnamese Manuscripts without Human-Annotated Learning Samples},
    Volume = {11},
    Year = {2021}}
  • [PDF] L. Studer, J. Wallau, H. Dawson, I. Zlobec, and A. Fischer, "Classification of Intestinal Gland Cell-Graphs Using Graph Neural Networks," in Proc. 25th Int. Conf. on Pattern Recognition (ICPR), 2021, p. 3636–3643.
    [Bibtex]
    @inproceedings{studer21classification,
    Author = {L. Studer and J. Wallau and H. Dawson and I. Zlobec and A. Fischer},
    Booktitle = {Proc. 25th Int. Conf. on Pattern Recognition (ICPR)},
    Date-Added = {2022-09-27 13:54:23 +0200},
    Date-Modified = {2022-09-27 13:55:20 +0200},
    Pages = {3636--3643},
    Title = {Classification of Intestinal Gland Cell-Graphs Using Graph Neural Networks},
    Year = {2021}}
  • [PDF] L. Studer, A. Blank, J.-M. Bokhorst, I. Nagtegaal, I. Zlobec, A. Lugli, A. Fischer, and H. Dawson, "Taking tumour budding to the next frontier—a post International Tumour Budding Consensus Conference (ITBCC) 2016 review," Histopathology, vol. 78, iss. 4, p. 476–484, 2021.
    [Bibtex]
    @article{studer21taking,
    Author = {L. Studer and A. Blank and J.-M. Bokhorst and I. Nagtegaal and I. Zlobec and A. Lugli and A. Fischer and H. Dawson},
    Date-Added = {2022-09-27 13:43:50 +0200},
    Date-Modified = {2022-09-27 13:50:51 +0200},
    Journal = {Histopathology},
    Number = {4},
    Pages = {476--484},
    Title = {Taking tumour budding to the next frontier---a post International Tumour Budding Consensus Conference (ITBCC) 2016 review},
    Volume = {78},
    Year = {2021}}
  • [PDF] F. Wolf, A. Fischer, and G. A. Fink, "Graph Convolutional Neural Networks for Learning Attribute Representations for Word Spotting," in Proc. 16th Int. Conf. on Document Analysis and Recognition (ICDAR), 2021, p. 50–64.
    [Bibtex]
    @inproceedings{wolf21graph,
    Author = {F. Wolf and A. Fischer and G.A. Fink},
    Booktitle = {Proc. 16th Int. Conf. on Document Analysis and Recognition (ICDAR)},
    Date-Added = {2022-09-27 13:53:12 +0200},
    Date-Modified = {2022-09-27 13:54:08 +0200},
    Pages = {50--64},
    Title = {Graph Convolutional Neural Networks for Learning Attribute Representations for Word Spotting},
    Year = {2021}}
  • [PDF] O. Zayene, R. Ingold, N. E. BenAmara, and J. Hennebert, "ICPR2020 Competition on Text Detection and Recognition in Arabic News Video Frames," in Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10-15, 2021, Proceedings, Part VIII, 2021, p. 343–356.
    [Bibtex]
    @inproceedings{zayene2021icpr2020,
    title={ICPR2020 Competition on Text Detection and Recognition in Arabic News Video Frames},
    author={Zayene, Oussama and Ingold, Rolf and BenAmara, Najoua Essoukri and Hennebert, Jean},
    booktitle={Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10-15, 2021, Proceedings, Part VIII},
    pages={343--356},
    year={2021},
    organization={Springer International Publishing}
    }
  • [PDF] [DOI] N. Zurbuchen, A. Wilde, and P. Bruegger, "A Machine learning multi-class approach for fall detection systems based on wearable sensors with a study on sampling rates selection," Sensors, vol. 21, iss. 3, p. 938, 2021.
    [Bibtex] [Abstract]
    @article{zurbuchen2021,
    author = {Zurbuchen, Nicolas and Wilde, Adriana and Bruegger, Pascal},
    url = {http://hesso.tind.io/record/7094},
    journal = {Sensors},
    title = {A Machine learning multi-class approach for fall detection systems based on wearable sensors with a study on sampling rates selection},
    abstract = {Falls are dangerous for the elderly, often causing serious injuries especially when the fallen person stays on the ground for a long time without assistance. This paper extends our previous work on the development of a Fall Detection System (FDS) using an inertial measurement unit worn at the waist. Data come from SisFall, a publicly available dataset containing records of Activities of Daily Living and falls. We first applied a preprocessing and a feature extraction stage before using five Machine Learning algorithms, allowing us to compare them. Ensemble learning algorithms such as Random Forest and Gradient Boosting have the best performance, with a Sensitivity and Specificity both close to 99%. Our contribution is: a multi-class classification approach for fall detection combined with a study of the effect of the sensors’ sampling rate on the performance of the FDS. Our multi-class classification approach splits the fall into three phases: pre-fall, impact, post-fall. The extension to a multi-class problem is not trivial and we present a well-performing solution. We experimented sampling rates between 1 and 200 Hz. The results show that, while high sampling rates tend to improve performance, a sampling rate of 50 Hz is generally sufficient for an accurate detection.},
    volume = {21},
    number = {3},
    doi = {10.3390/s21030938},
    year = {2021},
    }

    Falls are dangerous for the elderly, often causing serious injuries especially when the fallen person stays on the ground for a long time without assistance. This paper extends our previous work on the development of a Fall Detection System (FDS) using an inertial measurement unit worn at the waist. Data come from SisFall, a publicly available dataset containing records of Activities of Daily Living and falls. We first applied a preprocessing and a feature extraction stage before using five Machine Learning algorithms, allowing us to compare them. Ensemble learning algorithms such as Random Forest and Gradient Boosting have the best performance, with a Sensitivity and Specificity both close to 99%. Our contribution is: a multi-class classification approach for fall detection combined with a study of the effect of the sensors’ sampling rate on the performance of the FDS. Our multi-class classification approach splits the fall into three phases: pre-fall, impact, post-fall. The extension to a multi-class problem is not trivial and we present a well-performing solution. We experimented sampling rates between 1 and 200 Hz. The results show that, while high sampling rates tend to improve performance, a sampling rate of 50 Hz is generally sufficient for an accurate detection.
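
    As a minimal illustration of the pipeline this abstract describes (not the authors' code), the sketch below windows accelerometer data, extracts simple statistical features, downsamples to emulate lower sensor sampling rates, and scores a Random Forest classifier; the data layout and feature set are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def extract_features(window):
        # Simple per-window statistics over the 3 accelerometer axes.
        mag = np.linalg.norm(window, axis=1)  # acceleration magnitude
        return np.hstack([window.mean(axis=0), window.std(axis=0),
                          [mag.max(), mag.min(), mag.mean()]])

    def downsample(signal, orig_hz, target_hz):
        # Emulate a lower sampling rate by keeping every n-th sample.
        step = max(1, orig_hz // target_hz)
        return signal[::step]

    def evaluate(windows, labels, target_hz):
        # windows: list of (n_samples, 3) arrays recorded at 200 Hz;
        # labels: one class per window (e.g. pre-fall, impact, post-fall, ADL).
        X = np.array([extract_features(downsample(w, 200, target_hz))
                      for w in windows])
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        return cross_val_score(clf, X, labels, cv=5, scoring="f1_macro").mean()

    Comparing evaluate(windows, labels, 200) against evaluate(windows, labels, 50) mirrors the paper's sampling-rate study on a toy scale.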

  • [PDF] A. Cholleton, A. Fischer, J. Hennebert, V. Raemy, and B. Wicht, Deep neural network generation of domain names, 2020.
    [Bibtex]
    @misc{cholleton2020deep,
    title={Deep neural network generation of domain names},
    author={Cholleton, Aubry and Fischer, Andreas and Hennebert, Jean and Raemy, Vincent and Wicht, Baptiste},
    year={2020},
    month=sep # "~15",
    note={US Patent 10,778,640}
    }
  • [PDF] [DOI] A. Fischer, R. Schindler, M. Bouillon, and R. Plamondon, "Modeling 3D Movements with the Kinematic Theory of Rapid Human Movements," in The Lognormality Principle and its Applications in e-Security, e-Learning and e-Health, , 2020, p. 327–342.
    [Bibtex] [Abstract]
    @inbook{fischer20modeling,
    author = {Fischer, Andreas and Schindler, Roman and Bouillon, Manuel and Plamondon, Réjean},
    title = {Modeling 3D Movements with the Kinematic Theory of Rapid Human Movements},
    booktitle = {The Lognormality Principle and its Applications in e-Security, e-Learning and e-Health},
    chapter = {Chapter 15},
    pages = {327--342},
    year = {2020},
    doi = {10.1142/9789811226830_0015},
    URL = {https://www.worldscientific.com/doi/abs/10.1142/9789811226830_0015},
    eprint = {https://www.worldscientific.com/doi/pdf/10.1142/9789811226830_0015},
    abstract = { The Kinematic Theory of rapid human movements analytically describes pen tip movements as a sequence of elementary strokes with lognormal speed. The theory has been confirmed in a large number of experimental evaluations, achieving a high reconstruction quality when compared with observed trajectories and providing pertinent features for biomedical applications as well as biometric identification. So far, the Kinematic Theory has focused on one-dimensional movements with the Delta-Lognormal model and on two-dimensional movements with the Sigma-Lognormal model. In this chapter, we present a model for movements in three dimensions, which naturally extends the Sigma-Lognormal approach. We evaluate our method on two action recognition datasets and an air-writing dataset, demonstrating a high reconstruction quality for modelling rapid 3D movements in all cases. }
    }

    The Kinematic Theory of rapid human movements analytically describes pen tip movements as a sequence of elementary strokes with lognormal speed. The theory has been confirmed in a large number of experimental evaluations, achieving a high reconstruction quality when compared with observed trajectories and providing pertinent features for biomedical applications as well as biometric identification. So far, the Kinematic Theory has focused on one-dimensional movements with the Delta-Lognormal model and on two-dimensional movements with the Sigma-Lognormal model. In this chapter, we present a model for movements in three dimensions, which naturally extends the Sigma-Lognormal approach. We evaluate our method on two action recognition datasets and an air-writing dataset, demonstrating a high reconstruction quality for modelling rapid 3D movements in all cases.
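
    The elementary building block referred to here is a stroke with lognormal speed; below is a small sketch of that profile in the usual Sigma-Lognormal notation (onset t0, log-time mean mu, log-time spread sigma, amplitude D), with illustrative parameter values.

    import numpy as np

    def lognormal_speed(t, t0, mu, sigma, D):
        # Speed profile of one elementary stroke; a full (2D or 3D) movement
        # is modelled as a time-shifted vector sum of such strokes.
        dt = np.maximum(t - t0, 1e-9)  # the stroke starts at t0
        return (D / (sigma * dt * np.sqrt(2.0 * np.pi))
                * np.exp(-(np.log(dt) - mu) ** 2 / (2.0 * sigma ** 2)))

    t = np.linspace(0.0, 1.0, 500)
    v = lognormal_speed(t, t0=0.05, mu=-1.5, sigma=0.3, D=10.0)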

  • [DOI] A. Fischer, M. Liwicki, and R. Ingold, Handwritten Historical Document Analysis, Recognition, and Retrieval — State of the Art and Future Trends, World Scientific, 2020.
    [Bibtex]
    @book{fischer20handwritten,
    author = {Fischer, Andreas and Liwicki, Marcus and Ingold, Rolf},
    title = {Handwritten Historical Document Analysis, Recognition, and Retrieval — State of the Art and Future Trends},
    Publisher = {World Scientific},
    year = {2020},
    doi = {10.1142/11353},
    URL = {https://www.worldscientific.com/doi/abs/10.1142/11353},
    eprint = {https://www.worldscientific.com/doi/pdf/10.1142/11353}
    }
  • [DOI] M. Galimberti, C. Leuenberger, B. Wolf, S. M. Szilágyi, M. Foll, and D. Wegmann, "Detecting Selection from Linked Sites Using an F-Model," Genetics, 2020.
    [Bibtex] [Abstract]
    @article {Galimbertigenetics.303780.2020,
    author = {Galimberti, Marco and Leuenberger, Christoph and Wolf, Beat and Szil{\'a}gyi, S{\'a}ndor M. and Foll, Matthieu and Wegmann, Daniel},
    title = {Detecting Selection from Linked Sites Using an F-Model},
    elocation-id = {genetics.303780.2020},
    year = {2020},
    doi = {10.1534/genetics.120.303780},
    publisher = {Genetics},
    abstract = {Allele frequencies vary across populations and loci, even in the presence of migration. While most differences may be due to genetic drift, divergent selection will further increase differentiation at some loci. Identifying those is key in studying local adaptation, but remains statistically challenging. A particularly elegant way to describe allele frequency differences among populations connected by migration is the F-model, which measures differences in allele frequencies by population specific FST coefficients. This model readily accounts for multiple evolutionary forces by partitioning FST coefficients into locus and population specific components reflecting selection and drift, respectively. Here we present an extension of this model to linked loci by means of a hidden Markov model (HMM), which characterizes the effect of selection on linked markers through correlations in the locus specific component along the genome. Using extensive simulations we show that the statistical power of our method is up to two-fold that of previous implementations that assume sites to be independent. We finally evidence selection in the human genome by applying our method to data from the Human Genome Diversity Project (HGDP).},
    issn = {0016-6731},
    url = {https://www.genetics.org/content/early/2020/10/16/genetics.120.303780},
    eprint = {https://www.genetics.org/content/early/2020/10/16/genetics.120.303780.full.pdf},
    journal = {Genetics}
    }

    Allele frequencies vary across populations and loci, even in the presence of migration. While most differences may be due to genetic drift, divergent selection will further increase differentiation at some loci. Identifying those is key in studying local adaptation, but remains statistically challenging. A particularly elegant way to describe allele frequency differences among populations connected by migration is the F-model, which measures differences in allele frequencies by population specific FST coefficients. This model readily accounts for multiple evolutionary forces by partitioning FST coefficients into locus and population specific components reflecting selection and drift, respectively. Here we present an extension of this model to linked loci by means of a hidden Markov model (HMM), which characterizes the effect of selection on linked markers through correlations in the locus specific component along the genome. Using extensive simulations we show that the statistical power of our method is up to two-fold that of previous implementations that assume sites to be independent. We finally evidence selection in the human genome by applying our method to data from the Human Genome Diversity Project (HGDP).
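
    For readers unfamiliar with the F-model, a minimal sketch of the decomposition the abstract refers to: each locus-by-population FST is split on a logistic scale into a locus-specific component (selection) and a population-specific component (drift). The HMM correlating the locus component along the genome, the paper's actual contribution, is not reproduced here; values are illustrative.

    import numpy as np

    def fst(alpha_locus, beta_pop):
        # F_ST for one locus/population pair from its two components,
        # logit(F_ST) = alpha_l + beta_p (Beaumont-Balding style F-model).
        return 1.0 / (1.0 + np.exp(-(alpha_locus + beta_pop)))

    alphas = np.array([0.0, 0.0, 2.0])  # third locus under putative selection
    betas = np.array([-2.0, -1.0])      # two populations with different drift
    F = fst(alphas[:, None], betas[None, :])  # loci x populations matrix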

  • [PDF] L. Linder, M. Jungo, J. Hennebert, C. C. Musat, and A. Fischer, "Automatic Creation of Text Corpora for Low-Resource Languages from the Internet: The Case of Swiss German," in Proceedings of The 12th Language Resources and Evaluation Conference, Marseille, France, 2020, p. 2706–2711.
    [Bibtex] [Abstract]
    @InProceedings{linder2020crawler,
    author = {Linder, Lucy and Jungo, Michael and Hennebert, Jean and Musat, Claudiu Cristian and Fischer, Andreas},
    title = {Automatic Creation of Text Corpora for Low-Resource Languages from the Internet: The Case of Swiss German},
    booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
    month = {May},
    year = {2020},
    address = {Marseille, France},
    publisher = {European Language Resources Association},
    pages = {2706--2711},
    abstract = {This paper presents SwissCrawl, the largest Swiss German text corpus to date. Composed of more than half a million sentences, it was generated using a customized web scraping tool that could be applied to other low-resource languages as well. The approach demonstrates how freely available web pages can be used to construct comprehensive text corpora, which are of fundamental importance for natural language processing. In an experimental evaluation, we show that using the new corpus leads to significant improvements for the task of language modeling.},
    url = {https://www.aclweb.org/anthology/2020.lrec-1.329}
    }

    This paper presents SwissCrawl, the largest Swiss German text corpus to date. Composed of more than half a million sentences, it was generated using a customized web scraping tool that could be applied to other low-resource languages as well. The approach demonstrates how freely available web pages can be used to construct comprehensive text corpora, which are of fundamental importance for natural language processing. In an experimental evaluation, we show that using the new corpus leads to significant improvements for the task of language modeling.
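
    A much-simplified sketch of the harvesting loop such a scraper runs (all names are placeholders; in particular, is_swiss_german stands in for the paper's trained language filter):

    import re
    import requests

    def is_swiss_german(sentence):
        # Toy stand-in for a trained classifier: look for a few Swiss German
        # function words. Not representative of the real filter.
        markers = {"isch", "nöd", "chli", "öppis", "gsi"}
        return any(w in markers for w in sentence.lower().split())

    def harvest(urls, min_words=4):
        corpus = []
        for url in urls:
            html = requests.get(url, timeout=10).text
            text = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping
            for sent in re.split(r"(?<=[.!?])\s+", text):
                if len(sent.split()) >= min_words and is_swiss_german(sent):
                    corpus.append(sent.strip())
        return corpus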

  • [PDF] J. Parrat, J. Bacher, F. Radu, and J. Hennebert, "Rendre visibles les pulsations de la ville," bulletin.ch, vol. 6, p. 22–26, 2020.
    [Bibtex]
    @article{parrat2020rendre,
    title={Rendre visibles les pulsations de la ville},
    author={Parrat, Jonathan and Bacher, Jean-Philippe and Radu, Florinel and Hennebert, Jean},
    journal={bulletin.ch},
    volume={6},
    pages={22--26},
    year={2020},
    publisher={Electrosuisse et l'Association des entreprises électriques suisses (AES)}
    }
  • [PDF] [DOI] L. Rychener, F. Montet, and J. Hennebert, "Architecture Proposal for Machine Learning Based Industrial Process Monitoring," Procedia Computer Science, vol. 170, p. 648–655, 2020.
    [Bibtex] [Abstract]
    @article{rychener2020architecture,
    title={Architecture Proposal for Machine Learning Based Industrial Process Monitoring},
    author={Rychener, Lorenz and Montet, Fr{\'e}d{\'e}ric and Hennebert, Jean},
    journal={Procedia Computer Science},
    volume={170},
    pages={648--655},
    year={2020},
    publisher={Elsevier},
    issn = {1877-0509},
    doi = {https://doi.org/10.1016/j.procs.2020.03.137},
    url = {http://www.sciencedirect.com/science/article/pii/S1877050920305925},
    keywords = {System Architecture, Rule Engine, Anomaly Detection, Monitoring, Industry 4.0},
    abstract = {In the context of Industry 4.0, an emerging trend is to increase the reliability of industrial process by using machine learning (ML) to detect anomalies of production machines. The main advantages of ML are in the ability to (1) capture non-linear phenomena, (2) adapt to many different processes without human intervention and (3) learn incrementally and improve over time. In this paper, we take the perspective of IT system architects and analyse the implications of the inclusion of ML components into a traditional anomaly detection systems. Through a prototype that we deployed for chemical reactors, our findings are that such ML components are impacting drastically the architecture of classical alarm systems. First, there is a need for long-term storage of the data that are used to train the models. Second, the training and usage of ML models can be CPU intensive and may request using specific resources. Third, there is no single algorithm that can detect machine errors. Fourth, human crafted alarm rules can now also include a learning process to improve these rules, for example by using active learning with a human-in-the-loop approach. These reasons are the motivations behind a microservice-based architecture for an alarm system in industrial machinery.}
    }

    In the context of Industry 4.0, an emerging trend is to increase the reliability of industrial process by using machine learning (ML) to detect anomalies of production machines. The main advantages of ML are in the ability to (1) capture non-linear phenomena, (2) adapt to many different processes without human intervention and (3) learn incrementally and improve over time. In this paper, we take the perspective of IT system architects and analyse the implications of the inclusion of ML components into a traditional anomaly detection systems. Through a prototype that we deployed for chemical reactors, our findings are that such ML components are impacting drastically the architecture of classical alarm systems. First, there is a need for long-term storage of the data that are used to train the models. Second, the training and usage of ML models can be CPU intensive and may request using specific resources. Third, there is no single algorithm that can detect machine errors. Fourth, human crafted alarm rules can now also include a learning process to improve these rules, for example by using active learning with a human-in-the-loop approach. These reasons are the motivations behind a microservice-based architecture for an alarm system in industrial machinery.
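
    To make the microservice argument concrete, here is a hypothetical sketch of one ML detector exposed over HTTP so the alarm system can combine several such services; the endpoint, payload layout, and IsolationForest detector are all assumptions, not the paper's interface.

    import numpy as np
    from flask import Flask, jsonify, request
    from sklearn.ensemble import IsolationForest

    app = Flask(__name__)
    # Stand-in training data; the paper argues for long-term storage of real
    # process data precisely so that such models can be (re)trained.
    model = IsolationForest(random_state=0).fit(np.random.randn(1000, 4))

    @app.route("/score", methods=["POST"])
    def score():
        x = np.array(request.json["sensors"]).reshape(1, -1)
        # Higher value = more anomalous reading from the monitored machine.
        return jsonify({"anomaly_score": float(-model.score_samples(x)[0])})

    if __name__ == "__main__":
        app.run(port=5001)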

  • [PDF] L. Schmidt, L. Linder, S. Djambazovska, A. Lazaridis, T. Samardžić, and C. Musat, "A Swiss German Dictionary: Variation in Speech and Writing," in Proceedings of The 12th Language Resources and Evaluation Conference, Marseille, France, 2020, p. 2720–2725.
    [Bibtex] [Abstract]
    @InProceedings{schmidt2020gswdict,
    author = {Schmidt, Larissa and Linder, Lucy and Djambazovska, Sandra and Lazaridis, Alexandros and Samardžić, Tanja and Musat, Claudiu},
    title = {A Swiss German Dictionary: Variation in Speech and Writing},
    booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
    month = {May},
    year = {2020},
    address = {Marseille, France},
    publisher = {European Language Resources Association},
    pages = {2720--2725},
    abstract = {We introduce a dictionary containing normalized forms of common words in various Swiss German dialects into High German. As Swiss German is, for now, a predominantly spoken language, there is a significant variation in the written forms, even between speakers of the same dialect. To alleviate the uncertainty associated with this diversity, we complement the pairs of Swiss German - High German words with the Swiss German phonetic transcriptions (SAMPA). This dictionary becomes thus the first resource to combine large-scale spontaneous translation with phonetic transcriptions. Moreover, we control for the regional distribution and insure the equal representation of the major Swiss dialects. The coupling of the phonetic and written Swiss German forms is powerful. We show that they are sufficient to train a Transformer-based phoneme to grapheme model that generates credible novel Swiss German writings. In addition, we show that the inverse mapping - from graphemes to phonemes - can be modeled with a transformer trained with the novel dictionary. This generation of pronunciations for previously unknown words is key in training extensible automated speech recognition (ASR) systems, which are key beneficiaries of this dictionary.},
    url = {https://www.aclweb.org/anthology/2020.lrec-1.331}
    }

    We introduce a dictionary containing normalized forms of common words in various Swiss German dialects into High German. As Swiss German is, for now, a predominantly spoken language, there is a significant variation in the written forms, even between speakers of the same dialect. To alleviate the uncertainty associated with this diversity, we complement the pairs of Swiss German - High German words with the Swiss German phonetic transcriptions (SAMPA). This dictionary becomes thus the first resource to combine large-scale spontaneous translation with phonetic transcriptions. Moreover, we control for the regional distribution and insure the equal representation of the major Swiss dialects. The coupling of the phonetic and written Swiss German forms is powerful. We show that they are sufficient to train a Transformer-based phoneme to grapheme model that generates credible novel Swiss German writings. In addition, we show that the inverse mapping - from graphemes to phonemes - can be modeled with a transformer trained with the novel dictionary. This generation of pronunciations for previously unknown words is key in training extensible automated speech recognition (ASR) systems, which are key beneficiaries of this dictionary.

  • [PDF] M. Stauffer, A. Fischer, and K. Riesen, "Filters for Graph-Based Keyword Spotting in Historical Handwritten Documents," Pattern Recognition Letters, vol. 134, p. 125–134, 2020.
    [Bibtex]
    @article{stauffer18filters,
    Author = {M. Stauffer and A. Fischer and K. Riesen},
    Date-Added = {2018-10-04 07:21:31 +0000},
    Date-Modified = {2018-10-04 07:22:50 +0000},
    Journal = {Pattern Recognition Letters},
    Pages = {125--134},
    Title = {Filters for Graph-Based Keyword Spotting in Historical Handwritten Documents},
    Volume = {134},
    Year = {2020}}
  • [PDF] [DOI] L. Studer, J. Wallau, R. Ingold, and A. Fischer, "Effects of Graph Pooling Layers on Classification with Graph Neural Networks," in 2020 7th Swiss Conference on Data Science (SDS), 2020, p. 57–58.
    [Bibtex]
    @inproceedings{studer20effects,
    author={Studer, Linda and Wallau, Jannis and Ingold, Rolf and Fischer, Andreas},
    booktitle={2020 7th Swiss Conference on Data Science (SDS)},
    title={Effects of Graph Pooling Layers on Classification with Graph Neural Networks},
    year={2020},
    pages={57--58},
    keywords={Computer architecture;Machine learning;Neural networks;Convolution;Databases;Glands;Image edge detection;graph neural networks;graph pooling;graphs},
    doi={10.1109/SDS49233.2020.00021}}
  • [PDF] M. Tornare, Un réseau intelligent gérera le trafic, 2020.
    [Bibtex]
    @misc{tornare2020laliberte,
    author = {Tornare, Maud},
    year = {2020},
    month = {March},
    title = {Un r{\'e}seau intelligent g{\'e}rera le trafic},
    howpublished = {Journal La Libert{\'e}},
    }
  • [DOI] B. Wolf, J. Donzallaz, C. Jost, A. Hayoz, S. Commend, J. Hennebert, and P. Kuonen, "Using CNNs to Optimize Numerical Simulations in Geotechnical Engineering," in Artificial Neural Networks in Pattern Recognition, Cham, 2020, p. 247–256.
    [Bibtex] [Abstract]
    @InProceedings{10.1007/978-3-030-58309-5_20,
    author="Wolf, Beat
    and Donzallaz, Jonathan
    and Jost, Colette
    and Hayoz, Amanda
    and Commend, St{\'e}phane
    and Hennebert, Jean
    and Kuonen, Pierre",
    editor="Schilling, Frank-Peter
    and Stadelmann, Thilo",
    title="Using CNNs to Optimize Numerical Simulations in Geotechnical Engineering",
    booktitle="Artificial Neural Networks in Pattern Recognition",
    year="2020",
    publisher="Springer International Publishing",
    address="Cham",
    pages="247--256",
    abstract="Deep excavations are today mainly designed by manually optimising the wall's geometry, stiffness and strut or anchor layout. In order to better automate this process for sustained excavations, we are exploring the possibility of approximating key values using a machine learning (ML) model instead of calculating them with time-consuming numerical simulations. After demonstrating in our previous work that this approach works for simple use cases, we show in this paper that this method can be enhanced to adapt to complex real-world supported excavations. We have improved our ML model compared to our previous work by using a convolutional neural network (CNN) model, coding the excavation configuration as a set of layers of fixed height containing the soil parameters as well as the geometry of the walls and struts. The system is trained and evaluated on a set of synthetically generated situations using numerical simulation software. To validate this approach, we also compare our results to a set of 15 real-world situations in a t-SNE. Using our improved CNN model we could show that applying machine learning to predict the output of numerical simulation in the domain of geotechnical engineering not only works in simple cases but also in more complex, real-world situations.",
    isbn="978-3-030-58309-5",
    url={https://link.springer.com/chapter/10.1007/978-3-030-58309-5_20},
    doi={https://doi.org/10.1007/978-3-030-58309-5_20}
    }

    Deep excavations are today mainly designed by manually optimising the wall's geometry, stiffness and strut or anchor layout. In order to better automate this process for sustained excavations, we are exploring the possibility of approximating key values using a machine learning (ML) model instead of calculating them with time-consuming numerical simulations. After demonstrating in our previous work that this approach works for simple use cases, we show in this paper that this method can be enhanced to adapt to complex real-world supported excavations. We have improved our ML model compared to our previous work by using a convolutional neural network (CNN) model, coding the excavation configuration as a set of layers of fixed height containing the soil parameters as well as the geometry of the walls and struts. The system is trained and evaluated on a set of synthetically generated situations using numerical simulation software. To validate this approach, we also compare our results to a set of 15 real-world situations in a t-SNE. Using our improved CNN model we could show that applying machine learning to predict the output of numerical simulation in the domain of geotechnical engineering not only works in simple cases but also in more complex, real-world situations.
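
    A minimal sketch of the encoding idea (illustrative layer counts and sizes, not the paper's architecture): the excavation becomes a sequence of fixed-height layers, each carrying a vector of soil and geometry parameters, and a 1D CNN regresses the key values otherwise computed by numerical simulation.

    import torch
    import torch.nn as nn

    class ExcavationCNN(nn.Module):
        def __init__(self, n_params=8, n_outputs=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(n_params, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(64, n_outputs),  # e.g. wall deflections/moments
            )

        def forward(self, x):
            # x: (batch, n_params, n_layers) -- parameters per depth layer
            return self.net(x)

    model = ExcavationCNN()
    pred = model(torch.randn(4, 8, 40))  # 4 synthetic cases, 40 depth layers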

  • [PDF] N. Zurbuchen, P. Bruegger, and A. Wilde, "A Comparison of Machine Learning Algorithms for Fall Detection using Wearable Sensors," in 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC) (ICAIIC 2020), Fukuoka, Japan, 2020.
    [Bibtex] [Abstract]
    @inproceedings{zurb2020comparison,
    AUTHOR="Nicolas Zurbuchen and Pascal Bruegger and Adriana Wilde",
    TITLE="A Comparison of Machine Learning Algorithms for Fall Detection using Wearable Sensors",
    BOOKTITLE="2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC) (ICAIIC 2020)",
    ADDRESS="Fukuoka, Japan",
    DAYS=18,
    MONTH=feb,
    YEAR=2020,
    ABSTRACT="The proportion of people 60 years old and above is expected to double globally to reach 22\% by 2050. This creates societal challenges such as the increase of age-related illnesses and the need for caregivers. Falls are a major threat for the elderly, often causing serious injuries especially when the fallen person stays on the ground for a long time without assistance. This paper presents the development of a Fall Detection System (FDS) using an accelerometer combined with a gyroscope worn at the waist. Data come from SisFall, a publicly available dataset containing records of Activities of Daily Living and falls. We first applied preprocessing and a feature extraction stage before using five Machine Learning algorithms, allowing us to compare them. Ensemble learning algorithms such as Random Forest and Gradient Boosting have the best performance, with a Sensitivity and Specificity both close to 99\%. Our main contribution is the study of the effect of the sensors' sampling rate on the performance of the FDS. We experimented sampling rates between 1 and 200 Hz. The results show that, while high sampling rates tend to improve performance, a sampling rate of 50 Hz is generally sufficient for an accurate detection."
    }

    The proportion of people 60 years old and above is expected to double globally to reach 22% by 2050. This creates societal challenges such as the increase of age-related illnesses and the need for caregivers. Falls are a major threat for the elderly, often causing serious injuries especially when the fallen person stays on the ground for a long time without assistance. This paper presents the development of a Fall Detection System (FDS) using an accelerometer combined with a gyroscope worn at the waist. Data come from SisFall, a publicly available dataset containing records of Activities of Daily Living and falls. We first applied preprocessing and a feature extraction stage before using five Machine Learning algorithms, allowing us to compare them. Ensemble learning algorithms such as Random Forest and Gradient Boosting have the best performance, with a Sensitivity and Specificity both close to 99%. Our main contribution is the study of the effect of the sensors' sampling rate on the performance of the FDS. We experimented sampling rates between 1 and 200 Hz. The results show that, while high sampling rates tend to improve performance, a sampling rate of 50 Hz is generally sufficient for an accurate detection.

  • Graph-Based Keyword Spotting, M. Stauffer, A. Fischer, and K. Riesen, Eds., World Scientific, 2019.
    [Bibtex]
    @book{stauffer19graphbased,
    Date-Added = {2022-09-27 13:28:45 +0200},
    Date-Modified = {2022-09-27 13:31:21 +0200},
    Editor = {M. Stauffer and A. Fischer and K. Riesen},
    Publisher = {World Scientific},
    Title = {Graph-Based Keyword Spotting},
    Year = {2019}}
  • [PDF] M. R. Ameri, M. Stauffer, K. Riesen, T. D. Bui, and A. Fischer, "Graph-Based Keyword Spotting in Historical Manuscripts Using Hausdorff Edit Distance," Pattern Recognition Letters, vol. 121, pp. 61-67, 2019.
    [Bibtex]
    @article{ameri18graphbased,
    Author = {M.R. Ameri and M. Stauffer and K. Riesen and T.D. Bui and A. Fischer},
    Date-Added = {2018-10-04 07:18:51 +0000},
    Date-Modified = {2018-10-04 07:21:27 +0000},
    Journal = {Pattern Recognition Letters},
    Pages = {61-67},
    Title = {Graph-Based Keyword Spotting in Historical Manuscripts Using Hausdorff Edit Distance},
    Volume = {121},
    Year = {2019}}
  • [PDF] [DOI] F. Bapst, W. Bhimji, P. Calafiura, H. Gray, W. Lavrijsen, L. Linder, and A. Smith, "A pattern recognition algorithm for quantum annealers," Computing and Software for Big Science, 2019.
    [Bibtex] [Abstract]
    @article{bapst2019,
    author = {Bapst, Frédéric and Bhimji, Wahid and Calafiura, Paolo and Gray, Heather and Lavrijsen, Wim and Linder, Lucy and Smith, Alex},
    url = {http://hesso.tind.io/record/6692},
    journal = {Computing and Software for Big Science},
    title = {A pattern recognition algorithm for quantum annealers},
    abstract = {The reconstruction of charged particles will be a key computing challenge for the high-luminosity Large Hadron Collider (HL-LHC) where increased data rates lead to a large increase in running time for current pattern recognition algorithms. An alternative approach explored here expresses pattern recognition as a quadratic unconstrained binary optimization (QUBO), which allows algorithms to be run on classical and quantum annealers. While the overall timing of the proposed approach and its scaling has still to be measured and studied, we demonstrate that, in terms of efficiency and purity, the same physics performance of the LHC tracking algorithms can be achieved. More research will be needed to achieve comparable performance in HL-LHC conditions, as increasing track density decreases the purity of the QUBO track segment classifier.},
    doi = {10.1007/s41781-019-0032-5},
    year = {2019},
    }

    The reconstruction of charged particles will be a key computing challenge for the high-luminosity Large Hadron Collider (HL-LHC) where increased data rates lead to a large increase in running time for current pattern recognition algorithms. An alternative approach explored here expresses pattern recognition as a quadratic unconstrained binary optimization (QUBO), which allows algorithms to be run on classical and quantum annealers. While the overall timing of the proposed approach and its scaling has still to be measured and studied, we demonstrate that, in terms of efficiency and purity, the same physics performance of the LHC tracking algorithms can be achieved. More research will be needed to achieve comparable performance in HL-LHC conditions, as increasing track density decreases the purity of the QUBO track segment classifier.
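
    A toy rendering of the central idea (not the paper's actual QUBO construction): candidate track segments become binary variables, the QUBO energy rewards compatible pairs and penalizes conflicts, and the minimum-energy assignment selects segments; brute force stands in for the annealer.

    import itertools
    import numpy as np

    # Q[i, i] is the linear bias of segment i; off-diagonal entries couple
    # pairs (negative = compatible/reward, positive = conflict/penalty).
    Q = np.array([[-1.0,  0.0,  2.0],
                  [ 0.0, -1.0, -0.5],
                  [ 2.0, -0.5, -1.0]])

    def energy(x, Q):
        return x @ Q @ x  # x is a 0/1 assignment vector

    best = min((np.array(b) for b in itertools.product([0, 1], repeat=len(Q))),
               key=lambda x: energy(x, Q))
    print(best, energy(best, Q))  # selects the two compatible segments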

  • [PDF] S. Commend, S. Wattel, J. Hennebert, P. Kuonen, and L. Vulliet, "Prediction of unsupported excavations behaviour with machine learning techniques," in COMPLAS 2019, 2019, pp. 529-535.
    [Bibtex]
    @InProceedings{commend2019prediction,
    author={St{\'{e}}phane Commend and Sacha Wattel and Jean Hennebert and Pierre Kuonen and Laurent Vulliet},
    booktitle={COMPLAS 2019},
    title={Prediction of unsupported excavations behaviour with machine learning techniques},
    year={2019},
    pages={529-535},
    month={September},
    }
  • [PDF] [DOI] I. S. Comşa, S. Zhang, M. Aydin, P. Kuonen, R. Trestian, and G. Ghinea, "A Comparison of Reinforcement Learning Algorithms in Fairness-Oriented OFDMA Schedulers," Information (Switzerland), vol. 10, iss. 10, 2019.
    [Bibtex]
    @article{comsa2019comparison,
    author = {Comsa, Ioan Sorin and Zhang, Sijing and Aydin, Mehmet and Kuonen, Pierre and Trestian, Ramona and Ghinea, Gheorghita},
    year = {2019},
    month = {10},
    pages = {25},
    title = {A Comparison of Reinforcement Learning Algorithms in Fairness-Oriented OFDMA Schedulers},
    journal = {Information (Switzerland)},
    doi = {10.3390/info10100315}
    }
  • [PDF] [DOI] I. Comşa, S. Zhang, M. Aydin, P. Kuonen, R. Trestian, and G. Ghinea, "Enhancing User Fairness in OFDMA Radio Access Networks Through Machine Learning," in 2019 Wireless Days (WD), 2019, pp. 1-8.
    [Bibtex] [Abstract]
    @InProceedings{comsa2019enhancing,
    author={I. {Comşa} and S. {Zhang} and M. {Aydin} and P. {Kuonen} and R. {Trestian} and G. {Ghinea}},
    booktitle={2019 Wireless Days (WD)},
    title={Enhancing User Fairness in OFDMA Radio Access Networks Through Machine Learning},
    year={2019},
    volume={},
    number={},
    pages={1-8},
    abstract={The problem of radio resource scheduling subject to fairness satisfaction is very challenging even in future radio access networks. Standard fairness criteria aim to find the best trade-off between overall throughput maximization and user fairness satisfaction under various types of network conditions. However, at the Radio Resource Management (RRM) level, the existing schedulers are rather static being unable to react according to the momentary networking conditions so that the user fairness measure is maximized all time. This paper proposes a dynamic scheduler framework able to parameterize the proportional fair scheduling rule at each Transmission Time Interval (TTI) to improve the user fairness. To deal with the framework complexity, the parameterization decisions are approximated by using the neural networks as non-linear functions. The actor-critic Reinforcement Learning (RL) algorithm is used to learn the best set of non-linear functions that approximate the best fairness parameters to be applied in each momentary state. Simulations results reveal that the proposed framework outperforms the existing fairness adaptation techniques as well as other types of RL-based schedulers.},
    keywords={frequency division multiple access;learning (artificial intelligence);neural nets;OFDM modulation;optimisation;quality of service;radio access networks;resource allocation;telecommunication computing;telecommunication scheduling;machine Learning;fairness satisfaction;standard fairness criteria;network conditions;Radio Resource Management level;momentary networking conditions;user fairness measure;dynamic scheduler framework;proportional fair scheduling rule;neural networks;nonlinear functions;RL-based schedulers;OFDMA;radio resource scheduling;radio access networks;overall throughput maximization;fairness adaptation techniques;actor-critic reinforcement learning algorithm;Transmission time interval;framework complexity;parameterization decisions;momentary state;Throughput;Resource management;Quality of service;Dynamic scheduling;Heuristic algorithms;Optimization;Wireless communication;RRM;Resource Scheduling;Fairness Optimization;Reinforcement Learning;Neural Networks},
    doi={10.1109/WD.2019.8734262},
    ISSN={},
    month={April},
    }

    The problem of radio resource scheduling subject to fairness satisfaction is very challenging even in future radio access networks. Standard fairness criteria aim to find the best trade-off between overall throughput maximization and user fairness satisfaction under various types of network conditions. However, at the Radio Resource Management (RRM) level, the existing schedulers are rather static being unable to react according to the momentary networking conditions so that the user fairness measure is maximized all time. This paper proposes a dynamic scheduler framework able to parameterize the proportional fair scheduling rule at each Transmission Time Interval (TTI) to improve the user fairness. To deal with the framework complexity, the parameterization decisions are approximated by using the neural networks as non-linear functions. The actor-critic Reinforcement Learning (RL) algorithm is used to learn the best set of non-linear functions that approximate the best fairness parameters to be applied in each momentary state. Simulations results reveal that the proposed framework outperforms the existing fairness adaptation techniques as well as other types of RL-based schedulers.
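
    The rule being parameterized is the (generalized) proportional-fair metric; a small sketch, with illustrative numbers, of how the fairness exponent the RL agent would tune changes the scheduling decision:

    import numpy as np

    def pf_priorities(inst_rate, avg_thr, alpha=1.0, beta=1.0):
        # Generalized PF: r^alpha / R^beta; the agent would adapt the
        # exponents per TTI/state. beta > 1 favours starved users.
        return inst_rate ** alpha / np.maximum(avg_thr, 1e-9) ** beta

    r = np.array([10.0, 4.0, 6.0])  # achievable rates this TTI
    R = np.array([8.0, 1.0, 5.0])   # historical average throughputs
    scheduled = int(np.argmax(pf_priorities(r, R, beta=1.5)))  # -> user 1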

  • [PDF] [DOI] N. Kocher, C. Scuito, L. Tarantino, A. Lazaridis, A. Fischer, and C. Musat, "Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes," in Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), Hong Kong, China, 2019, p. 890–899.
    [Bibtex] [Abstract]
    @inproceedings{kocher-etal-2019-alleviating,
    title = "Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes",
    author = "Kocher, No{\'e}mien and Scuito, Christian and Tarantino, Lorenzo and Lazaridis, Alexandros and Fischer, Andreas and Musat, Claudiu",
    booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
    month = {nov},
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/K19-1083",
    doi = "10.18653/v1/K19-1083",
    pages = "890--899",
    abstract = "In sequence modeling tasks the token order matters, but this information can be partially lost due to the discretization of the sequence into data points. In this paper, we study the imbalance between the way certain token pairs are included in data points and others are not. We denote this a token order imbalance (TOI) and we link the partial sequence information loss to a diminished performance of the system as a whole, both in text and speech processing tasks. We then provide a mechanism to leverage the full token order information{---}Alleviated TOI{---}by iteratively overlapping the token composition of data points. For recurrent networks, we use prime numbers for the batch size to avoid redundancies when building batches from overlapped data points. The proposed method achieved state of the art performance in both text and speech related tasks.",
    }

    In sequence modeling tasks the token order matters, but this information can be partially lost due to the discretization of the sequence into data points. In this paper, we study the imbalance between the way certain token pairs are included in data points and others are not. We denote this a token order imbalance (TOI) and we link the partial sequence information loss to a diminished performance of the system as a whole, both in text and speech processing tasks. We then provide a mechanism to leverage the full token order information—Alleviated TOI—by iteratively overlapping the token composition of data points. For recurrent networks, we use prime numbers for the batch size to avoid redundancies when building batches from overlapped data points. The proposed method achieved state of the art performance in both text and speech related tasks.
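
    The mechanism is easy to state in code; a sketch with illustrative parameters (the stride, point length, and batch size are placeholders):

    def overlapped_points(tokens, length, stride):
        # stride < length makes consecutive data points overlap, so every
        # in-range token pair ends up inside some data point (Alleviated TOI).
        return [tokens[i:i + length]
                for i in range(0, len(tokens) - length + 1, stride)]

    tokens = list(range(100))  # stand-in token ids
    points = overlapped_points(tokens, length=10, stride=5)  # 50% overlap
    BATCH_SIZE = 11            # a prime, per the paper's recipe for RNNs
    batches = [points[i:i + BATCH_SIZE]
               for i in range(0, len(points), BATCH_SIZE)]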

  • [PDF] [DOI] M. Kunz, B. Wolf, M. Fuchs, J. Christoph, K. Xiao, T. Thum, D. Atlan, H. Prokosch, and T. Dandekar, "A comprehensive method protocol for annotation and integrated functional understanding of lncRNAs," Briefings in Bioinformatics, 2019.
    [Bibtex] [Abstract]
    @article{kunz2019comprehensive,
    author = {Kunz, Meik and Wolf, Beat and Fuchs, Maximilian and Christoph, Jan and Xiao, Ke and Thum, Thomas and Atlan, David and Prokosch, Hans-Ulrich and Dandekar, Thomas},
    title = "{A comprehensive method protocol for annotation and integrated functional understanding of lncRNAs}",
    journal = {Briefings in Bioinformatics},
    year = {2019},
    month = {10},
    abstract = "{Long non-coding RNAs (lncRNAs) are of fundamental biological importance; however, their functional role is often unclear or loosely defined as experimental characterization is challenging and bioinformatic methods are limited. We developed a novel integrated method protocol for the annotation and detailed functional characterization of lncRNAs within the genome. It combines annotation, normalization and gene expression with sequence-structure conservation, functional interactome and promoter analysis. Our protocol allows an analysis based on the tissue and biological context, and is powerful in functional characterization of experimental and clinical RNA-Seq datasets including existing lncRNAs. This is demonstrated on the uncharacterized lncRNA GATA6-AS1 in dilated cardiomyopathy.}",
    issn = {1477-4054},
    doi = {10.1093/bib/bbz066},
    url = {https://doi.org/10.1093/bib/bbz066},
    note = {bbz066},
    eprint = {http://oup.prod.sis.lan/bib/advance-article-pdf/doi/10.1093/bib/bbz066/30096180/bbz066.pdf},
    }

    Long non-coding RNAs (lncRNAs) are of fundamental biological importance; however, their functional role is often unclear or loosely defined as experimental characterization is challenging and bioinformatic methods are limited. We developed a novel integrated method protocol for the annotation and detailed functional characterization of lncRNAs within the genome. It combines annotation, normalization and gene expression with sequence-structure conservation, functional interactome and promoter analysis. Our protocol allows an analysis based on the tissue and biological context, and is powerful in functional characterization of experimental and clinical RNA-Seq datasets including existing lncRNAs. This is demonstrated on the uncharacterized lncRNA GATA6-AS1 in dilated cardiomyopathy.

  • P. Maergner, T. S. Karabacakoglu, K. Riesen, R. Ingold, and A. Fischer, "Synthetic Generation of Online Signatures using a Deep Generative Model," in Proc. 19th International Graphonomics Conference (IGS), 2019.
    [Bibtex]
    @inproceedings{maergner19synthetic,
    Author = {P. Maergner and T.S. Karabacakoglu and K. Riesen and R. Ingold and A. Fischer},
    Booktitle = {Proc. 19th International Graphonomics Conference (IGS)},
    Date-Added = {2019-12-09 15:56:55 +0100},
    Date-Modified = {2019-12-09 15:59:17 +0100},
    Title = {Synthetic Generation of Online Signatures using a Deep Generative Model},
    Year = {2019}
    }
  • [PDF] P. Maergner, V. Pondenkandath, M. Alberti, M. Liwicki, K. Riesen, R. Ingold, and A. Fischer, "Combining graph edit distance and triplet networks for offline signature verification," Pattern Recognition Letters, vol. 125, p. 527–533, 2019.
    [Bibtex]
    @article{maergner19combining,
    Author = {P. Maergner and V. Pondenkandath and M. Alberti and M. Liwicki and K. Riesen and R. Ingold and A. Fischer},
    Date-Added = {2019-12-09 15:33:24 +0100},
    Date-Modified = {2019-12-09 15:36:37 +0100},
    Journal = {Pattern Recognition Letters},
    Pages = {527--533},
    Title = {Combining graph edit distance and triplet networks for offline signature verification},
    Volume = {125},
    Year = {2019}
    }
  • [PDF] [DOI] V. Nejkovic, A. Visa, M. Tosic, N. Petrovic, M. Valkama, M. Koivisto, J. Talvitie, S. Rancic, D. Grzonka, J. Tchorzewski, P. Kuonen, and F. Gortazar, "Big Data in 5G Distributed Applications," in High-Performance Modelling and Simulation for Big Data Applications: Selected Results of the COST Action IC1406 cHiPSet, J. Kołodziej and H. González-Vélez, Eds., Cham: Springer International Publishing, 2019, p. 138–162.
    [Bibtex] [Abstract]
    @Inbook{nejkovic2019high,
    author="Nejkovic, Valentina and Visa, Ari and Tosic, Milorad and Petrovic, Nenad and Valkama, Mikko and Koivisto, Mike and Talvitie, Jukka and Rancic, Svetozar and Grzonka, Daniel and Tchorzewski, Jacek and Kuonen, Pierre and Gortazar, Francisco",
    editor="Ko{\l}odziej, Joanna and Gonz{\'a}lez-V{\'e}lez, Horacio",
    title="Big Data in 5G Distributed Applications",
    bookTitle="High-Performance Modelling and Simulation for Big Data Applications: Selected Results of the COST Action IC1406 cHiPSet",
    year="2019",
    publisher="Springer International Publishing",
    address="Cham",
    pages="138--162",
    abstract="Fifth generation mobile networks (5G) will rather supplement than replace current 4G networks by dramatically improving their bandwidth, capacity and reliability. This way, much more demanding use cases that simply are not achievable with today's networks will become reality - from home entertainment, to product manufacturing and healthcare. However, many of them rely on Internet of Things (IoT) devices equipped with low-cost transmitters and sensors that generate enormous amount of data about their environment. Therefore, due to large scale of 5G systems, combined with their inherent complexity and heterogeneity, Big Data and analysis techniques are considered as one of the main enablers of future mobile networks. In this work, we recognize 5G use cases from various application domains and list the basic requirements for their development and realization.",
    isbn="978-3-030-16272-6",
    doi="10.1007/978-3-030-16272-6_5",
    url="https://doi.org/10.1007/978-3-030-16272-6_5"
    }

    Fifth generation mobile networks (5G) will rather supplement than replace current 4G networks by dramatically improving their bandwidth, capacity and reliability. This way, much more demanding use cases that simply are not achievable with today's networks will become reality - from home entertainment, to product manufacturing and healthcare. However, many of them rely on Internet of Things (IoT) devices equipped with low-cost transmitters and sensors that generate enormous amount of data about their environment. Therefore, due to large scale of 5G systems, combined with their inherent complexity and heterogeneity, Big Data and analysis techniques are considered as one of the main enablers of future mobile networks. In this work, we recognize 5G use cases from various application domains and list the basic requirements for their development and realization.

  • [PDF] A. Scius-Bertrand, L. Voegtlin, M. Alberti, A. Fischer, and M. Bui, "Layout Analysis and Text Column Segmentation for Historical Vietnamese Steles," in Proc. 5th Int. Workshop on Historical Document Imaging and Processing (HIP), 2019, p. 84–89.
    [Bibtex]
    @inproceedings{scius19layout,
    Author = {A. Scius-Bertrand and L. Voegtlin and M. Alberti and A. Fischer and M. Bui},
    Booktitle = {Proc. 5th Int. Workshop on Historical Document Imaging and Processing (HIP)},
    Date-Added = {2019-12-09 15:52:38 +0100},
    Date-Modified = {2019-12-09 15:53:48 +0100},
    Pages = {84--89},
    Title = {Layout Analysis and Text Column Segmentation for Historical Vietnamese Steles},
    Year = {2019}
    }
  • [PDF] M. Stauffer, P. Maergner, A. Fischer, R. Ingold, and K. Riesen, "Offline Signature Verification using Structural Dynamic Time Warping," in Proc. 15th Int. Conf. on Document Analysis and Recognition (ICDAR), 2019, p. 1117–1124.
    [Bibtex]
    @inproceedings{stauffer19offline,
    Author = {M. Stauffer and P. Maergner and A. Fischer and R. Ingold and K. Riesen},
    Booktitle = {Proc. 15th Int. Conf. on Document Analysis and Recognition (ICDAR)},
    Date-Added = {2019-12-09 15:55:39 +0100},
    Date-Modified = {2019-12-09 15:56:39 +0100},
    Pages = {1117--1124},
    Title = {Offline Signature Verification using Structural Dynamic Time Warping},
    Year = {2019}
    }
  • [PDF] M. Stauffer, P. Maergner, A. Fischer, and K. Riesen, "Cross-Evaluation of Graph-Based Keyword Spotting in Handwritten Historical Documents," in Proc. 12th Int. Workshop on Graph-Based Representation in Pattern Recognition (GbR), 2019, p. 45–55.
    [Bibtex]
    @inproceedings{stauffer19crossevaluation,
    Author = {M. Stauffer and P. Maergner and A. Fischer and K. Riesen},
    Booktitle = {Proc. 12th Int. Workshop on Graph-Based Representation in Pattern Recognition (GbR)},
    Date-Added = {2019-12-09 15:48:22 +0100},
    Date-Modified = {2019-12-09 15:50:19 +0100},
    Pages = {45--55},
    Title = {Cross-Evaluation of Graph-Based Keyword Spotting in Handwritten Historical Documents},
    Year = {2019}
    }
  • [PDF] M. Stauffer, P. Maergner, A. Fischer, and K. Riesen, "Graph Embedding for Offline Handwritten Signature Verification," in Proc. 3rd Int. Conf. on Biometric Engineering and Applications (ICBEA), 2019, p. 69–76.
    [Bibtex]
    @inproceedings{stauffer19graph,
    Author = {M. Stauffer and P. Maergner and A. Fischer and K. Riesen},
    Booktitle = {Proc. 3rd Int. Conf. on Biometric Engineering and Applications (ICBEA)},
    Date-Added = {2019-12-09 15:50:38 +0100},
    Date-Modified = {2019-12-09 15:52:32 +0100},
    Pages = {69--76},
    Title = {Graph Embedding for Offline Handwritten Signature Verification},
    Year = {2019}
    }
  • [PDF] L. Studer, S. Toneyan, I. Zlobec, H. Dawson, and A. Fischer, "Graph-based Classification of Intestinal Glands in Colorectal Cancer Tissue Images," in Proc. 22nd Int. Conf. on Medical Image Computing and Computer Assisted Intervention (MICCAI), Computational Pathology Workshop (COMPAY), 2019, p. 1–8.
    [Bibtex]
    @inproceedings{studer19graphbased,
    Author = {L. Studer and S. Toneyan and I. Zlobec and H. Dawson and A. Fischer},
    Booktitle = {Proc. 22nd Int. Conf. on Medical Image Computing and Computer Assisted Intervention (MICCAI), Computational Pathology Workshop (COMPAY)},
    Date-Added = {2019-12-09 15:39:48 +0100},
    Date-Modified = {2019-12-09 15:45:52 +0100},
    Pages = {1--8},
    Title = {Graph-based Classification of Intestinal Glands in Colorectal Cancer Tissue Images},
    Year = {2019}
    }
  • [PDF] L. Studer, M. Alberti, V. Pondenkandath, P. Goktepe, T. Kolonko, A. Fischer, M. Liwicki, and R. Ingold, "A Comprehensive Study of ImageNet Pre-Training for Historical Document Image Analysis," in Proc. 15th Int. Conf. on Document Analysis and Recognition (ICDAR), 2019, p. 720–725.
    [Bibtex]
    @inproceedings{studer19acomprehensive,
    Author = {L. Studer and M. Alberti and V. Pondenkandath and P. Goktepe and T. Kolonko and A. Fischer and M. Liwicki and R. Ingold},
    Booktitle = {Proc. 15th Int. Conf. on Document Analysis and Recognition (ICDAR)},
    Date-Added = {2019-12-09 15:53:52 +0100},
    Date-Modified = {2019-12-09 15:55:27 +0100},
    Pages = {720--725},
    Title = {A Comprehensive Study of ImageNet Pre-Training for Historical Document Image Analysis},
    Year = {2019}
    }
  • [PDF] L. Studer, S. Toneyan, I. Zlobec, A. Lugli, A. Fischer, and H. Dawson, "Intestinal Gland Classification from Colorectal Cancer Tissue Images using Graph-based Methods," Der Pathologe, vol. 40, iss. 6, p. 688–689, 2019.
    [Bibtex]
    @article{studer19intestinal,
    Author = {L. Studer and S. Toneyan and I. Zlobec and A. Lugli and A. Fischer and H. Dawson},
    Date-Added = {2019-12-09 15:37:10 +0100},
    Date-Modified = {2019-12-09 15:39:19 +0100},
    Journal = {Der Pathologe},
    Number = {6},
    Pages = {688--689},
    Title = {Intestinal Gland Classification from Colorectal Cancer Tissue Images using Graph-based Methods},
    Volume = {40},
    Year = {2019}
    }
  • M. Vallo Docampo and P. Bruegger, "Humanitarian Organization ICT field specialists training: Bridging theoretical and practical humanitarian knowledge." 2019.
    [Bibtex]
    @InProceedings{ghtc2019ldocampo,
    author = {Vallo Docampo, Mariana and Bruegger, Pascal},
    title = {Humanitarian Organization ICT field specialists training: Bridging theoretical and practical humanitarian knowledge},
    note = {GHTC - IEEE Global Humanitarian Technology Conference},
    year = {2019},
    month = {oct}
    }
  • [PDF] [DOI] J. Böck, S. Appenzeller, L. Haertle, T. Schneider, A. Gehrig, J. Schröder, S. Rost, B. Wolf, C. R. Bartram, C. Sutter, and T. Haaf, "Single CpG hypermethylation, allele methylation errors, and decreased expression of multiple tumor suppressor genes in normal body cells of mutation‐negative early‐onset and high‐risk breast cancer patients," International Journal of Cancer, vol. 143, iss. 6, p. 1416–1425, 2018.
    [Bibtex] [Abstract]
    @article{bock2018single,
    author = {Böck, Julia and Appenzeller, Silke and Haertle, Larissa and Schneider, Tamara and Gehrig, Andrea and Schröder, Jörg and Rost, Simone and Wolf, Beat and Bartram, Claus R. and Sutter, Christian and Haaf, Thomas},
    year = {2018},
    title = {Single CpG hypermethylation, allele methylation errors, and decreased expression of multiple tumor suppressor genes in normal body cells of mutation‐negative early‐onset and high‐risk breast cancer patients},
    journal = {International Journal of Cancer},
    publisher = {Wiley Online Library},
    issn = {0020-7136},
    doi = {10.1002/ijc.31526},
    volume = {143},
    month = {9},
    pages = {1416--1425},
    number = {6},
    url = {https://doi.org/10.1002/ijc.31526},
    abstract = {To evaluate the role of constitutive epigenetic changes in normal body cells of BRCA1/BRCA2‐mutation negative patients, we have developed a deep bisulfite sequencing assay targeting the promoter regions of 8 tumor suppressor (TS) genes (BRCA1, BRCA2, RAD51C, ATM, PTEN, TP53, MLH1, RB1) and the estrogene receptor gene (ESR1), which plays a role in tumor progression. We analyzed blood samples of two breast cancer (BC) cohorts with early onset (EO) and high risk (HR) for a heterozygous mutation, respectively, along with age‐matched controls. Methylation analysis of up to 50,000 individual DNA molecules per gene and sample allowed quantification of epimutations (alleles with >50% methylated CpGs), which are associated with epigenetic silencing. Compared to ESR1, which is representative for an average promoter, TS genes were characterized by a very low (< 1%) average methylation level and a very low mean epimutation rate (EMR; < 0.0001% to 0.1%). With exception of BRCA1, which showed an increased EMR in BC (0.31% vs. 0.06%), there was no significant difference between patients and controls. One of 36 HR BC patients exhibited a dramatically increased EMR (14.7%) in BRCA1, consistent with a disease‐causing epimutation. Approximately one third (15 of 44) EO BC patients exhibited increased rates of single CpG methylation errors in multiple TS genes. Both EO and HR BC patients exhibited global underexpression of blood TS genes. We propose that epigenetic abnormalities in normal body cells are indicative of disturbed mechanisms for maintaining low methylation and appropriate expression levels and may be associated with an increased BC risk. What's new? Cancer can change patterns of DNA methylation, with widespread loss of methylation but also localized increases in methylation. Here, the authors analyzed blood cells, looking for differences in methylation between breast cancer patients and healthy persons. They developed a deep bisulfite sequencing assay to specifically test the promoter regions of 8 tumor suppressor genes, plus the estrogen receptor gene, along with reduced tumor suppressor gene expression. They found that breast cancer patients showed increased methylation changes in multiple tumor suppressor genes, reduced tumor suppressor gene expression. Thus, epigenetic abnormalities could indicate disruptions in the mechanisms that maintain proper methylation, and could signal increased tumor risk.}
    }

    To evaluate the role of constitutive epigenetic changes in normal body cells of BRCA1/BRCA2‐mutation negative patients, we have developed a deep bisulfite sequencing assay targeting the promoter regions of 8 tumor suppressor (TS) genes (BRCA1, BRCA2, RAD51C, ATM, PTEN, TP53, MLH1, RB1) and the estrogen receptor gene (ESR1), which plays a role in tumor progression. We analyzed blood samples of two breast cancer (BC) cohorts with early onset (EO) and high risk (HR) for a heterozygous mutation, respectively, along with age‐matched controls. Methylation analysis of up to 50,000 individual DNA molecules per gene and sample allowed quantification of epimutations (alleles with >50% methylated CpGs), which are associated with epigenetic silencing. Compared to ESR1, which is representative for an average promoter, TS genes were characterized by a very low (< 1%) average methylation level and a very low mean epimutation rate (EMR; < 0.0001% to 0.1%). With the exception of BRCA1, which showed an increased EMR in BC (0.31% vs. 0.06%), there was no significant difference between patients and controls. One of 36 HR BC patients exhibited a dramatically increased EMR (14.7%) in BRCA1, consistent with a disease‐causing epimutation. Approximately one third (15 of 44) EO BC patients exhibited increased rates of single CpG methylation errors in multiple TS genes. Both EO and HR BC patients exhibited global underexpression of blood TS genes. We propose that epigenetic abnormalities in normal body cells are indicative of disturbed mechanisms for maintaining low methylation and appropriate expression levels and may be associated with an increased BC risk. What's new? Cancer can change patterns of DNA methylation, with widespread loss of methylation but also localized increases in methylation. Here, the authors analyzed blood cells, looking for differences in methylation between breast cancer patients and healthy persons. They developed a deep bisulfite sequencing assay to specifically test the promoter regions of 8 tumor suppressor genes, plus the estrogen receptor gene. They found that breast cancer patients showed increased methylation changes in multiple tumor suppressor genes, along with reduced tumor suppressor gene expression. Thus, epigenetic abnormalities could indicate disruptions in the mechanisms that maintain proper methylation, and could signal increased tumor risk.
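
    The quantity driving these results is simple to compute once reads are reduced to binary methylation calls; a minimal sketch under that assumption (synthetic data, illustrative sizes):

    import numpy as np

    def epimutation_rate(reads):
        # reads: (n_reads, n_cpgs) binary matrix of per-CpG methylation calls.
        # An epimutation is a read/allele with >50% methylated CpGs; the EMR
        # is the fraction of such reads.
        per_read = reads.mean(axis=1)
        return float((per_read > 0.5).mean())

    rng = np.random.default_rng(0)
    reads = (rng.random((50000, 12)) < 0.03).astype(int)  # ~3% background
    emr = epimutation_rate(reads)  # near 0 for a low-methylation promoter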

  • [DOI] I. Comşa, S. Zhang, M. E. Aydin, P. Kuonen, Y. Lu, R. Trestian, and G. Ghinea, "Towards 5G: A Reinforcement Learning-Based Scheduling Solution for Data Traffic Management," IEEE Transactions on Network and Service Management, vol. 15, iss. 4, pp. 1661-1675, 2018.
    [Bibtex] [Abstract]
    @article{comsa2018toward5g,
    author={I. {Comşa} and S. {Zhang} and M. E. {Aydin} and P. {Kuonen} and Y. {Lu} and R. {Trestian} and G. {Ghinea}},
    journal={IEEE Transactions on Network and Service Management},
    title={Towards 5G: A Reinforcement Learning-Based Scheduling Solution for Data Traffic Management},
    year={2018},
    volume={15},
    number={4},
    pages={1661-1675},
    abstract={Dominated by delay-sensitive and massive data applications, radio resource management in 5G access networks is expected to satisfy very stringent delay and packet loss requirements. In this context, the packet scheduler plays a central role by allocating user data packets in the frequency domain at each predefined time interval. Standard scheduling rules are known to be limited in satisfying higher quality of service (QoS) demands when facing unpredictable network conditions and dynamic traffic circumstances. This paper proposes an innovative scheduling framework able to select different scheduling rules according to instantaneous scheduler states in order to minimize the packet delays and packet drop rates for strict QoS requirements applications. To deal with real-time scheduling, the reinforcement learning (RL) principles are used to map the scheduling rules to each state and to learn when to apply each. Additionally, neural networks are used as function approximation to cope with the RL complexity and very large representations of the scheduler state space. Simulation results demonstrate that the proposed framework outperforms the conventional scheduling strategies in terms of delay and packet drop rate requirements.},
    keywords={5G mobile communication;function approximation;learning (artificial intelligence);neural nets;quality of service;radio access networks;telecommunication computing;telecommunication scheduling;telecommunication traffic;packet drop rate requirements;reinforcement learning-based scheduling solution;stringent delay requirements;delay-sensitive applications;RL complexity;function approximation;quality-of-service demands;conventional scheduling strategies;scheduler state space;neural networks;reinforcement learning principles;real-time scheduling;strict QoS requirements applications;packet drop rates;packet delays;instantaneous scheduler states;innovative scheduling framework;dynamic traffic circumstances;unpredictable network conditions;higher quality;standard scheduling rules;predefined time interval;frequency domain;user data packets;central role;packet scheduler;packet loss requirements;5G access networks;radio resource management;massive data applications;data traffic management;Delays;Quality of service;Scheduling algorithms;Resource management;Dynamic scheduling;5G mobile communication;5G;packet scheduling;optimization;radio resource management;reinforcement learning;neural networks},
    doi={10.1109/TNSM.2018.2863563},
    month={Dec}
    }

    Dominated by delay-sensitive and massive data applications, radio resource management in 5G access networks is expected to satisfy very stringent delay and packet loss requirements. In this context, the packet scheduler plays a central role by allocating user data packets in the frequency domain at each predefined time interval. Standard scheduling rules are known to be limited in satisfying higher quality of service (QoS) demands when facing unpredictable network conditions and dynamic traffic circumstances. This paper proposes an innovative scheduling framework able to select different scheduling rules according to instantaneous scheduler states in order to minimize the packet delays and packet drop rates for strict QoS requirements applications. To deal with real-time scheduling, the reinforcement learning (RL) principles are used to map the scheduling rules to each state and to learn when to apply each. Additionally, neural networks are used as function approximation to cope with the RL complexity and very large representations of the scheduler state space. Simulation results demonstrate that the proposed framework outperforms the conventional scheduling strategies in terms of delay and packet drop rate requirements.
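
    The following minimal Python sketch illustrates the core idea of mapping scheduler states to scheduling rules with reinforcement learning. It uses a tabular Q-learning stand-in with invented states, rules, and reward weights; the paper itself uses neural-network function approximation over a much larger state space.

    ```python
    # Minimal tabular Q-learning sketch: learn which scheduling rule to apply
    # in each scheduler state. States, rules, and the reward are placeholders.
    import random
    from collections import defaultdict

    RULES = ["proportional_fair", "earliest_deadline_first", "max_throughput"]

    q = defaultdict(float)           # Q[(state, rule)] -> expected return
    alpha, gamma, eps = 0.1, 0.9, 0.1

    def choose_rule(state):
        if random.random() < eps:                        # explore
            return random.choice(RULES)
        return max(RULES, key=lambda r: q[(state, r)])   # exploit

    def update(state, rule, reward, next_state):
        best_next = max(q[(next_state, r)] for r in RULES)
        q[(state, rule)] += alpha * (reward + gamma * best_next - q[(state, rule)])

    # One illustrative interaction: the reward penalizes delay and packet drops.
    state = ("high_load", "delay_sensitive")
    rule = choose_rule(state)
    reward = -(0.7 * 12.0 + 0.3 * 0.02)  # e.g. weighted delay (ms) and drop rate
    update(state, rule, reward, ("medium_load", "delay_sensitive"))
    print(rule, q[(state, rule)])
    ```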

  • [PDF] M. Diaz, A. Fischer, M. A. Ferrer, and R. Plamondon, "Dynamic Signature Verification System Based on One Real Signature," IEEE Trans. on Cybernetics, vol. 48, iss. 1, p. 228–239, 2018.
    [Bibtex]
    @article{diaz18dynamic,
    Author = {Diaz, M. and Fischer, A. and Ferrer, M.A. and Plamondon, R.},
    Date-Added = {2017-01-15 10:40:54 +0000},
    Date-Modified = {2018-01-15 13:12:25 +0000},
    Journal = {IEEE Trans. on Cybernetics},
    Number = {1},
    Pages = {228--239},
    Title = {Dynamic Signature Verification System Based on One Real Signature},
    Volume = {48},
    Year = {2018}}
  • [PDF] J. Esseiva, M. Caon, E. Mugellini, O. A. Khaled, and K. Aminian, "Feet Fidgeting Detection Based on Accelerometers Using Decision Tree Learning and Gradient Boosting," in International Conference on Bioinformatics and Biomedical Engineering, 2018, p. 75–84.
    [Bibtex]
    @inproceedings{esseiva2018feet,
    title={Feet Fidgeting Detection Based on Accelerometers Using Decision Tree Learning and Gradient Boosting},
    author={Esseiva, Julien and Caon, Maurizio and Mugellini, Elena and Khaled, Omar Abou and Aminian, Kamiar},
    booktitle={International Conference on Bioinformatics and Biomedical Engineering},
    pages={75--84},
    year={2018},
    organization={Springer}}
  • [PDF] L. Linder, J. Hennebert, and J. Esseiva, "BBDATA, a Big Data platform for Smart Buildings," in FTAL conference on Industrial Applied Data Science, 2018, p. 38–39.
    [Bibtex]
    @InProceedings{bbdata2018ftal,
    author = {Lucy Linder and Jean Hennebert and Julien Esseiva},
    title = {BBDATA, a Big Data platform for Smart Buildings},
    booktitle = {FTAL conference on Industrial Applied Data Science},
    pages = {38--39},
    year = {2018},
    month = {oct},
    isbn = {978-2-8399-2549-5},
    }
  • [PDF] P. Maergner, N. R. Howe, K. Riesen, R. Ingold, and A. Fischer, "Offline Signature Verification via Structural Methods: Graph Edit Distance and Inkball Models," in Proc. 16th Int. Conf. on Frontiers in Handwriting Recognition, 2018.
    [Bibtex]
    @inproceedings{maergner18inkball,
    Author = {P. Maergner and N.R. Howe and K. Riesen and R. Ingold and A. Fischer},
    Booktitle = {Proc. 16th Int. Conf. on Frontiers in Handwriting Recognition},
    Date-Added = {2018-10-04 07:47:33 +0000},
    Date-Modified = {2018-10-04 07:49:18 +0000},
    Title = {Offline Signature Verification via Structural Methods: Graph Edit Distance and Inkball Models},
    Year = {2018}}
  • [PDF] P. Maergner, V. Pondenkandath, M. Alberti, M. Liwicki, K. Riesen, R. Ingold, and A. Fischer, "Offline Signature Verification by Combining Graph Edit Distance and Triplet Networks," in Proc. Int. Workshop on Structural, Syntactic, and Statistical Pattern Recognition, 2018.
    [Bibtex]
    @inproceedings{maergner18offline,
    Author = {P. Maergner and V. Pondenkandath and M. Alberti and M. Liwicki and K. Riesen and R. Ingold and A. Fischer},
    Booktitle = {Proc. Int. Workshop on Structural, Syntactic, and Statistical Pattern Recognition},
    Date-Added = {2018-10-04 07:30:48 +0000},
    Date-Modified = {2018-10-04 07:31:50 +0000},
    Title = {Offline Signature Verification by Combining Graph Edit Distance and Triplet Networks},
    Year = {2018}}
  • [PDF] [DOI] K. R. Martin, K. Mansouri, R. N. Weinreb, R. Wasilewicz, C. Gisler, J. Hennebert, D. Genoud, T. Shaarawy, C. Erb, N. Pfeiffer, G. E. Trope, F. A. Medeiros, Y. Barkana, J. H. K. Liu, R. Ritch, A. Mermoud, D. Jinapriya, C. Birt, I. I. Ahmed, C. Kranemann, P. Höh, B. Lachenmayr, Y. Astakhov, E. Chen, S. Duch, G. Marchini, S. Gandolfi, M. Rekas, A. Kuroyedov, A. Cernak, V. Polo, J. Belda, S. Grisanti, C. Baudouin, J. Nordmann, C. D. G. Moraes, Z. Segal, M. Lusky, H. Morori-Katz, N. Geffen, S. Kurtz, J. Liu, D. L. Budenz, O. J. Knight, J. C. Mwanza, A. Viera, F. Castanera, and J. Che-Hamzah, "Use of Machine Learning on Contact Lens Sensor–Derived Parameters for the Diagnosis of Primary Open-angle Glaucoma," American Journal of Ophthalmology, vol. 194, pp. 46-53, 2018.
    [Bibtex] [Abstract]
    @article{keith2018lens,
    title = "Use of Machine Learning on Contact Lens Sensor–Derived Parameters for the Diagnosis of Primary Open-angle Glaucoma",
    journal = "American Journal of Ophthalmology",
    volume = "194",
    pages = "46 - 53",
    year = "2018",
    issn = "0002-9394",
    doi = "https://doi.org/10.1016/j.ajo.2018.07.005",
    url = "http://www.sciencedirect.com/science/article/pii/S0002939418303866",
    author = "Keith R. Martin and Kaweh Mansouri and Robert N. Weinreb and Robert Wasilewicz and Christophe Gisler and Jean Hennebert and Dominique Genoud and Tarek Shaarawy and Carl Erb and Norbert Pfeiffer and Graham E. Trope and Felipe A. Medeiros and Yaniv Barkana and John H.K. Liu and Robert Ritch and André Mermoud and Delan Jinapriya and Catherine Birt and Iqbal I. Ahmed and Christoph Kranemann and Peter Höh and Bernhard Lachenmayr and Yuri Astakhov and Enping Chen and Susana Duch and Giorgio Marchini and Stefano Gandolfi and Marek Rekas and Alexander Kuroyedov and Andrej Cernak and Vicente Polo and José Belda and Swaantje Grisanti and Christophe Baudouin and Jean-Philippe Nordmann and Carlos G. De Moraes and Zvi Segal and Moshe Lusky and Haia Morori-Katz and Noa Geffen and Shimon Kurtz and Ji Liu and Donald L. Budenz and O'Rese J. Knight and Jean Claude Mwanza and Anthony Viera and Fernando Castanera and Jemaima Che-Hamzah",
    abstract = "Purpose
    To test the hypothesis that contact lens sensor (CLS)-based 24-hour profiles of ocular volume changes contain information complementary to intraocular pressure (IOP) to discriminate between primary open-angle glaucoma (POAG) and healthy (H) eyes.
    Design
    Development and evaluation of a diagnostic test with machine learning.
    Methods
    Subjects: From 435 subjects (193 healthy and 242 POAG), 136 POAG and 136 age-matched healthy subjects were selected. Subjects with contraindications for CLS wear were excluded. Procedure: This is a pooled analysis of data from 24 prospective clinical studies and a registry. All subjects underwent 24-hour CLS recording on 1 eye. Statistical and physiological CLS parameters were derived from the signal recorded. CLS parameters frequently associated with the presence of POAG were identified using a random forest modeling approach. Main Outcome Measures: Area under the receiver operating characteristic curve (ROC AUC) for feature sets including CLS parameters and Start IOP, as well as a feature set with CLS parameters and Start IOP combined.
    Results
    The CLS parameters feature set discriminated POAG from H eyes with mean ROC AUCs of 0.611, confidence interval (CI) 0.493–0.722. Larger values of a given CLS parameter were in general associated with a diagnosis of POAG. The Start IOP feature set discriminated between POAG and H eyes with a mean ROC AUC of 0.681, CI 0.603–0.765. The combined feature set was the best indicator of POAG with an ROC AUC of 0.759, CI 0.654–0.855. This ROC AUC was statistically higher than for CLS parameters or Start IOP feature sets alone (both P < .0001).
    Conclusions
    CLS recordings contain information complementary to IOP that enables discrimination between H and POAG. The feature set combining CLS parameters and Start IOP provides a better indication of the presence of POAG than each of the feature sets separately. As such, the CLS may be a new biomarker for POAG."
    }

    Purpose: To test the hypothesis that contact lens sensor (CLS)-based 24-hour profiles of ocular volume changes contain information complementary to intraocular pressure (IOP) to discriminate between primary open-angle glaucoma (POAG) and healthy (H) eyes. Design: Development and evaluation of a diagnostic test with machine learning. Methods: Subjects: From 435 subjects (193 healthy and 242 POAG), 136 POAG and 136 age-matched healthy subjects were selected. Subjects with contraindications for CLS wear were excluded. Procedure: This is a pooled analysis of data from 24 prospective clinical studies and a registry. All subjects underwent 24-hour CLS recording on 1 eye. Statistical and physiological CLS parameters were derived from the signal recorded. CLS parameters frequently associated with the presence of POAG were identified using a random forest modeling approach. Main Outcome Measures: Area under the receiver operating characteristic curve (ROC AUC) for feature sets including CLS parameters and Start IOP, as well as a feature set with CLS parameters and Start IOP combined. Results: The CLS parameters feature set discriminated POAG from H eyes with mean ROC AUCs of 0.611, confidence interval (CI) 0.493–0.722. Larger values of a given CLS parameter were in general associated with a diagnosis of POAG. The Start IOP feature set discriminated between POAG and H eyes with a mean ROC AUC of 0.681, CI 0.603–0.765. The combined feature set was the best indicator of POAG with an ROC AUC of 0.759, CI 0.654–0.855. This ROC AUC was statistically higher than for CLS parameters or Start IOP feature sets alone (both P < .0001). Conclusions: CLS recordings contain information complementary to IOP that enables discrimination between H and POAG. The feature set combining CLS parameters and Start IOP provides a better indication of the presence of POAG than each of the feature sets separately. As such, the CLS may be a new biomarker for POAG.
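
    A minimal sketch of the modeling approach on synthetic placeholder data: a random forest over CLS-derived features combined with Start IOP, scored by ROC AUC with scikit-learn. All feature values and dimensions below are invented.

    ```python
    # Sketch of the modeling approach described above: a random forest over
    # CLS-derived features combined with Start IOP, evaluated by ROC AUC.
    # Data and feature layout are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 272                                      # 136 POAG + 136 healthy, as above
    X_cls = rng.normal(size=(n, 20))             # CLS signal parameters (placeholder)
    start_iop = rng.normal(16, 3, size=(n, 1))   # baseline IOP in mm Hg (placeholder)
    y = np.repeat([0, 1], n // 2)                # 0 = healthy, 1 = POAG

    X = np.hstack([X_cls, start_iop])            # combined feature set
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"ROC AUC (synthetic data): {auc:.3f}")
    ```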

  • [PDF] [DOI] A. Nikodemski, J. F. Wagen, F. Buntschu, C. Gisler, and G. Bovet, "Reproducing Measured MANET Radio Performances Using the EMANE Framework," IEEE Communications Magazine, vol. 56, iss. 10, p. 151–155, 2018.
    [Bibtex] [Abstract]
    @article{Nikodemski2018,
    abstract = {Simulation or emulation of mobile ad hoc networks (MANET) is used to predict or analyze the performance of MANETs under various scenarios. One challenge is to emulate realistically the MANET's radio performance. Running the Extendable Mobile Ad Hoc Network Emulator (EMANE) framework, we show how to reproduce measured characteristics, namely throughput and round-trip time, of real tactical radios using wideband or narrowband TDMA-based waveforms. Additionally, a solution to simulate rate adaptation is proposed. An introduction to EMANE and the EMANE radio model plugins is also provided.},
    author = {Nikodemski, Alexandre and Wagen, Jean Frederic and Buntschu, Francois and Gisler, Christophe and Bovet, Gerome},
    doi = {10.1109/MCOM.2018.1800294},
    issn = {1558-1896},
    journal = {IEEE Communications Magazine},
    number = {10},
    pages = {151--155},
    title = {{Reproducing Measured MANET Radio Performances Using the EMANE Framework}},
    url = {https://ieeexplore.ieee.org/abstract/document/8493135},
    volume = {56},
    year = {2018}
    }

    Simulation or emulation of mobile ad hoc networks (MANET) is used to predict or analyze the performance of MANETs under various scenarios. One challenge is to emulate realistically the MANET's radio performance. Running the Extendable Mobile Ad Hoc Network Emulator (EMANE) framework, we show how to reproduce measured characteristics, namely throughput and round-trip time, of real tactical radios using wideband or narrowband TDMA-based waveforms. Additionally, a solution to simulate rate adaptation is proposed. An introduction to EMANE and the EMANE radio model plugins is also provided.
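
    As a rough illustration of the calibration step, the sketch below derives emulated link parameters from a measured throughput and round-trip time. This is generic Python for the tuning idea only, not the EMANE API or the authors' radio model plugins.

    ```python
    # Illustrative calibration sketch: derive emulated link parameters
    # (data rate, one-way delay) from measured throughput and round-trip
    # time of a real radio. Generic code, not the EMANE API.

    def link_parameters(measured_throughput_bps, measured_rtt_s, payload_bits=8 * 1500):
        """Return (datarate_bps, one_way_delay_s) to feed an emulated link."""
        # Serialization time of one packet at the target data rate.
        serialization = payload_bits / measured_throughput_bps
        # Whatever RTT is not explained by serialization in both directions
        # is attributed to fixed propagation/processing delay.
        residual = max(measured_rtt_s - 2 * serialization, 0.0)
        return measured_throughput_bps, residual / 2

    rate, delay = link_parameters(measured_throughput_bps=250_000, measured_rtt_s=0.180)
    print(f"emulated data rate: {rate / 1000:.0f} kb/s, one-way delay: {delay * 1000:.1f} ms")
    ```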

  • V. Raemy, V. Russo, J. Hennebert, and B. Wicht, "Construction of phonetic representation of a string of characters," , iss. US9910836B2, 2018.
    [Bibtex] [Abstract]
    @patent{Raemy2018US9910836B2,
    author = {Vincent Raemy and Vincenzo Russo and Jean Hennebert and Baptiste Wicht},
    title = {Construction of phonetic representation of a string of characters},
    holder = {VeriSign Inc},
    year = {2018},
    month = {03},
    day = {06},
    number = {US9910836B2},
    location = {US},
    url = {https://worldwide.espacenet.com/publicationDetails/biblio?CC=US&NR=9910836B2&KC=B2&FT=D},
    filing_num = {14976968},
    yearfiled = {2015},
    monthfiled = {12},
    dayfiled = {21},
    abstract = {Provided are methods, devices, and computer-readable media for accessing a string of characters; parsing the string of characters into string of graphemes; determining one or more phonetic representations for one or more graphemes in the string of graphemes based on a first data structure; determining at least one grapheme representation for one or more of the one or more phonetic representations based on a second data structure; and constructing the phonetic representation of the string of characters based on the grapheme representation that was determined.}
    }

    Provided are methods, devices, and computer-readable media for accessing a string of characters; parsing the string of characters into string of graphemes; determining one or more phonetic representations for one or more graphemes in the string of graphemes based on a first data structure; determining at least one grapheme representation for one or more of the one or more phonetic representations based on a second data structure; and constructing the phonetic representation of the string of characters based on the grapheme representation that was determined.
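
    A toy Python sketch of the claimed pipeline, with invented lookup tables standing in for the patented data structures: parse a string into graphemes, map them to phonetic representations, and map those back to graphemes.

    ```python
    # Toy sketch of the pipeline: parse a string into graphemes, map graphemes
    # to phonetic representations via a first lookup structure, then map those
    # back to graphemes via a second. The tables are invented examples.

    GRAPHEME_TO_PHONEME = {"ph": "f", "sh": "ʃ", "a": "æ", "o": "ɒ", "n": "n", "e": "ɛ"}
    PHONEME_TO_GRAPHEME = {"f": "f", "ʃ": "sh", "æ": "a", "ɒ": "o", "n": "n", "ɛ": "e"}

    def parse_graphemes(text):
        """Greedy longest-match split of a string into known graphemes."""
        graphemes, i = [], 0
        while i < len(text):
            for size in (2, 1):  # prefer two-character graphemes like "ph"
                chunk = text[i:i + size]
                if chunk in GRAPHEME_TO_PHONEME:
                    graphemes.append(chunk)
                    i += size
                    break
            else:
                i += 1  # skip characters outside the toy inventory
        return graphemes

    def phonetic_representation(text):
        phonemes = [GRAPHEME_TO_PHONEME[g] for g in parse_graphemes(text)]
        return "".join(PHONEME_TO_GRAPHEME[p] for p in phonemes)

    print(phonetic_representation("phone"))  # -> "fone"
    ```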

  • V. Raemy, V. Russo, J. Hennebert, and B. Wicht, "Method for writing a foreign language in a pseudo language phonetically resembling native language of the speaker," , iss. US10102203B2, 2018.
    [Bibtex] [Abstract]
    @patent{Raemy2018US10102203B2,
    author = {Vincent Raemy and Vincenzo Russo and Jean Hennebert and Baptiste Wicht},
    title = {Method for writing a foreign language in a pseudo language phonetically resembling native language of the speaker},
    holder = {VeriSign Inc},
    year = {2018},
    month = {10},
    day = {16},
    number = {US10102203B2},
    location = {US},
    url = {https://worldwide.espacenet.com/publicationDetails/biblio?CC=US&NR=10102203B2&KC=B2&FT=D},
    filing_num = {14977022},
    yearfiled = {2015},
    monthfiled = {12},
    dayfiled = {21},
    abstract = {Provided is a method, device, and computer-readable medium for converting a string of characters in a first language into a phonetic representation of a second language using a first data structure that maps graphemes in the first language to one or more universal phonetic representations based on an international phonetic alphabet, wherein the first data structure comprises a plurality of first nodes with each first node of the plurality of first nodes having a respective weight assigned that corresponds to a likely pronunciation of a grapheme, and a second data structure that maps the one or more universal phonetic representations to one or more graphemes in the second language, wherein the second data structure comprises a plurality of second nodes with each second node of the plurality of second nodes having a respective weight assigned that corresponds to a likely representation of a grapheme in the second language.}
    }

    Provided is a method, device, and computer-readable medium for converting a string of characters in a first language into a phonetic representation of a second language using a first data structure that maps graphemes in the first language to one or more universal phonetic representations based on an international phonetic alphabet, wherein the first data structure comprises a plurality of first nodes with each first node of the plurality of first nodes having a respective weight assigned that corresponds to a likely pronunciation of a grapheme, and a second data structure that maps the one or more universal phonetic representations to one or more graphemes in the second language, wherein the second data structure comprises a plurality of second nodes with each second node of the plurality of second nodes having a respective weight assigned that corresponds to a likely representation of a grapheme in the second language.
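
    A toy weighted variant of the same idea: each mapping carries a likelihood weight and the highest-weight candidate is kept at every step. The tables and weights below are invented for illustration, not the patented data structures.

    ```python
    # Toy weighted mapping: first-language graphemes -> universal phonetic
    # representations -> second-language graphemes, keeping the candidate
    # with the highest weight at each step. All values are invented.

    # first-language grapheme -> [(universal phonetic representation, weight)]
    FIRST_TO_IPA = {"j": [("ʒ", 0.7), ("dʒ", 0.3)], "ou": [("u", 0.9)]}
    # universal phonetic representation -> [(second-language grapheme, weight)]
    IPA_TO_SECOND = {"ʒ": [("zh", 0.8), ("j", 0.2)], "dʒ": [("j", 0.9)], "u": [("oo", 0.9)]}

    def best(options):
        return max(options, key=lambda pair: pair[1])

    def convert(graphemes):
        out = []
        for g in graphemes:
            ipa, _ = best(FIRST_TO_IPA[g])       # most likely pronunciation
            target, _ = best(IPA_TO_SECOND[ipa])  # most likely target spelling
            out.append(target)
        return "".join(out)

    print(convert(["j", "ou"]))  # French-like "jou" -> "zhoo"
    ```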

  • V. Raemy, V. Russo, J. Hennebert, and B. Wicht, "Construction of a phonetic representation of a generated string of characters," , iss. US10102189B2, 2018.
    [Bibtex] [Abstract]
    @patent{Raemy2018US10102189B2,
    author = {Vincent Raemy and Vincenzo Russo and Jean Hennebert and Baptiste Wicht},
    title = {Construction of a phonetic representation of a generated string of characters},
    holder = {VeriSign Inc},
    year = {2018},
    month = {10},
    day = {16},
    number = {US10102189B2},
    location = {US},
    url = {https://worldwide.espacenet.com/publicationDetails/biblio?CC=US&NR=10102189B2&KC=B2&FT=D},
    filing_num = {14977090},
    yearfiled = {2015},
    monthfiled = {12},
    dayfiled = {21},
    abstract = {Provided are methods, devices, and computer-readable media for generating a string of characters based on a set of rules; parsing the string of characters into string of graphemes; determining one or more phonetic representations for one or more graphemes in the string of graphemes based on a first data structure; determining at least one grapheme representation for one or more of the one or more phonetic representations based on a second data structure; and constructing the phonetic representation of the string of characters based on the grapheme representation that was determined.}
    }

    Provided are methods, devices, and computer-readable media for generating a string of characters based on a set of rules; parsing the string of characters into string of graphemes; determining one or more phonetic representations for one or more graphemes in the string of graphemes based on a first data structure; determining at least one grapheme representation for one or more of the one or more phonetic representations based on a second data structure; and constructing the phonetic representation of the string of characters based on the grapheme representation that was determined.

  • V. Raemy, V. Russo, J. Hennebert, and B. Wicht, "Systems and methods for automatic phonetization of domain names," , iss. US9947311B2, 2018.
    [Bibtex] [Abstract]
    @patent{Raemy2018US9947311B2,
    author = {Vincent Raemy and Vincenzo Russo and Jean Hennebert and Baptiste Wicht},
    title = {Systems and methods for automatic phonetization of domain names},
    holder = {VeriSign Inc},
    year = {2018},
    month = {04},
    day = {17},
    number = {US9947311B2},
    location = {US},
    url = {https://worldwide.espacenet.com/publicationDetails/biblio?CC=US&NR=9947311B2&KC=B2&FT=D},
    filing_num = {14977133},
    yearfiled = {2015},
    monthfiled = {12},
    dayfiled = {21},
    abstract = {A method can include receiving, from a user, a string of characters. The method can also include determining components of the string of characters. The components of the string of characters may include one or more graphemes that are related in the string of characters. The method can include determining universal phonetic representations for the components of the string of characters. The method can also include determining pronunciations for the universal phonetic representations. Additionally, the method can include constructing a pronunciation of the string of characters based at least partially on the pronunciations of the universal phonetic representations. Further, the method can include sending, to the user, a sound file representing the pronunciation of the string of characters.}
    }

    A method can include receiving, from a user, a string of characters. The method can also include determining components of the string of characters. The components of the string of characters may include one or more graphemes that are related in the string of characters. The method can include determining universal phonetic representations for the components of the string of characters. The method can also include determining pronunciations for the universal phonetic representations. Additionally, the method can include constructing a pronunciation of the string of characters based at least partially on the pronunciations of the universal phonetic representations. Further, the method can include sending, to the user, a sound file representing the pronunciation of the string of characters.

  • [PDF] P. Riba, A. Fischer, J. Llados, and A. Fornés, "Learning Graph Distances with Message Passing Neural Networks," in Proc. 24th Int. Conf. on Pattern Recognition, 2018.
    [Bibtex]
    @inproceedings{riba18learning,
    Author = {P. Riba and A. Fischer and J. Llados and A. Forn{\'e}s},
    Booktitle = {Proc. 24th Int. Conf. on Pattern Recognition},
    Date-Added = {2018-10-04 07:28:19 +0000},
    Date-Modified = {2018-10-04 07:29:37 +0000},
    Title = {Learning Graph Distances with Message Passing Neural Networks},
    Year = {2018}}
  • [PDF] K. Riesen, A. Fischer, and H. Bunke, "On the Impact of Using Utilities Rather than Costs for Graph Matching," Neural Processing Letters, vol. 48, iss. 2, pp. 691-707, 2018.
    [Bibtex]
    @article{riesen17ontheimpact,
    author = {Riesen, Kaspar and Fischer, Andreas and Bunke, Horst},
    journal = {Neural Processing Letters},
    pages = {691-707},
    title = {On the Impact of Using Utilities Rather than Costs for Graph Matching},
    volume = {48},
    number = {2},
    year = {2018},
    month = oct}
  • [PDF] S. Ruffieux, C. Gisler, J. Wagen, F. Buntschu, and G. Bovet, "TAKE - Tactical Ad-Hoc Network Emulation," in Proceedings of the International Conference on Military Communications and Information Systems (ICMCIS, former MCC) 2018, Warsaw, Poland, 2018, p. 8.
    [Bibtex]
    @inproceedings{ruffieux2018,
    address = {Warsaw, Poland},
    author = {Ruffieux, Simon and Gisler, Christophe and Wagen, Jean-Fr{\'{e}}d{\'{e}}ric and Buntschu, Fran{\c{c}}ois and Bovet, G{\'{e}}r{\^{o}}me},
    booktitle = {Proceedings of the International Conference on Military Communications and Information Systems (ICMCIS, former MCC) 2018},
    pages = {8},
    publisher = {IEEE Computer Society},
    title = {{TAKE - Tactical Ad-Hoc Network Emulation}},
    year = {2018}
    }
  • [PDF] L. Rychener and J. Hennebert, "Machine Learning for Anomaly Detection in Time-Series Produced by Industrial Processes," in FTAL conference on Industrial Applied Data Science, 2018, p. 15–16.
    [Bibtex]
    @InProceedings{ftalconference2018lorenz,
    author = {Lorenz Rychener and Jean Hennebert},
    title = {Machine Learning for Anomaly Detection in Time-Series Produced by Industrial Processes},
    booktitle = {FTAL conference on Industrial Applied Data Science},
    pages = {15--16},
    year = {2018},
    month = {oct},
    isbn = {978-2-8399-2549-5},
    }
  • [PDF] R. Schindler, M. Bouillon, R. Plamondon, and A. Fischer, "Extending the Sigma-Lognormal Model of the Kinematic Theory to Three Dimensions," in Proc. 1st Int. Conf. on Pattern Recognition and Artificial Intelligence, 2018, p. 748–752.
    [Bibtex]
    @inproceedings{schindler18extending,
    Author = {R. Schindler and M. Bouillon and R. Plamondon and A. Fischer},
    Booktitle = {Proc. 1st Int. Conf. on Pattern Recognition and Artificial Intelligence},
    Date-Added = {2018-10-04 07:51:05 +0000},
    Date-Modified = {2018-10-04 07:56:40 +0000},
    Pages = {748--752},
    Title = {Extending the Sigma-Lognormal Model of the Kinematic Theory to Three Dimensions},
    Year = {2018}}
  • [PDF] M. Stauffer, A. Fischer, and K. Riesen, "Searching and Browsing in Historical Documents – State of the Art and Novel Approaches for Template-Based Keyword Spotting," in Business Information Systems and Technology 4.0, R. Dornberger, Ed., Springer, 2018, vol. 141, p. 197–211.
    [Bibtex]
    @incollection{stauffer18searching,
    Author = {M. Stauffer and A. Fischer and K. Riesen},
    Booktitle = {Business Information Systems and Technology 4.0},
    Date-Added = {2018-10-04 07:24:22 +0000},
    Date-Modified = {2018-10-04 07:28:11 +0000},
    Editor = {R. Dornberger},
    Pages = {197--211},
    Publisher = {Springer},
    Series = {Studies in Systems, Decision, and Control},
    Title = {Searching and Browsing in Historical Documents -- State of the Art and Novel Approaches for Template-Based Keyword Spotting},
    Volume = {141},
    Year = {2018}}
  • [PDF] M. Stauffer, A. Fischer, and K. Riesen, "Keyword Spotting in Historical Handwritten Documents based on Graph Matching," Pattern Recognition, vol. 81, p. 240–253, 2018.
    [Bibtex]
    @article{stauffer18keyword,
    Author = {M. Stauffer and A. Fischer and K. Riesen},
    Date-Added = {2018-10-04 07:22:53 +0000},
    Date-Modified = {2018-10-04 07:23:44 +0000},
    Journal = {Pattern Recognition},
    Pages = {240--253},
    Title = {Keyword Spotting in Historical Handwritten Documents based on Graph Matching},
    Volume = {81},
    Year = {2018}}
  • [PDF] M. Stauffer, A. Fischer, and K. Riesen, "Graph-Based Keyword Spotting in Historical Documents Using Context-Aware Hausdorff Edit Distance," in Proc. 13th Int. Workshop on Document Analysis Systems, 2018, p. 49–54.
    [Bibtex]
    @inproceedings{stauffer18graphbased,
    Author = {M. Stauffer and A. Fischer and K. Riesen},
    Booktitle = {Proc. 13th Int. Workshop on Document Analysis Systems},
    Date-Added = {2018-10-04 07:50:05 +0000},
    Date-Modified = {2018-10-04 07:50:57 +0000},
    Pages = {49--54},
    Title = {Graph-Based Keyword Spotting in Historical Documents Using Context-Aware Hausdorff Edit Distance},
    Year = {2018}}
  • [PDF] [DOI] B. Wicht, A. Fischer, and J. Hennebert, "DLL: A Fast Deep Neural Network Library," in Artificial Neural Networks in Pattern Recognition, L. Pancioni, F. Schwenker, and E. Trentin, Eds., Springer International Publishing, 2018, p. 54–65.
    [Bibtex]
    @inbook{wicht18dll,
    Author = {B. Wicht and A. Fischer and J. Hennebert},
    Booktitle = {Artificial Neural Networks in Pattern Recognition},
    Date-Added = {2018-10-04 07:29:50 +0000},
    Date-Modified = {2018-10-22 09:07:00 +0000},
    Editor = {Pancioni, Luca and Schwenker, Friedhelm and Trentin, Edmondo},
    Isbn = "978-3-319-99978-4",
    Doi = "10.1007/978-3-319-99978-4",
    Pages = {54--65},
    Publisher = {Springer International Publishing},
    Series = {Lecture Notes in Artificial Intelligence},
    Title = {{DLL}: A Fast Deep Neural Network Library},
    Year = {2018}}
  • [PDF] [DOI] B. Wicht, A. Fischer, and J. Hennebert, "Seamless GPU Evaluation of Smart Expression Templates," in 2018 International Conference on High Performance Computing Simulation (HPCS), 2018, pp. 196-203.
    [Bibtex] [Abstract]
    @inproceedings{wicht2018gpu,
    author={B. {Wicht} and A. {Fischer} and J. {Hennebert}},
    booktitle={2018 International Conference on High Performance Computing Simulation (HPCS)},
    title={Seamless GPU Evaluation of Smart Expression Templates},
    year={2018},
    pages={196-203},
    abstract={Expression Templates is a technique allowing to write linear algebra code in C++ the same way it would be written on paper. It is also used extensively as a performance optimization technique, especially as the Smart Expression Templates form which allows for even higher performance. It has proved to be very efficient for computation on a Central Processing Unit (CPU). However, due to its design, it is not easily implemented on a Graphics Processing Unit (GPU). In this paper, we devise a set of techniques to allow the seamless evaluation of Smart Expression Templates on the GPU. The execution is transparent for the user of the library which still uses the matrices and vector as if it was on the CPU and profits from the performance and higher multi-processing capabilities of the GPU. We also show that the GPU version is significantly faster than the CPU version, without any change to the code of the user.},
    keywords={C++ language;graphics processing units;matrix algebra;optimisation;parallel processing;software performance evaluation;CPU;seamless evaluation;GPU version;linear algebra code;performance optimization technique;central processing unit;graphics processing unit;GPU evaluation;multiprocessing capabilities;smart expression templates form;Graphics processing units;Kernel;Libraries;C++ languages;Runtime;Central Processing Unit;High performance computing},
    doi={10.1109/HPCS.2018.00045},
    month={July}
    }

    Expression Templates is a technique allowing to write linear algebra code in C++ the same way it would be written on paper. It is also used extensively as a performance optimization technique, especially as the Smart Expression Templates form which allows for even higher performance. It has proved to be very efficient for computation on a Central Processing Unit (CPU). However, due to its design, it is not easily implemented on a Graphics Processing Unit (GPU). In this paper, we devise a set of techniques to allow the seamless evaluation of Smart Expression Templates on the GPU. The execution is transparent for the user of the library which still uses the matrices and vector as if it was on the CPU and profits from the performance and higher multi-processing capabilities of the GPU. We also show that the GPU version is significantly faster than the CPU version, without any change to the code of the user.
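
    The paper concerns C++ Smart Expression Templates; as a language-neutral sketch of the underlying idea (build the expression lazily via operator overloading, then evaluate it in a single pass on assignment), here is a tiny Python analogue. It is a conceptual illustration, not the authors' library.

    ```python
    # Tiny Python analogue of expression templates: a + b + c builds an
    # expression tree instead of temporaries; assignment evaluates the whole
    # tree element-wise in one pass.

    class Expr:
        def __add__(self, other):
            return Add(self, other)

    class Vec(Expr):
        def __init__(self, data):
            self.data = list(data)
        def __getitem__(self, i):
            return self.data[i]
        def __len__(self):
            return len(self.data)
        def assign(self, expr):
            # Single evaluation pass: no temporary vectors for a + b + c.
            self.data = [expr[i] for i in range(len(self))]

    class Add(Expr):
        def __init__(self, lhs, rhs):
            self.lhs, self.rhs = lhs, rhs
        def __getitem__(self, i):
            return self.lhs[i] + self.rhs[i]
        def __len__(self):
            return len(self.lhs)

    a, b, c, out = Vec([1, 2]), Vec([3, 4]), Vec([5, 6]), Vec([0, 0])
    out.assign(a + b + c)   # builds Add(Add(a, b), c), evaluated element-wise once
    print(out.data)         # [9, 12]
    ```

    In the C++ setting the same structure is resolved at compile time, which is what removes the abstraction overhead; the GPU contribution of the paper is making this evaluation happen transparently on the device.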

  • [PDF] B. Wicht, "Deep Learning Feature Extraction for Image Processing," PhD Thesis, 2018.
    [Bibtex]
    @phdthesis{wicht2018deep,
    author={Wicht, B},
    title={Deep Learning Feature Extraction for Image Processing},
    Date-Added = {2018-01-25 10:40:54 +0000},
    Date-Modified = {2018-01-25 13:12:25 +0000},
    year={2018},
    school={University of Fribourg}
    }
  • [PDF] [DOI] O. Zayene, S. Masmoudi Touj, J. Hennebert, R. Ingold, and N. Essoukri Ben Amara, "Multi-dimensional long short-term memory networks for artificial Arabic text recognition in news video," IET Computer Vision, 2018.
    [Bibtex] [Abstract]
    @ARTICLE{ietzayene2018,
    author = {Zayene, Oussama and Masmoudi Touj, Sameh and Hennebert, Jean and Ingold, Rolf and Essoukri Ben Amara, Najoua},
    keywords = {text pattern variability;public AcTiV-R dataset;artificial Arabic video text recognition;evaluation protocols;Arabic character models;multimedia document annotation;segmentation-free method;line levels;nonuniform intraword distances;news video;public dataset ALIF;recurrent neural networks;multidimensional long short-term memory networks;interword distances;diacritic marks;multimedia document indexing;embedded texts;connectionist temporal classification layer;},
    ISSN = {1751-9632},
    language = {English},
    abstract = {This study presents a novel approach for Arabic video text recognition based on recurrent neural networks. In fact, embedded texts in videos represent a rich source of information for indexing and automatically annotating multimedia documents. However, video text recognition is a non-trivial task due to many challenges like the variability of text patterns and the complexity of backgrounds. In the case of Arabic, the presence of diacritic marks, the cursive nature of the script and the non-uniform intra/inter word distances, may introduce many additional challenges. The proposed system presents a segmentation-free method that relies specifically on a multi-dimensional long short-term memory coupled with a connectionist temporal classification layer. It is shown that using an efficient pre-processing step and a compact representation of Arabic character models brings robust performance and yields a lower error rate than other recently published methods. The authors’ system is trained and evaluated using the public AcTiV-R dataset under different evaluation protocols. The obtained results are very interesting. They also outperform current state-of-the-art approaches on the public dataset ALIF in terms of recognition rates at both character and line levels.},
    title = {Multi-dimensional long short-term memory networks for artificial Arabic text recognition in news video},
    journal = {IET Computer Vision},
    year = {2018},
    month = {March},
    publisher ={Institution of Engineering and Technology},
    copyright = {© The Institution of Engineering and Technology},
    url = {http://digital-library.theiet.org/content/journals/10.1049/iet-cvi.2017.0468},
    DOI = {10.1049/iet-cvi.2017.0468}
    }

    This study presents a novel approach for Arabic video text recognition based on recurrent neural networks. In fact, embedded texts in videos represent a rich source of information for indexing and automatically annotating multimedia documents. However, video text recognition is a non-trivial task due to many challenges like the variability of text patterns and the complexity of backgrounds. In the case of Arabic, the presence of diacritic marks, the cursive nature of the script and the non-uniform intra/inter word distances, may introduce many additional challenges. The proposed system presents a segmentation-free method that relies specifically on a multi-dimensional long short-term memory coupled with a connectionist temporal classification layer. It is shown that using an efficient pre-processing step and a compact representation of Arabic character models brings robust performance and yields a lower error rate than other recently published methods. The authors’ system is trained and evaluated using the public AcTiV-R dataset under different evaluation protocols. The obtained results are very interesting. They also outperform current state-of-the-art approaches on the public dataset ALIF in terms of recognition rates at both character and line levels.
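
    A hedged PyTorch sketch of a segmentation-free recognizer in the same spirit: a recurrent layer over frame features trained with a CTC loss. A 1D bidirectional LSTM stands in for the paper's multi-dimensional LSTM, and all shapes and the alphabet size are placeholders.

    ```python
    # Sketch of a segmentation-free text recognizer: recurrent features + CTC.
    # A 1D BLSTM stands in for the multi-dimensional LSTM of the paper;
    # shapes and alphabet size are placeholders.
    import torch
    import torch.nn as nn

    T, N, F, H, C = 50, 4, 32, 128, 60   # time steps, batch, features, hidden, classes

    rnn = nn.LSTM(F, H, bidirectional=True)
    proj = nn.Linear(2 * H, C)           # per-frame class scores
    ctc = nn.CTCLoss(blank=0)            # blank label is index 0

    x = torch.randn(T, N, F)             # text-line feature sequence
    logits = proj(rnn(x)[0])             # (T, N, C)
    log_probs = logits.log_softmax(dim=2)

    targets = torch.randint(1, C, (N, 12))        # label sequences (no blanks)
    input_lengths = torch.full((N,), T, dtype=torch.long)
    target_lengths = torch.full((N,), 12, dtype=torch.long)

    loss = ctc(log_probs, targets, input_lengths, target_lengths)
    loss.backward()                      # trainable end to end
    print(float(loss))
    ```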

  • [PDF] [DOI] O. Zayene, S. Masmoudi Touj, J. Hennebert, R. Ingold, and N. Essoukri Ben Amara, "Open Datasets and Tools for Arabic Text Detection and Recognition in News Video Frames," Journal of Imaging, vol. 4, iss. 2, 2018.
    [Bibtex] [Abstract]
    @Article{jimagingzayene2018,
    AUTHOR = {Zayene, Oussama and Masmoudi Touj, Sameh and Hennebert, Jean and Ingold, Rolf and Essoukri Ben Amara, Najoua},
    TITLE = {Open Datasets and Tools for Arabic Text Detection and Recognition in News Video Frames},
    JOURNAL = {Journal of Imaging},
    VOLUME = {4},
    YEAR = {2018},
    NUMBER = {2},
    URL = {http://www.mdpi.com/2313-433X/4/2/32},
    ISSN = {2313-433X},
    ABSTRACT = {Recognizing texts in video is more complex than in other environments such as scanned documents. Video texts appear in various colors, unknown fonts and sizes, often affected by compression artifacts and low quality. In contrast to Latin texts, there are no publicly available datasets which cover all aspects of the Arabic Video OCR domain. This paper describes a new well-defined and annotated Arabic-Text-in-Video dataset called AcTiV 2.0. The dataset is dedicated especially to building and evaluating Arabic video text detection and recognition systems. AcTiV 2.0 contains 189 video clips serving as a raw material for creating 4063 key frames for the detection task and 10,415 cropped text images for the recognition task. AcTiV 2.0 is also distributed with its annotation and evaluation tools that are made open-source for standardization and validation purposes. This paper also reports on the evaluation of several systems tested under the proposed detection and recognition protocols.},
    DOI = {10.3390/jimaging4020032}
    }

    Recognizing texts in video is more complex than in other environments such as scanned documents. Video texts appear in various colors, unknown fonts and sizes, often affected by compression artifacts and low quality. In contrast to Latin texts, there are no publicly available datasets which cover all aspects of the Arabic Video OCR domain. This paper describes a new well-defined and annotated Arabic-Text-in-Video dataset called AcTiV 2.0. The dataset is dedicated especially to building and evaluating Arabic video text detection and recognition systems. AcTiV 2.0 contains 189 video clips serving as a raw material for creating 4063 key frames for the detection task and 10,415 cropped text images for the recognition task. AcTiV 2.0 is also distributed with its annotation and evaluation tools that are made open-source for standardization and validation purposes. This paper also reports on the evaluation of several systems tested under the proposed detection and recognition protocols.

  • M. Ameri, M. Stauffer, K. Riesen, T. Bui, and A. Fischer, "Keyword Spotting in Historical Documents Based on Handwriting Graphs and Hausdorff Edit Distance," in Proc. 18th Conf. of the International Graphonomics Society, 2017.
    [Bibtex]
    @inproceedings{ameri17keyword,
    Author = {M. Ameri and M. Stauffer and K. Riesen and T. Bui and A. Fischer},
    Booktitle = {Proc. 18th Conf. of the International Graphonomics Society},
    Date-Added = {2018-01-15 15:25:58 +0000},
    Date-Modified = {2018-01-15 15:27:35 +0000},
    Title = {Keyword Spotting in Historical Documents Based on Handwriting Graphs and Hausdorff Edit Distance},
    Year = {2017}}
  • [PDF] K. Chen, M. Seuret, J. Hennebert, and R. Ingold, "Convolutional Neural Networks for Page Segmentation of Historical Document Images," in Proc. 14th Int. Conf. on Document Analysis and Recognition ICDAR, 2017, p. 965–970.
    [Bibtex]
    @inproceedings{chen2017icdar,
    Author = {Kai Chen and Mathias Seuret and Jean Hennebert and Rolf Ingold},
    Booktitle = {Proc. 14th Int. Conf. on Document Analysis and Recognition ICDAR},
    Title = {Convolutional Neural Networks for Page Segmentation of Historical Document Images},
    Pages = {965--970},
    Year = {2017}}
  • A. Fischer, K. Riesen, and H. Bunke, "Improved quadratic time approximation of graph edit distance by combining Hausdorff matching and greedy assignment," Pattern Recognition Letters, vol. 87, p. 55–62, 2017.
    [Bibtex]
    @article{fischer17improved,
    Author = {Fischer, A. and Riesen, K. and Bunke, H.},
    Date-Added = {2017-01-15 10:44:16 +0000},
    Date-Modified = {2018-01-15 13:13:59 +0000},
    Journal = {Pattern Recognition Letters},
    Pages = {55--62},
    Title = {Improved quadratic time approximation of graph edit distance by combining {H}ausdorff matching and greedy assignment},
    Volume = {87},
    Year = {2017}}
  • A. Fischer and R. Plamondon, "Signature Verification Based on the Kinematic Theory of Rapid Human Movements," IEEE Trans. on Human-Machine Systems, vol. 47, iss. 2, p. 169–180, 2017.
    [Bibtex]
    @article{fischer17signature,
    Author = {Fischer, A. and Plamondon, R.},
    Date-Added = {2017-01-15 10:37:55 +0000},
    Date-Modified = {2018-01-15 13:15:25 +0000},
    Journal = {IEEE Trans. on Human-Machine Systems},
    Number = {2},
    Pages = {169--180},
    Title = {Signature Verification Based on the Kinematic Theory of Rapid Human Movements},
    Volume = {47},
    Year = {2017}}
  • A. Garz, M. Seuret, A. Fischer, and R. Ingold, "A User-Centered Segmentation Method for Complex Historical Manuscripts Based on Document Graphs," IEEE Trans. on Human-Machine Systems, vol. 47, iss. 2, p. 181–193, 2017.
    [Bibtex]
    @article{garz17auser,
    Author = {Garz, A. and Seuret, M. and Fischer, A. and Ingold, R.},
    Date-Added = {2017-01-15 10:33:14 +0000},
    Date-Modified = {2018-01-15 13:16:19 +0000},
    Journal = {IEEE Trans. on Human-Machine Systems},
    Number = {2},
    Pages = {181--193},
    Title = {A User-Centered Segmentation Method for Complex Historical Manuscripts Based on Document Graphs},
    Volume = {47},
    Year = {2017}}
  • A. Garz, F. Schuetz, A. Villa, R. Plamondon, and A. Fischer, "User Adaptation for Multi-Classifier Signature Verification Based on the Kinematic Theory," in Proc. 18th Conf. of the International Graphonomics Society, 2017.
    [Bibtex]
    @inproceedings{garz17user,
    Author = {A. Garz and F. Schuetz and A. Villa and R. Plamondon and A. Fischer},
    Booktitle = {Proc. 18th Conf. of the International Graphonomics Society},
    Date-Added = {2018-01-15 13:54:52 +0000},
    Date-Modified = {2018-01-15 13:55:36 +0000},
    Title = {User Adaptation for Multi-Classifier Signature Verification Based on the Kinematic Theory},
    Year = {2017}}
  • [PDF] C. Gisler, "Generic Data-Driven Approaches to Time Series Classification," PhD Thesis, 2017.
    [Bibtex]
    @phdthesis{Gisler2017thesis,
    author = {Gisler, C},
    pages = {262},
    school = {University of Fribourg, Switzerland},
    title = {{Generic Data-Driven Approaches to Time Series Classification}},
    type = {PhD Thesis},
    year = {2017}
    }
  • [PDF] [DOI] L. Linder, D. Vionnet, J. Bacher, and J. Hennebert, "Big Building Data - a Big Data Platform for Smart Buildings," Energy Procedia, vol. 122, pp. 589-594, 2017.
    [Bibtex] [Abstract]
    @article{2017lindercisbat,
    title = "Big Building Data - a Big Data Platform for Smart Buildings",
    journal = "Energy Procedia",
    volume = "122",
    pages = "589 - 594",
    year = "2017",
    note = "CISBAT 2017 International ConferenceFuture Buildings & Districts – Energy Efficiency from Nano to Urban Scale",
    issn = "1876-6102",
    doi = "10.1016/j.egypro.2017.07.354",
    url = "http://www.sciencedirect.com/science/article/pii/S1876610217329582",
    author = "Lucy Linder and Damien Vionnet and Jean-Philippe Bacher and Jean Hennebert",
    keywords = "Big Data, Building Management Systems, Smart Buildings, Web of Buildings",
    abstract = "Abstract Future buildings will more and more rely on advanced Building Management Systems (BMS) connected to a variety of sensors, actuators and dedicated networks. Their objectives are to observe the state of rooms and apply automated rules to preserve or increase comfort while economizing energy. In this work, we advocate for the inclusion of a dedicated system for sensors data storage and processing, based on Big Data technologies. This choice enables new potentials in terms of data analytics and applications development, the most obvious one being the ability to scale up seamlessly from one smart building to several, in the direction of smart areas and smart cities. We report in this paper on our system architecture and on several challenges we met in its elaboration, attempting to meet requirements of scalability, data processing, flexibility, interoperability and privacy. We also describe current and future end-user services that our platform will support, including historical data retrieval, visualisation, processing and alarms. The platform, called BBData - Big Building Data, is currently in production at the Smart Living Lab of Fribourg and is offered to several research teams to ease their work, to foster the sharing of historical data and to avoid that each project develops its own data gathering and processing pipeline."
    }

    Future buildings will more and more rely on advanced Building Management Systems (BMS) connected to a variety of sensors, actuators and dedicated networks. Their objectives are to observe the state of rooms and apply automated rules to preserve or increase comfort while economizing energy. In this work, we advocate for the inclusion of a dedicated system for sensors data storage and processing, based on Big Data technologies. This choice enables new potentials in terms of data analytics and applications development, the most obvious one being the ability to scale up seamlessly from one smart building to several, in the direction of smart areas and smart cities. We report in this paper on our system architecture and on several challenges we met in its elaboration, attempting to meet requirements of scalability, data processing, flexibility, interoperability and privacy. We also describe current and future end-user services that our platform will support, including historical data retrieval, visualisation, processing and alarms. The platform, called BBData - Big Building Data, is currently in production at the Smart Living Lab of Fribourg and is offered to several research teams to ease their work, to foster the sharing of historical data and to avoid that each project develops its own data gathering and processing pipeline.

  • P. Maergner, K. Riesen, R. Ingold, and A. Fischer, "A Structural Approach to Offline Signature Verification Using Graph Edit Distance," in Proc. 14th Int. Conf. on Document Analysis and Recognition, 2017.
    [Bibtex]
    @inproceedings{maergner17astructural,
    Author = {P. Maergner and K. Riesen and R. Ingold and A. Fischer},
    Booktitle = {Proc. 14th Int. Conf. on Document Analysis and Recognition},
    Date-Added = {2018-01-15 15:26:33 +0000},
    Date-Modified = {2018-01-15 15:27:15 +0000},
    Title = {A Structural Approach to Offline Signature Verification Using Graph Edit Distance},
    Year = {2017}}
  • P. Maergner, K. Riesen, R. Ingold, and A. Fischer, "Offline Signature Verification Based on Bipartite Approximation of Graph Edit Distance," in Proc. 18th Conf. of the International Graphonomics Society, 2017.
    [Bibtex]
    @inproceedings{maergner17offline,
    Author = {P. Maergner and K. Riesen and R. Ingold and A. Fischer},
    Booktitle = {Proc. 18th Conf. of the International Graphonomics Society},
    Date-Added = {2018-01-15 15:25:05 +0000},
    Date-Modified = {2018-01-15 15:25:51 +0000},
    Title = {Offline Signature Verification Based on Bipartite Approximation of Graph Edit Distance},
    Year = {2017}}
  • [PDF] K. Riesen, A. Fischer, and H. Bunke, "Improved Graph Edit Distance Approximation with Simulated Annealing," in Proc. 11th Int. Workshop on Graph-based Representations in Pattern Recognition, 2017, p. 222–231.
    [Bibtex]
    @inproceedings{riesen17improved,
    Author = {Riesen, K. and Fischer, A. and Bunke, H.},
    Booktitle = {Proc. 11th Int. Workshop on Graph-based Representations in Pattern Recognition},
    Date-Added = {2018-01-15 13:36:45 +0000},
    Date-Modified = {2018-01-15 13:37:58 +0000},
    Pages = {222--231},
    Title = {Improved Graph Edit Distance Approximation with Simulated Annealing},
    Year = {2017}}
  • [PDF] [DOI] F. Rossier, P. Lang, and J. Hennebert, "Near Real-Time Appliance Recognition Using Low Frequency Monitoring and Active Learning Methods," Energy Procedia, vol. 122, pp. 691-696, 2017.
    [Bibtex] [Abstract]
    @article{2017rossiercisbat,
    title = "Near Real-Time Appliance Recognition Using Low Frequency Monitoring and Active Learning Methods",
    journal = "Energy Procedia",
    volume = "122",
    pages = "691 - 696",
    year = "2017",
    note = "CISBAT 2017 International ConferenceFuture Buildings & Districts – Energy Efficiency from Nano to Urban Scale",
    issn = "1876-6102",
    doi = "10.1016/j.egypro.2017.07.371",
    url = "http://www.sciencedirect.com/science/article/pii/S1876610217329752",
    author = "Florian Rossier and Philippe Lang and Jean Hennebert",
    keywords = "NILM, Appliance recognition, active learning",
    abstract = "Abstract Electricity load monitoring in residential buildings has become an important task allowing for energy consumption understanding, indirect human activity recognition and occupancy modelling. In this context, Non Intrusive Load Monitoring (NILM) is an approach based on the analysis of the global electricity consumption signal of the habitation. Current NILM solutions are reaching good precision for the identification of electrical devices but at the cost of difficult setups with expensive equipments typically working at high frequency. In this work we propose to use a low-cost and easy to install low frequency sensor for which we improve the performances with an active machine learning strategy. At setup, the system is able to identify some appliances with typical signatures such as a fridge. During usage, the system detects unknown signatures and provides a user-friendly procedure to include new appliances and to improve the identification precision over time."
    }

    Electricity load monitoring in residential buildings has become an important task allowing for energy consumption understanding, indirect human activity recognition and occupancy modelling. In this context, Non Intrusive Load Monitoring (NILM) is an approach based on the analysis of the global electricity consumption signal of the habitation. Current NILM solutions are reaching good precision for the identification of electrical devices but at the cost of difficult setups with expensive equipments typically working at high frequency. In this work we propose to use a low-cost and easy to install low frequency sensor for which we improve the performances with an active machine learning strategy. At setup, the system is able to identify some appliances with typical signatures such as a fridge. During usage, the system detects unknown signatures and provides a user-friendly procedure to include new appliances and to improve the identification precision over time.
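
    A minimal Python sketch of the active-learning loop described above: classify appliance events from low-frequency features and, when confidence is low, query the user for a label and retrain. The classifier, threshold, and features are illustrative assumptions, not the authors' system.

    ```python
    # Sketch of an active-learning NILM loop: classify appliance events from
    # low-frequency power features; when the model is unsure, ask the user
    # to label the event and retrain. All values are invented placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    CONFIDENCE_THRESHOLD = 0.8

    X = np.array([[150, 30], [151, 28], [2000, 5]])   # [power step (W), duration (min)]
    y = np.array(["fridge", "fridge", "kettle"])      # seed labels from the setup phase
    model = RandomForestClassifier(random_state=0).fit(X, y)

    def classify_or_query(event, ask_user):
        global X, y, model
        proba = model.predict_proba([event])[0]
        if proba.max() >= CONFIDENCE_THRESHOLD:
            return model.classes_[proba.argmax()]
        label = ask_user(event)           # unknown signature: query the occupant
        X = np.vstack([X, event])
        y = np.append(y, label)
        model = RandomForestClassifier(random_state=0).fit(X, y)  # improve over time
        return label

    print(classify_or_query([900, 45], ask_user=lambda e: "dishwasher"))
    ```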

  • [PDF] F. Slimane, R. Ingold, and J. Hennebert, "ICDAR2017 Competition on Multi-font and Multi-size Digitally Represented Arabic Text," in Proc. 14th Int. Conf. on Document Analysis and Recognition ICDAR, 2017, p. 1466–1472.
    [Bibtex] [Abstract]
    @inproceedings{slimane2017icdar,
    Author = {Fouad Slimane and Rolf Ingold and Jean Hennebert},
    Booktitle = {Proc. 14th Int. Conf. on Document Analysis and Recognition ICDAR},
    Title = {ICDAR2017 Competition on Multi-font and Multi-size Digitally Represented Arabic Text},
    Pages = {1466--1472},
    Year = {2017},
    abstract = {This paper describes the organisation and results of the Arabic Recognition Competition: Multi-font Multi-size Digitally Represented Text held in the context of the 14th International Conference on Document Analysis and Recognition (ICDAR’2017), during November 10-15, 2017, Kyoto, Japan. This competition has used the freely available Arabic Printed Text Image (APTI) database. The first and second editions took place in ICDAR’2011 and ICDAR’2013, respectively. In this edition, we propose four challenges. Six research groups are participating in the competition with thirteen systems. These systems are compared using the font, font-size, font and fontsize, and character and word recognition rates. The systems were tested in a blind manner using the first 5000 images of APTI database set 6. A short description of the participating groups, their systems, the experimental setup, and the observed results are presented.}}

    This paper describes the organisation and results of the Arabic Recognition Competition: Multi-font Multi-size Digitally Represented Text held in the context of the 14th International Conference on Document Analysis and Recognition (ICDAR’2017), during November 10-15, 2017, Kyoto, Japan. This competition has used the freely available Arabic Printed Text Image (APTI) database. The first and second editions took place in ICDAR’2011 and ICDAR’2013, respectively. In this edition, we propose four challenges. Six research groups are participating in the competition with thirteen systems. These systems are compared using the font, font-size, font and fontsize, and character and word recognition rates. The systems were tested in a blind manner using the first 5000 images of APTI database set 6. A short description of the participating groups, their systems, the experimental setup, and the observed results are presented.

  • M. Stauffer, A. Fischer, and K. Riesen, "Ensembles for Graph-based Keyword Spotting in Historical Handwritten Documents," in Proc. 14th Int. Conf. on Document Analysis and Recognition, 2017.
    [Bibtex]
    @inproceedings{stauffer17ensembles,
    Author = {M. Stauffer and A. Fischer and K. Riesen},
    Booktitle = {Proc. 14th Int. Conf. on Document Analysis and Recognition},
    Date-Added = {2018-01-15 15:28:24 +0000},
    Date-Modified = {2018-01-15 15:28:57 +0000},
    Title = {Ensembles for Graph-based Keyword Spotting in Historical Handwritten Documents},
    Year = {2017}}
  • M. Stauffer, A. Fischer, and K. Riesen, "Speeding-Up Graph-based Keyword Spotting in Historical Handwritten Documents," in Proc. 11th Int. Workshop on Graph-based Representations in Pattern Recognition, 2017, p. 83–93.
    [Bibtex]
    @inproceedings{stauffer17speedingup,
    Author = {M. Stauffer and A. Fischer and K. Riesen},
    Booktitle = {Proc. 11th Int. Workshop on Graph-based Representations in Pattern Recognition},
    Date-Added = {2018-01-15 13:53:08 +0000},
    Date-Modified = {2018-01-15 13:54:31 +0000},
    Pages = {83--93},
    Title = {Speeding-Up Graph-based Keyword Spotting in Historical Handwritten Documents},
    Year = {2017}}
  • M. Stauffer, T. Tschachtli, A. Fischer, and H. Bunke, "A Survey on Applications of Bipartite Graph Edit Distance," in Proc. 11th Int. Workshop on Graph-based Representations in Pattern Recognition, 2017, p. 242–252.
    [Bibtex]
    @inproceedings{stauffer17asurvey,
    Author = {Stauffer, M. and Tschachtli, T. and Fischer, A. and Bunke, H.},
    Booktitle = {Proc. 11th Int. Workshop on Graph-based Representations in Pattern Recognition},
    Date-Added = {2018-01-15 13:38:03 +0000},
    Date-Modified = {2018-01-15 13:39:30 +0000},
    Pages = {242--252},
    Title = {A Survey on Applications of Bipartite Graph Edit Distance},
    Year = {2017}}
  • [PDF] O. Zayene, J. Hennebert, R. Ingold, and N. E. BenAmara, "ICDAR2017 Competition on Arabic Text Detection and Recognition in Multi-resolution Video Frames," in Proc. 14th Int. Conf. on Document Analysis and Recognition ICDAR, 2017, p. 1460–1465.
    [Bibtex]
    @inproceedings{zayene2017icdar,
    Author = {Oussama Zayene and Jean Hennebert and Rolf Ingold and Najoua Essoukri BenAmara},
    Booktitle = {Proc. 14th Int. Conf. on Document Analysis and Recognition ICDAR},
    Title = {ICDAR2017 Competition on Arabic Text Detection and Recognition in Multi-resolution Video Frames},
    Pages = {1460--1465},
    Year = {2017}}
  • [PDF] F. Bapst and L. Linder, "More Power (and fun!) in Java Numbers," in Colloque numérique suisse 2016 – Book of Abstracts, 2016, p. 15.
    [Bibtex]
    @InProceedings{bapst-cojac-numeric-colloquium-15,
    author = {Fr{\'e}d{\'e}ric Bapst and Lucy Linder},
    title = {{M}ore {P}ower (and fun!) in {J}ava {N}umbers},
    booktitle = {Colloque num{\'e}rique suisse 2016 -- Book of Abstracts},
    pages = {15},
    year = {2016},
    month = {April},
    url = {http://math.unifr.ch/colloqnum2016/programme.php/}
    }
  • [PDF] [DOI] K. Chen, M. Seuret, M. Liwicki, J. Hennebert, C. Liu, and R. Ingold, "Page Segmentation for Historical Handwritten Document Images Using Conditional Random Fields," in 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016, pp. 90-95.
    [Bibtex] [Abstract]
    @INPROCEEDINGS{chen2016:icfhr,
    author={Kai Chen and Mathias Seuret and Marcus Liwicki and Jean Hennebert and Cheng-Lin Liu and Rolf Ingold},
    booktitle={2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR)},
    title={Page Segmentation for Historical Handwritten Document Images Using Conditional Random Fields},
    year={2016},
    pages={90-95},
    abstract={In this paper, we present a Conditional Random Field (CRF) model to deal with the problem of segmenting handwritten historical document images into different regions. We consider page segmentation as a pixel-labeling problem, i.e., each pixel is assigned to one of a set of labels. Features are learned from pixel intensity values with stacked convolutional autoencoders in an unsupervised manner. The features are used for the purpose of initial classification with a multilayer perceptron. Then a CRF model is introduced for modeling the local and contextual information jointly in order to improve the segmentation. For the purpose of decreasing the time complexity, we perform labeling at superpixel level. In the CRF model, graph nodes are represented by superpixels. The label of each pixel is determined by the label of the superpixel to which it belongs. Experiments on three public datasets demonstrate that, compared to previous methods, the proposed method achieves more accurate segmentation results and is much faster.},
    keywords={document image processing,graph theory,handwriting recognition,image classification,image segmentation,multilayer perceptrons, unsupervised learning, CRF, conditional random field, historical handwritten document, stacked convolutional autoencoders, superpixel, Autoencoder},
    doi={10.1109/ICFHR.2016.0029},
    ISSN={2167-6445},
    month={Oct},
    pdf={http://www.hennebert.org/download/publications/icfhr-2016-page-segmentation-for-historical-handwritten-document-images-using-conditional-random-fields.pdf}}

    In this paper, we present a Conditional Random Field (CRF) model to deal with the problem of segmenting handwritten historical document images into different regions. We consider page segmentation as a pixel-labeling problem, i.e., each pixel is assigned to one of a set of labels. Features are learned from pixel intensity values with stacked convolutional autoencoders in an unsupervised manner. The features are used for the purpose of initial classification with a multilayer perceptron. Then a CRF model is introduced for modeling the local and contextual information jointly in order to improve the segmentation. For the purpose of decreasing the time complexity, we perform labeling at superpixel level. In the CRF model, graph nodes are represented by superpixels. The label of each pixel is determined by the label of the superpixel to which it belongs. Experiments on three public datasets demonstrate that, compared to previous methods, the proposed method achieves more accurate segmentation results and is much faster.
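
    The inference step this abstract describes can be sketched compactly: unary costs from the initial classifier (an MLP in the paper) plus a pairwise smoothness term over the superpixel adjacency graph. The greedy ICM decoder and the Potts pairwise term below are illustrative stand-ins for the paper's exact CRF inference, assuming Python with numpy.

    import numpy as np

    def icm_label(unary, edges, lam=1.0, n_iter=10):
        """Greedy ICM decoding of a Potts-style CRF over superpixels.

        unary: (n_nodes, n_labels) costs, e.g. negative log-probabilities
               from the initial classifier.
        edges: list of (i, j) superpixel adjacencies.
        lam:   weight of the pairwise smoothness term (assumed Potts)."""
        n_nodes, n_labels = unary.shape
        labels = unary.argmin(axis=1)  # start from the unary optimum
        neigh = [[] for _ in range(n_nodes)]
        for i, j in edges:
            neigh[i].append(j)
            neigh[j].append(i)
        for _ in range(n_iter):
            changed = False
            for i in range(n_nodes):
                # cost per label = unary + penalty per disagreeing neighbour
                pairwise = np.array([sum(labels[j] != l for j in neigh[i])
                                     for l in range(n_labels)])
                best = int((unary[i] + lam * pairwise).argmin())
                if best != labels[i]:
                    labels[i], changed = best, True
            if not changed:
                break
        return labels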

  • M. Diaz, A. Fischer, M. A. Ferrer, and R. Plamondon, "Dynamic Signature Verification System Based on One Real Signature," IEEE Trans. on Cybernetics, vol. PP, iss. 99, p. 1–12, 2016.
    [Bibtex]
    @article{diaz16dynamic,
    Author = {Diaz, M. and Fischer, A. and Ferrer, M.A. and Plamondon, R.},
    Date-Added = {2017-01-15 10:40:54 +0000},
    Date-Modified = {2017-01-15 10:42:20 +0000},
    Journal = {IEEE Trans. on Cybernetics},
    Number = {99},
    Pages = {1--12},
    Title = {Dynamic Signature Verification System Based on One Real Signature},
    Volume = {PP},
    Year = {2016}}
  • A. Fischer, K. Riesen, and H. Bunke, "Improved quadratic time approximation of graph edit distance by combining Hausdorff matching and greedy assignment," Pattern Recognition Letters, vol. PP, iss. 99, p. 1–8, 2016.
    [Bibtex]
    @article{fischer16improved,
    Author = {Fischer, A. and Riesen, K. and Bunke, H.},
    Date-Added = {2017-01-15 10:44:16 +0000},
    Date-Modified = {2017-01-15 10:44:55 +0000},
    Journal = {Pattern Recognition Letters},
    Number = {99},
    Pages = {1--8},
    Title = {Improved quadratic time approximation of graph edit distance by combining {H}ausdorff matching and greedy assignment},
    Volume = {PP},
    Year = {2016}}
  • A. Fischer and R. Plamondon, "Signature Verification Based on the Kinematic Theory of Rapid Human Movements," IEEE Trans. on Human-Machine Systems, vol. PP, iss. 99, p. 1–12, 2016.
    [Bibtex]
    @article{fischer16signature,
    Author = {Fischer, A. and Plamondon, R.},
    Date-Added = {2017-01-15 10:37:55 +0000},
    Date-Modified = {2017-01-15 10:39:59 +0000},
    Journal = {IEEE Trans. on Human-Machine Systems},
    Number = {99},
    Pages = {1--12},
    Title = {Signature Verification Based on the Kinematic Theory of Rapid Human Movements},
    Volume = {PP},
    Year = {2016}}
  • A. Fischer, S. Grimm, V. Bernasconi, A. Garz, P. Buchs, M. Caon, O. A. Khaled, E. Mugellini, F. Meyer, and C. Wagner, "Nautilus: Real-Time Interaction Between Dancers and Augmented Reality with Pixel-Cloud Avatars," in Proc. 28ième conférence francophone sur l'Interaction Homme-Machine, 2016, pp. 50-57.
    [Bibtex]
    @inproceedings{fischer16nautilus,
    Author = {Fischer, Andreas and Grimm, Sara and Bernasconi, Valentine and Garz, Angelika and Buchs, Pascal and Caon, Maurizio and Khaled, Omar Abou and Mugellini, Elena and Meyer, Franziska and Wagner, Claudia},
    Booktitle = {Proc. 28i{\`e}me conf{\'e}rence francophone sur l'Interaction Homme-Machine},
    Date-Added = {2017-01-16 12:33:38 +0000},
    Date-Modified = {2017-01-16 12:36:12 +0000},
    Pages = {50-57},
    Title = {Nautilus: Real-Time Interaction Between Dancers and Augmented Reality with Pixel-Cloud Avatars},
    Year = {2016}}
  • A. Garz, M. Seuret, A. Fischer, and R. Ingold, "A User-Centered Segmentation Method for Complex Historical Manuscripts Based on Document Graphs," IEEE Trans. on Human-Machine Systems, vol. PP, iss. 99, p. 1–13, 2016.
    [Bibtex]
    @article{garz16auser,
    Author = {Garz, A. and Seuret, M. and Fischer, A. and Ingold, R.},
    Date-Added = {2017-01-15 10:33:14 +0000},
    Date-Modified = {2017-01-15 10:36:56 +0000},
    Journal = {IEEE Trans. on Human-Machine Systems},
    Number = {99},
    Pages = {1--13},
    Title = {A User-Centered Segmentation Method for Complex Historical Manuscripts Based on Document Graphs},
    Volume = {PP},
    Year = {2016}}
  • A. Garz, M. Seuret, F. Simistira, A. Fischer, and R. Ingold, "Creating ground truth for historical manuscripts with document graphs and scribbling interaction," in Proc. 12th Int. Workshop on Document Analysis Systems, 2016, p. 126–131.
    [Bibtex]
    @inproceedings{garz16creating,
    Author = {Garz, A. and Seuret, M. and Simistira, F. and Fischer, A. and Ingold, R.},
    Booktitle = {Proc. 12th Int. Workshop on Document Analysis Systems},
    Date-Added = {2017-01-16 12:29:10 +0000},
    Date-Modified = {2018-01-15 13:20:44 +0000},
    Pages = {126--131},
    Title = {Creating ground truth for historical manuscripts with document graphs and scribbling interaction},
    Year = {2016}}
  • A. Garz, M. Würsch, A. Fischer, and R. Ingold, "Simple and Fast Geometrical Descriptors for Writer Identification," in Proc. 23rd Int. Conf. on Document Recognition and Retrieval, 2016, p. 1–12.
    [Bibtex]
    @inproceedings{garz16simple,
    Author = {Garz, A. and W{\"u}rsch, M. and Fischer, A. and Ingold, R.},
    Booktitle = {Proc. 23rd Int. Conf. on Document Recognition and Retrieval},
    Date-Added = {2017-01-16 12:29:23 +0000},
    Date-Modified = {2017-01-16 12:32:58 +0000},
    Pages = {1--12},
    Title = {Simple and Fast Geometrical Descriptors for Writer Identification},
    Year = {2016}}
  • A. Garz, M. Seuret, A. Fischer, and R. Ingold, "GraphManuscribble: Interact intuitively with digital facsimiles," in Proc. 2nd Int. Conf. on Natural Sciences and Technology in Manuscript Analysis, 2016, p. 61–63.
    [Bibtex]
    @inproceedings{garz16graphmanuscribble,
    Author = {A. Garz and M. Seuret and A. Fischer and R. Ingold},
    Booktitle = {Proc. 2nd Int. Conf. on Natural Sciences and Technology in Manuscript Analysis},
    Date-Added = {2018-01-15 13:26:18 +0000},
    Date-Modified = {2018-01-15 13:27:31 +0000},
    Pages = {61--63},
    Title = {GraphManuscribble: Interact intuitively with digital facsimiles},
    Year = {2016}}
  • [PDF] [DOI] N. R. Howe, A. Fischer, and B. Wicht, "Inkball Models as Features for Handwriting Recognition," in 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016, pp. 96-101.
    [Bibtex] [Abstract]
    @INPROCEEDINGS{2016howeicfhr,
    author={N. R. Howe and A. Fischer and B. Wicht},
    booktitle={2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR)},
    title={Inkball Models as Features for Handwriting Recognition},
    year={2016},
    pages={96-101},
    abstract={Inkball models provide a tool for matching and comparison of spatially structured markings such as handwritten characters and words. Hidden Markov models offer a framework for decoding a stream of text in terms of the most likely sequence of causal states. Prior work with HMM has relied on observation of features that are correlated with underlying characters, without modeling them directly. This paper proposes to use the results of inkball-based character matching as a feature set input directly to the HMM. Experiments indicate that this technique outperforms other tested methods at handwritten word recognition on a common benchmark when applied without normalization or text deslanting.},
    keywords={Computational modeling;Handwriting recognition;Hidden Markov models;Mathematical model;Prototypes;Skeleton;Two dimensional displays;Handwriting recognition;Hidden Markov models;Image processing;Pattern recognition},
    doi={10.1109/ICFHR.2016.0030},
    ISSN={2167-6445},
    month={Oct},}

    Inkball models provide a tool for matching and comparison of spatially structured markings such as handwritten characters and words. Hidden Markov models offer a framework for decoding a stream of text in terms of the most likely sequence of causal states. Prior work with HMM has relied on observation of features that are correlated with underlying characters, without modeling them directly. This paper proposes to use the results of inkball-based character matching as a feature set input directly to the HMM. Experiments indicate that this technique outperforms other tested methods at handwritten word recognition on a common benchmark when applied without normalization or text deslanting.
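
    The HMM decoding referred to here is standard Viterbi; a log-domain sketch follows, where the per-frame emission scores would be derived from inkball-based character matching. The array layout is an illustrative assumption, not the authors' code.

    import numpy as np

    def viterbi(log_pi, log_A, log_B):
        """Most likely HMM state sequence, computed in the log domain.

        log_pi: (S,)   initial state log-probabilities
        log_A:  (S, S) transition log-probabilities
        log_B:  (T, S) per-frame emission log-likelihoods, e.g. scores
                derived from inkball model matching"""
        T, S = log_B.shape
        delta = log_pi + log_B[0]
        back = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_A  # (S, S): previous -> current
            back[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + log_B[t]
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]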

  • [PDF] [DOI] K. Chen, C. Liu, M. Seuret, M. Liwicki, J. Hennebert, and R. Ingold, "Page Segmentation for Historical Document Images Based on Superpixel Classification with Unsupervised Feature Learning," in 2016 12th IAPR Workshop on Document Analysis Systems (DAS), 2016, pp. 299-304.
    [Bibtex] [Abstract]
    @inproceedings{chen2016:das,
    author = {Kai Chen and Cheng-Lin Liu and Mathias Seuret and Marcus Liwicki and Jean Hennebert and Rolf Ingold},
    title = {Page Segmentation for Historical Document Images Based on Superpixel Classification with Unsupervised Feature Learning},
    booktitle = {2016 12th IAPR Workshop on Document Analysis Systems (DAS)},
    year = {2016},
    pages = {299-304},
    doi = {10.1109/DAS.2016.13},
    publisher = {IEEE Computer Society},
    address = {Los Alamitos, CA, USA},
    abstract={In this paper, we present an efficient page segmentation method for historical document images. Many existing methods either rely on hand-crafted features or perform rather slowly as they treat the problem as a pixel-level assignment problem. In order to create a feasible method for real applications, we propose to use superpixels as basic units of segmentation, and features are learned directly from pixels. An image is first oversegmented into superpixels with the simple linear iterative clustering (SLIC) algorithm. Then, each superpixel is represented by the features of its central pixel. The features are learned from pixel intensity values with stacked convolutional autoencoders in an unsupervised manner. A support vector machine (SVM) classifier is used to classify superpixels into four classes: periphery, background, text block, and decoration. Finally, the segmentation results are refined by a connected component based smoothing procedure. Experiments on three public datasets demonstrate that compared to our previous method, the proposed method is much faster and achieves comparable segmentation results. Additionally, far fewer pixels are used for classifier training.},
    keywords={Image segmentation, Training, Feature extraction, Support vector machines, Clustering algorithms, Classification algorithms, Labeling,autoencoder, page segmentation, layout analysis, historical document image, superpixel, SLIC},
    pdf={http://www.hennebert.org/download/publications/iapr-2016-page-segmentation-for-historical-document-images-based-on-superpixel-classification-with-unsupervised-feature-learning.pdf},
    }

    In this paper, we present an efficient page segmentation method for historical document images. Many existing methods either rely on hand-crafted features or perform rather slowly as they treat the problem as a pixel-level assignment problem. In order to create a feasible method for real applications, we propose to use superpixels as basic units of segmentation, and features are learned directly from pixels. An image is first oversegmented into superpixels with the simple linear iterative clustering (SLIC) algorithm. Then, each superpixel is represented by the features of its central pixel. The features are learned from pixel intensity values with stacked convolutional autoencoders in an unsupervised manner. A support vector machine (SVM) classifier is used to classify superpixels into four classes: periphery, background, text block, and decoration. Finally, the segmentation results are refined by a connected component based smoothing procedure. Experiments on three public datasets demonstrate that compared to our previous method, the proposed method is much faster and achieves comparable segmentation results. Additionally, far fewer pixels are used for classifier training.
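
    A minimal sketch of the superpixel pipeline, assuming scikit-image and scikit-learn are available and the input is an RGB page image. Raw central-pixel intensities stand in for the learned autoencoder features, so this is a simplified illustration rather than the paper's feature learning.

    import numpy as np
    from skimage.segmentation import slic
    from sklearn.svm import SVC

    def superpixel_features(image, n_segments=300):
        """Oversegment with SLIC and represent each superpixel by the
        intensity of (approximately) its central pixel; the coordinate
        mean may fall outside a concave superpixel, which we accept
        here for brevity."""
        seg = slic(image, n_segments=n_segments, compactness=10)
        ids, feats = [], []
        for s in np.unique(seg):
            ys, xs = np.nonzero(seg == s)
            cy, cx = int(ys.mean()), int(xs.mean())
            feats.append(image[cy, cx].ravel())
            ids.append(s)
        return seg, np.array(ids), np.array(feats)

    # With ground-truth labels per superpixel (periphery, background,
    # text block, decoration), an SVM closes the pipeline:
    # clf = SVC(kernel="rbf").fit(train_feats, train_labels)
    # pred = clf.predict(test_feats)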

  • M. Kunz, B. Wolf, H. Schulze, D. Atlan, T. Walles, H. Walles, and T. Dandekar, "Non-coding RNAs in lung cancer: Contribution of bioinformatics analysis to the development of non-invasive diagnostic tools," Human Genetics and Genomics, 2016.
    [Bibtex] [Abstract]
    @article{kunz:2606,
    author = "Meik Kunz and Beat Wolf and Harald Schulze and David Atlan
    and Thorsten Walles and Heike Walles and Thomas Dandekar",
    title = "Non-coding RNAs in lung cancer: Contribution of
    bioinformatics analysis to the development of non-invasive
    diagnostic tools: Human Genetics and Genomics",
    year = "2016",
    journal = "Human Genetics and Genomics",
    abstract = "Lung cancer is currently the leading cause of cancer
    related mortality due to late diagnosis and limited
    treatment intervention. Non-coding RNAs are not translated
    into proteins and have emerged as fundamental regulators of
    gene expression. Recent studies reported that microRNAs and
    long non-coding RNAs are involved in lung cancer development
    and progression. Moreover, they appear as new promising
    non-invasive biomarkers for early lung cancer diagnosis.
    Here, we highlight their potential as biomarker in lung
    cancer and present how bioinformatics can contribute to the
    development of non-invasive diagnostic tools. For this, we
    discuss several bioinformatics algorithms and software tools
    for a comprehensive understanding and functional
    characterization of microRNAs and long non-coding RNAs.",
    keywords = "lung cancer",
    keywords = "non-invasive biomarkers",
    keywords = "miRNAs",
    keywords = "lncRNAs",
    keywords = "early diagnosis",
    keywords = "bioinformatics",
    keywords = "algorithm",
    }

    Lung cancer is currently the leading cause of cancer related mortality due to late diagnosis and limited treatment intervention. Non-coding RNAs are not translated into proteins and have emerged as fundamental regulators of gene expression. Recent studies reported that microRNAs and long non-coding RNAs are involved in lung cancer development and progression. Moreover, they appear as new promising non-invasive biomarkers for early lung cancer diagnosis. Here, we highlight their potential as biomarker in lung cancer and present how bioinformatics can contribute to the development of non-invasive diagnostic tools. For this, we discuss several bioinformatics algorithms and software tools for a comprehensive understanding and functional characterization of microRNAs and long non-coding RNAs.

  • [PDF] L. Linder and F. Bapst, Tutoriel sur COJAC - Sniffeur de problèmes numériques et usine de nombres enrichis pour Java, 2016.
    [Bibtex]
    @Misc{linder-cojac-dvp-16,
    author = {Lucy Linder and Fr{\'e}d{\'e}ric Bapst},
    title = {Tutoriel sur {COJAC} - {S}niffeur de probl{\`e}mes num{\'e}riques et
    usine de nombres enrichis pour Java},
    howpublished = {On java.developpez.com},
    month = {January},
    year = {2016},
    url = {http://lucy-linder.developpez.com/tutoriels/java/introduction-cojac/}
    }
  • [PDF] A. Ridi, C. Gisler, and J. Hennebert, "Aggregation procedure of Gaussian Mixture Models for additive features," in 23rd International Conference on Pattern Recognition (ICPR), 2016, pp. 2545-2550.
    [Bibtex] [Abstract]
    @conference{ridi-icpr2016,
    author = "Antonio Ridi and Christophe Gisler and Jean Hennebert",
    abstract = "In this work we provide details on a new and effective approach able to generate Gaussian Mixture Models (GMMs) for the classification of aggregated time series. More specifically, our procedure can be applied to time series that are aggregated together by adding their features. The procedure takes advantage of the additive property of the Gaussians that complies with the additive property of the features. Our goal is to classify aggregated time series, i.e. we aim to identify the classes of the single time series contributing to the total. The standard approach consists in training the models using the combination of several time series coming from different classes. However, this has the drawback of being a very slow operation given the amount of data. The proposed approach, called GMMs aggregation procedure, addresses this problem. It consists of three steps: (i) modeling the independent classes, (ii) generation of the models for the class combinations and (iii) simplification of the generated models. We show the effectiveness of our approach by using time series in the context of electrical appliance consumption, where the time series are aggregated by adding the active and reactive power. Finally, we compare the proposed approach with the standard procedure.",
    booktitle = "23rd International Conference on Pattern Recognition (ICPR)",
    editor = "IEEE",
    keywords = "machine learning, electric signal, appliance signatures, GMMs",
    month = "December",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "2545-2550",
    title = "{A}ggregation procedure of {G}aussian {M}ixture {M}odels for additive features",
    url = "http://www.hennebert.org/download/publications/icpr-2016-aggregation-procedure-of-Gaussian-mixture-models-for-additive-features.pdf",
    year = "2016",
    }

    In this work we provide details on a new and effective approach able to generate Gaussian Mixture Models (GMMs) for the classification of aggregated time series. More specifically, our procedure can be applied to time series that are aggregated together by adding their features. The procedure takes advantage of the additive property of the Gaussians that complies with the additive property of the features. Our goal is to classify aggregated time series, i.e. we aim to identify the classes of the single time series contributing to the total. The standard approach consists in training the models using the combination of several time series coming from different classes. However, this has the drawback of being a very slow operation given the amount of data. The proposed approach, called GMMs aggregation procedure, addresses this problem. It consists of three steps: (i) modeling the independent classes, (ii) generation of the models for the class combinations and (iii) simplification of the generated models. We show the effectiveness of our approach by using time series in the context of electrical appliance consumption, where the time series are aggregated by adding the active and reactive power. Finally, we compare the proposed approach with the standard procedure.
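
    The additive property this abstract relies on can be made concrete: if X and Y are independent mixtures, X + Y is a mixture whose components pair up, with weights multiplying and means and variances adding. A small numpy sanity check with illustrative parameters (not the paper's models):

    import numpy as np

    # Two independent 1-D GMMs: (weights, means, variances).
    w1, mu1, var1 = np.array([0.6, 0.4]), np.array([0.0, 5.0]), np.array([1.0, 2.0])
    w2, mu2, var2 = np.array([0.7, 0.3]), np.array([1.0, 3.0]), np.array([0.5, 1.5])

    # Aggregated model: every pair of components combines additively.
    w = np.outer(w1, w2).ravel()            # weights multiply
    mu = np.add.outer(mu1, mu2).ravel()     # means add
    var = np.add.outer(var1, var2).ravel()  # variances add

    # Monte-Carlo check against actually summing independent samples.
    rng = np.random.default_rng(0)

    def sample(wts, mus, vs, n=200_000):
        k = rng.choice(len(wts), size=n, p=wts)
        return rng.normal(mus[k], np.sqrt(vs[k]))

    direct = sample(w, mu, var)
    summed = sample(w1, mu1, var1) + sample(w2, mu2, var2)
    print(direct.mean(), summed.mean())  # first moments agree
    print(direct.var(), summed.var())    # second moments agree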

  • K. Riesen, A. Fischer, and H. Bunke, "Approximation of Graph Edit Distance by Means of a Utility Matrix," in Proc. 7th Int. Workshop on Artificial Neural Networks in Pattern Recognition, 2016, p. 185–194.
    [Bibtex]
    @inproceedings{riesen16approximation,
    Author = {Riesen, K. and Fischer, A. and Bunke, H.},
    Booktitle = {Proc. 7th Int. Workshop on Artificial Neural Networks in Pattern Recognition},
    Date-Added = {2017-01-15 10:24:15 +0000},
    Date-Modified = {2017-01-15 10:25:50 +0000},
    Pages = {185--194},
    Title = {Approximation of Graph Edit Distance by Means of a Utility Matrix},
    Year = {2016}}
  • M. Stauffer, A. Fischer, and K. Riesen, "A Novel Graph Database for Handwritten Word Images," in Proc. Int. Workshop on Structural, Syntactic, and Statistical Pattern Recognition, 2016, p. 553–563.
    [Bibtex]
    @inproceedings{stauffer16anovel,
    Author = {Stauffer, M. and Fischer, A. and Riesen, K.},
    Booktitle = {Proc. Int. Workshop on Structural, Syntactic, and Statistical Pattern Recognition},
    Date-Added = {2017-01-15 10:19:52 +0000},
    Date-Modified = {2017-01-15 10:21:23 +0000},
    Pages = {553--563},
    Title = {A Novel Graph Database for Handwritten Word Images},
    Year = {2016}}
  • M. Stauffer, A. Fischer, and K. Riesen, "Graph-Based Keyword Spotting in Historical Handwritten Documents," in Proc. Int. Workshop on Structural, Syntactic, and Statistical Pattern Recognition, 2016, p. 564–573.
    [Bibtex]
    @inproceedings{stauffer16graphbased,
    Author = {Stauffer, M. and Fischer, A. and Riesen, K.},
    Booktitle = {Proc. Int. Workshop on Structural, Syntactic, and Statistical Pattern Recognition},
    Date-Added = {2017-01-15 10:16:12 +0000},
    Date-Modified = {2017-01-15 10:19:36 +0000},
    Pages = {564--573},
    Title = {Graph-Based Keyword Spotting in Historical Handwritten Documents},
    Year = {2016}}
  • B. Wicht, A. Fischer, and J. Hennebert, "Deep Learning Features for Handwritten Keyword Spotting," in 23rd International Conference on Pattern Recognition (ICPR), 2016, pp. 3423-3428.
    [Bibtex] [Abstract]
    @conference{wicht:icpr2016,
    author = "Baptiste Wicht and Andreas Fischer and Jean Hennebert",
    abstract = "Deep learning had a significant impact on diverse pattern recognition tasks in the recent past. In this paper, we investigate its potential for keyword spotting in handwritten documents by designing a novel feature extraction system based on Convolutional Deep Belief Networks. Sliding window features are learned from word images in an unsupervised manner. The proposed features are evaluated both for template-based word spotting with Dynamic Time Warping and for learning-based word spotting with Hidden Markov Models. In an experimental evaluation on three benchmark data sets with historical and modern handwriting, it is shown that the proposed learned features outperform three standard sets of handcrafted features.",
    booktitle = "23rd International Conference on Pattern Recognition (ICPR)",
    editor = "IEEE",
    keywords = "Handwriting Recognition, Deep learning, Artificial neural networks, keyword spotting",
    month = "December",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "3423-3428",
    title = "{D}eep {L}earning {F}eatures for {H}andwritten {K}eyword {S}potting",
    url = "http://www.hennebert.org/download/publications/icpr-2016-deep-learning-features-for-handwritten-keyword-spotting.pdf",
    year = "2016",
    }

    Deep learning had a significant impact on diverse pattern recognition tasks in the recent past. In this paper, we investigate its potential for keyword spotting in handwritten documents by designing a novel feature extraction system based on Convolutional Deep Belief Networks. Sliding window features are learned from word images in an unsupervised manner. The proposed features are evaluated both for template-based word spotting with Dynamic Time Warping and for learning-based word spotting with Hidden Markov Models. In an experimental evaluation on three benchmark data sets with historical and modern handwriting, it is shown that the proposed learned features outperform three standard sets of handcrafted features.
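
    The template-based branch of this evaluation ranks word images by DTW distance between their sliding-window feature sequences (the learned features, in the paper). Below is the generic, unconstrained DTW in Python, not the authors' code.

    import numpy as np

    def dtw(a, b):
        """Dynamic Time Warping distance between feature sequences
        a (Ta, d) and b (Tb, d), with Euclidean local cost and no
        warping-path constraints."""
        Ta, Tb = len(a), len(b)
        D = np.full((Ta + 1, Tb + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, Ta + 1):
            for j in range(1, Tb + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[Ta, Tb] / (Ta + Tb)  # length-normalised distance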

  • [DOI] B. Wicht, A. Fischer, and J. Hennebert, "On CPU Performance Optimization of Restricted Boltzmann Machine and Convolutional RBM," in Artificial Neural Networks in Pattern Recognition: 7th IAPR TC3 Workshop, ANNPR 2016, Ulm, Germany, September 28–30, 2016, Proceedings, F. Schwenker, H. M. Abbas, N. El Gayar, and E. Trentin, Eds., Cham: Springer International Publishing, 2016, p. 163–174.
    [Bibtex]
    @inbook{wicht:2016annpr,
    author = "Baptiste Wicht and Andreas Fischer and Jean Hennebert",
    address = "Cham",
    booktitle = "Artificial Neural Networks in Pattern Recognition: 7th IAPR TC3 Workshop, ANNPR 2016, Ulm, Germany, September 28--30, 2016, Proceedings",
    doi = "10.1007/978-3-319-46182-3_14",
    editor = "Schwenker, Friedhelm
    and Abbas, M. Hazem
    and El Gayar, Neamat
    and Trentin, Edmondo",
    isbn = "978-3-319-46182-3",
    pages = "163--174",
    publisher = "Springer International Publishing",
    title = "{O}n {CPU} {P}erformance {O}ptimization of {R}estricted {B}oltzmann {M}achine and {C}onvolutional {RBM}",
    url = "http://dx.doi.org/10.1007/978-3-319-46182-3_14",
    year = "2016",
    }
  • [DOI] B. Wicht, A. Fischer, and J. Hennebert, "Keyword Spotting with Convolutional Deep Belief Networks and Dynamic Time Warping," in Artificial Neural Networks and Machine Learning – ICANN 2016: 25th International Conference on Artificial Neural Networks, Barcelona, Spain, September 6-9, 2016, Proceedings, Part II, A. E. P. Villa, P. Masulli, and A. J. Pons Rivero, Eds., Cham: Springer International Publishing, 2016, p. 113–120.
    [Bibtex]
    @Inbook{wicht:2016icann,
    author="Wicht, Baptiste
    and Fischer, Andreas
    and Hennebert, Jean",
    editor="Villa, Alessandro E.P.
    and Masulli, Paolo
    and Pons Rivero, Antonio Javier",
    title="Keyword Spotting with Convolutional Deep Belief Networks and Dynamic Time Warping",
    bookTitle="Artificial Neural Networks and Machine Learning -- ICANN 2016: 25th International Conference on Artificial Neural Networks, Barcelona, Spain, September 6-9, 2016, Proceedings, Part II",
    year="2016",
    publisher="Springer International Publishing",
    address="Cham",
    pages="113--120",
    isbn="978-3-319-44781-0",
    doi="10.1007/978-3-319-44781-0_14",
    url="http://dx.doi.org/10.1007/978-3-319-44781-0_14"
    }
  • B. Wolf, P. Kuonen, and T. Dandekar, "GNATY: Optimized NGS variant calling and coverage analysis," IWBBIO 2016: International Work-Conference on Bioinformatics and Biomedical Engineering, 2016.
    [Bibtex] [Abstract]
    @article{Wolf:2607,
    author = "Beat Wolf and Pierre Kuonen and Thomas Dandekar",
    title = "GNATY: Optimized NGS variant calling and coverage analysis
    : IWBBIO 2016. International Work-Conference on
    Bioinformatics and Biomedical Engineering",
    month = "avr",
    year = "2016",
    journal = "IWBBIO 2016",
    abstract = "Next generation sequencing produces an ever increasing
    amount of data, requiring increasingly fast computing
    infrastructures to keep up. We present GNATY, a collection
    of tools for NGS data analysis, aimed at optimizing parts of
    the sequence analysis process to reduce the hardware
    requirements. The tools are developed with efficiency in
    mind, using multithreading and other techniques to speed up
    the analysis. The architecture has been verified by
    implementing a variant caller based on the Varscan 2 variant
    calling model, achieving a speedup of nearly 18 times.
    Additionally, the flexibility of the algorithm is also
    demonstrated by applying it to coverage analysis. Compared
    to BEDtools 2, GNATY produced the same analysis results in
    only half the time. The speed increase allows for faster
    data analysis and more flexibility to analyse the
    same sample using multiple settings. The software is freely
    available for non-commercial usage at
    http://gnaty.phenosystems.com/",
    keywords = "Next generation sequencing",
    keywords = "Variant calling",
    keywords = "Algorithmics",
    }

    Next generation sequencing produces an ever increasing amount of data, requiring increasingly fast computing infrastructures to keep up. We present GNATY, a collection of tools for NGS data analysis, aimed at optimizing parts of the sequence analysis process to reduce the hardware requirements. The tools are developed with efficiency in mind, using multithreading and other techniques to speed up the analysis. The architecture has been verified by implementing a variant caller based on the Varscan 2 variant calling model, achieving a speedup of nearly 18 times. Additionally, the flexibility of the algorithm is also demonstrated by applying it to coverage analysis. Compared to BEDtools 2, GNATY produced the same analysis results in only half the time. The speed increase allows for faster data analysis and more flexibility to analyse the same sample using multiple settings. The software is freely available for non-commercial usage at http://gnaty.phenosystems.com/
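
    For illustration, one linear-time way to compute per-base coverage, the kind of operation a fast coverage tool must optimise, is a difference array followed by a prefix sum. This sketch is illustrative only and is not GNATY's actual algorithm.

    import numpy as np

    def coverage(read_intervals, ref_len):
        """Per-base coverage from half-open read intervals [start, end):
        one pass over the reads, one prefix sum over the reference."""
        diff = np.zeros(ref_len + 1, dtype=np.int64)
        for start, end in read_intervals:
            diff[start] += 1
            diff[end] -= 1
        return np.cumsum(diff[:-1])

    print(coverage([(0, 4), (2, 6)], 8))  # -> [1 1 2 2 1 1 0 0]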

  • [PDF] [DOI] O. Zayene, N. Hajjej, S. Masmoudi Touj, S. Ben Mansour, J. Hennebert, R. Ingold, and N. Amara, "ICPR2016 contest on Arabic Text detection and Recognition in video frames - AcTiVComp," in 23rd International Conference on Pattern Recognition (ICPR), 2016, pp. 187-191.
    [Bibtex]
    @inproceedings{oussama2016icpr,
    author = {Zayene, Oussama and Hajjej, Nadia and Masmoudi Touj, Sameh and Ben Mansour, Soumaya and Hennebert, Jean and Ingold, Rolf and Amara, Najoua},
    booktitle = {23rd International Conference on Pattern Recognition (ICPR)},
    year = {2016},
    month = {12},
    pages = {187-191},
    title = {ICPR2016 contest on Arabic Text detection and Recognition in video frames - AcTiVComp},
    doi = {10.1109/ICPR.2016.7899631}
    }
  • [PDF] [DOI] O. Zayene, M. Seuret, S. M. Touj, J. Hennebert, R. Ingold, and N. E. B. Amara, "Text Detection in Arabic News Video Based on SWT Operator and Convolutional Auto-Encoders," in 2016 12th IAPR Workshop on Document Analysis Systems (DAS), 2016, pp. 13-18.
    [Bibtex]
    @inproceedings{oussama2016das,
    author={O. {Zayene} and M. {Seuret} and S. M. {Touj} and J. {Hennebert} and R. {Ingold} and N. E. B. {Amara}},
    booktitle={2016 12th IAPR Workshop on Document Analysis Systems (DAS)},
    title={Text Detection in Arabic News Video Based on SWT Operator and Convolutional Auto-Encoders},
    year={2016},
    volume={},
    number={},
    pages={13-18},
    keywords={image coding;image filtering;natural language processing;text detection;transforms;unsupervised learning;video signal processing;visual databases;text specificities;antialiasing artifacts;horizontally aligned artificial text detection;Arabic news video;stroke width transform algorithm;SWT algorithm;convolutional autoencoder;text candidate components;geometric constraints;stroke width information;CAE;unsupervised feature learning method;textline candidates;Arabic-text-in-video database;AcTiV-DB;evaluation protocols;TV channels;compression artifacts;Feature extraction;Computer aided engineering;Image edge detection;Learning systems;Training;Filtering algorithms;Support vector machines;Arabic text detection;SWT operator;CAE;AcTiV-DB},
    doi={10.1109/DAS.2016.80},
    ISSN={},
    month={April}
    }
  • [PDF] [DOI] O. Zayene, S. M. Touj, J. Hennebert, R. Ingold, and N. E. Ben Amara, "Data, protocol and algorithms for performance evaluation of text detection in Arabic news video," in 2016 2nd International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), 2016, pp. 258-263.
    [Bibtex] [Abstract]
    @inproceedings{oussama2016atsip,
    author={O. {Zayene} and S. M. {Touj} and J. {Hennebert} and R. {Ingold} and N. E. {Ben Amara}},
    booktitle={2016 2nd International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)},
    title={Data, protocol and algorithms for performance evaluation of text detection in Arabic news video},
    year={2016},
    volume={},
    number={},
    pages={258-263},
    abstract={Benchmark datasets and their corresponding evaluation protocols are commonly used by the computer vision community, in a variety of application domains, to assess the performance of existing systems. Even though text detection and recognition in video has seen much progress in recent years, relatively little work has been done to propose standardized annotations and evaluation protocols especially for Arabic Video-OCR systems. In this paper, we present a framework for evaluating text detection in videos. Additionally, dataset, ground-truth annotations and evaluation protocols, are provided for Arabic text detection. Moreover, two published text detection algorithms are tested on a part of the AcTiV database and evaluated using a set of the proposed evaluation protocols.},
    keywords={computer vision;natural language processing;optical character recognition;performance evaluation;text detection;video signal processing;performance evaluation;text detection;Arabic news video;computer vision;Arabic video-OCR system;Protocols;Databases;Image edge detection;Optical character recognition software;Detection algorithms;Detectors;XML;text detection;Evaluation Protocol;AcTiV database;Arabic Video-OCR},
    doi={10.1109/ATSIP.2016.7523079},
    ISSN={},
    month={March}
    }

    Benchmark datasets and their corresponding evaluation protocols are commonly used by the computer vision community, in a variety of application domains, to assess the performance of existing systems. Even though text detection and recognition in video has seen much progress in recent years, relatively little work has been done to propose standardized annotations and evaluation protocols especially for Arabic Video-OCR systems. In this paper, we present a framework for evaluating text detection in videos. Additionally, dataset, ground-truth annotations and evaluation protocols, are provided for Arabic text detection. Moreover, two published text detection algorithms are tested on a part of the AcTiV database and evaluated using a set of the proposed evaluation protocols.
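
    The paper defines its own protocols for the AcTiV database; as a point of reference, a common generic baseline for detection evaluation is greedy one-to-one matching of detected and ground-truth boxes at an IoU threshold. The sketch below shows that generic baseline, not the AcTiV protocol.

    def iou(a, b):
        """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    def precision_recall(detections, ground_truth, thresh=0.5):
        """Greedy one-to-one matching at an IoU threshold."""
        unmatched = list(ground_truth)
        tp = 0
        for det in detections:
            best = max(unmatched, key=lambda gt: iou(det, gt), default=None)
            if best is not None and iou(det, best) >= thresh:
                unmatched.remove(best)
                tp += 1
        prec = tp / len(detections) if detections else 0.0
        rec = tp / len(ground_truth) if ground_truth else 0.0
        return prec, rec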

  • E. Bach, B. Wolf, J. Oldenburg, C. Müller, and S. Rost, "Identification of deep intronic variants in 15 haemophilia A patients by next generation sequencing of the whole factor VIII gene," Thrombosis and Haemostasis, p. 10, 2015.
    [Bibtex] [Abstract]
    @article{Elisa:2175,
    author = "Bach,Elisa and Wolf, Beat and Oldenburg, Johannes and Müller, Clemens and Rost, Simone",
    title = "Identification of deep intronic variants in 15 haemophilia
    A patients by next generation sequencing of the whole factor
    VIII gene.: Thrombosis and Haemostasis",
    year = "2015",
    journal = "Thrombosis and Haemostasis",
    pages = "10",
    issn = "0340-6245",
    abstract = "Current screening methods for factor VIII gene (F8)
    mutations can reveal the causative alteration in the vast
    majority of haemophilia A patients. Yet, standard diagnostic
    methods fail in about 2% of cases. This study aimed at
    analysing the entire intronic sequences of the F8 gene in 15
    haemophilia A patients by next generation sequencing. All
    patients had a mild to moderate phenotype and no mutation in
    the coding sequence and splice sites of the F8 gene could be
    diagnosed so far. Next generation sequencing data revealed
    23 deep intronic candidate variants in several F8 introns,
    including six recurrent variants and three variants that
    have been described before. One patient additionally showed
    a deletion of 9.2 kb in intron 1, mediated by Alu-type
    repeats. Several bioinformatic tools were used to score the
    variants in comparison to known pathogenic F8 mutations in
    order to predict their deleteriousness. Pedigree analyses
    showed a correct segregation pattern for three of the
    presumptive mutations. In each of the 15 patients analysed,
    at least one deep intronic variant in the F8 gene was
    identified and predicted to alter F8 mRNA splicing. Reduced
    F8 mRNA levels and/or stability would be well compatible
    with the patients' mild to moderate haemophilia A
    phenotypes. The next generation sequencing approach used
    proved an efficient method to screen the complete F8 gene
    and could be applied as a one-stop sequencing method for
    molecular diagnostics of haemophilia A.",
    keywords = "factor VIII",
    keywords = "haemophilia A",
    keywords = "next generation sequencing",
    keywords = "Alternative splice sites",
    keywords = "deep intronic variant",
    }

    Current screening methods for factor VIII gene (F8) mutations can reveal the causative alteration in the vast majority of haemophilia A patients. Yet, standard diagnostic methods fail in about 2% of cases. This study aimed at analysing the entire intronic sequences of the F8 gene in 15 haemophilia A patients by next generation sequencing. All patients had a mild to moderate phenotype and no mutation in the coding sequence and splice sites of the F8 gene could be diagnosed so far. Next generation sequencing data revealed 23 deep intronic candidate variants in several F8 introns, including six recurrent variants and three variants that have been described before. One patient additionally showed a deletion of 9.2 kb in intron 1, mediated by Alu-type repeats. Several bioinformatic tools were used to score the variants in comparison to known pathogenic F8 mutations in order to predict their deleteriousness. Pedigree analyses showed a correct segregation pattern for three of the presumptive mutations. In each of the 15 patients analysed, at least one deep intronic variant in the F8 gene was identified and predicted to alter F8 mRNA splicing. Reduced F8 mRNA levels and/or stability would be well compatible with the patients' mild to moderate haemophilia A phenotypes. The next generation sequencing approach used proved an efficient method to screen the complete F8 gene and could be applied as a one-stop sequencing method for molecular diagnostics of haemophilia A.

  • A. Bou Hernandez, A. Fischer, and R. Plamondon, "Omega-Lognormal Analysis of Oscillatory Movements as a Function of Brain Stroke Risk Factors," in Proc. 17th Conf. of the International Graphonomics Society, 2015, p. 59–62.
    [Bibtex]
    @inproceedings{bou15omega,
    Author = {A. {Bou Hernandez} and A. Fischer and R. Plamondon},
    Booktitle = {Proc. 17th Conf. of the International Graphonomics Society},
    Date-Added = {2017-01-17 10:41:35 +0000},
    Date-Modified = {2017-01-17 10:41:35 +0000},
    Pages = {59--62},
    Title = {Omega-Lognormal Analysis of Oscillatory Movements as a Function of Brain Stroke Risk Factors},
    Year = {2015}}
  • [PDF] K. Chen, M. Seuret, H. Wei, M. Liwicki, J. Hennebert, and R. Ingold, "Ground truth model, tool, and dataset for layout analysis of historical documents," in SPIE Electronic Imaging 2015, 2015.
    [Bibtex]
    @conference{chen2015spie,
    Author = {Kai Chen and Mathias Seuret and Hao Wei and Marcus Liwicki and Jean Hennebert and Rolf Ingold},
    Booktitle = {SPIE Electronic Imaging 2015},
    Keywords = {machine learning, image analysis, historical documents},
    Month = {February},
    Publisher = {SPIE Electronic Imaging},
    Title = {{G}round truth model, tool, and dataset for layout analysis of historical documents},
    Year = {2015}}
  • [PDF] [DOI] K. Chen, M. Seuret, M. Liwicki, J. Hennebert, and R. Ingold, "Page segmentation of historical document images with convolutional autoencoders," in 2015 13th International Conference on Document Analysis and Recognition (ICDAR), 2015, pp. 1011-1015.
    [Bibtex] [Abstract]
    @INPROCEEDINGS{chen2015:icdar,
    author={K. Chen and M. Seuret and M. Liwicki and J. Hennebert and R. Ingold},
    booktitle={2015 13th International Conference on Document Analysis and Recognition (ICDAR)},
    title={Page segmentation of historical document images with convolutional autoencoders},
    year={2015},
    pages={1011-1015},
    abstract={In this paper, we present an unsupervised feature learning method for page segmentation of historical handwritten documents available as color images. We consider page segmentation as a pixel labeling problem, i.e., each pixel is classified as either periphery, background, text block, or decoration. Traditional methods in this area rely on carefully hand-crafted features or large amounts of prior knowledge. In contrast, we apply convolutional autoencoders to learn features directly from pixel intensity values. Then, using these features to train an SVM, we achieve high quality segmentation without any assumption of specific topologies and shapes. Experiments on three public datasets demonstrate the effectiveness and superiority of the proposed approach.},
    keywords={document image processing;handwritten character recognition;history;image colour analysis;image segmentation;support vector machines;unsupervised learning;SVM;color images;convolutional autoencoders;historical document images;historical handwritten documents;page segmentation;pixel intensity values;pixel labeling problem;support vector machine;unsupervised feature learning method;Image segmentation;Robustness;Support vector machines},
    doi={10.1109/ICDAR.2015.7333914},
    month={Aug},
    pdf={http://www.hennebert.org/download/publications/icdar-2015-page-segmentation-of-historical-document-images-with-convolutional-autoencoders.pdf},}

    In this paper, we present an unsupervised feature learning method for page segmentation of historical handwritten documents available as color images. We consider page segmentation as a pixel labeling problem, i.e., each pixel is classified as either periphery, background, text block, or decoration. Traditional methods in this area rely on carefully hand-crafted features or large amounts of prior knowledge. In contrast, we apply convolutional autoencoders to learn features directly from pixel intensity values. Then, using these features to train an SVM, we achieve high quality segmentation without any assumption of specific topologies and shapes. Experiments on three public datasets demonstrate the effectiveness and superiority of the proposed approach.
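
    A minimal convolutional autoencoder of the kind this abstract describes, assuming PyTorch; the patch size and layer widths are illustrative choices, not the paper's architecture. The encoder output is the learned feature vector that would be fed to the SVM.

    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        """Tiny convolutional autoencoder trained on raw pixel patches;
        reconstruction is the unsupervised training signal."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
                nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1),
                nn.ReLU(),
                nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1),
                nn.Sigmoid(),
            )
        def forward(self, x):
            z = self.encoder(x)          # learned features
            return self.decoder(z), z

    model = ConvAutoencoder()
    x = torch.rand(4, 1, 28, 28)         # grey-level patches in [0, 1]
    recon, feats = model(x)
    loss = nn.functional.mse_loss(recon, x)  # reconstruction loss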

  • M. Diaz-Cabrera, A. Fischer, R. Plamondon, and M. A. Ferrer, "Towards an On-line Automatic Signature Verifier Using Only One Reference Per Signer," in Proc. 13th Int. Conf. on Document Analysis and Recognition, 2015, p. 631–635.
    [Bibtex]
    @inproceedings{diaz15towards,
    Author = {Diaz-Cabrera, M. and Fischer, A. and Plamondon, R. and Ferrer, M.A.},
    Booktitle = {Proc. 13th Int. Conf. on Document Analysis and Recognition},
    Date-Added = {2017-01-16 23:38:18 +0000},
    Date-Modified = {2017-01-16 23:38:18 +0000},
    Pages = {631--635},
    Title = {Towards an On-line Automatic Signature Verifier Using Only One Reference Per Signer},
    Year = {2015}}
  • A. Fischer and R. Plamondon, "A Dissimilarity Measure for On-Line Signature Verification Based on the Sigma-Lognormal Model," in Proc. 17th Conf. of the International Graphonomics Society, 2015, p. 83–86.
    [Bibtex]
    @inproceedings{fischer15adissimilarity,
    Author = {A. Fischer and R. Plamondon},
    Booktitle = {Proc. 17th Conf. of the International Graphonomics Society},
    Date-Added = {2017-01-17 10:41:47 +0000},
    Date-Modified = {2017-01-17 10:41:47 +0000},
    Pages = {83--86},
    Title = {A Dissimilarity Measure for On-Line Signature Verification Based on the Sigma-Lognormal Model},
    Year = {2015}}
  • A. Fischer, S. Uchida, V. Frinken, K. Riesen, and H. Bunke, "Improving Hausdorff edit distance using structural node context," in Proc. 10th Int. Workshop on Graph-based Representations in Pattern Recognition, 2015, p. 148–157.
    [Bibtex]
    @inproceedings{fischer15improving,
    Author = {A. Fischer and S. Uchida and V. Frinken and K. Riesen and H. Bunke},
    Booktitle = {Proc. 10th Int. Workshop on Graph-based Representations in Pattern Recognition},
    Date-Added = {2017-01-17 10:41:03 +0000},
    Date-Modified = {2017-01-17 10:41:03 +0000},
    Pages = {148--157},
    Title = {Improving {H}ausdorff edit distance using structural node context},
    Year = {2015}}
  • A. Fischer, M. Diaz-Cabrera, R. Plamondon, and M. A. Ferrer, "Robust Score Normalization for DTW-Based On-Line Signature Verification," in Proc. 13th Int. Conf. on Document Analysis and Recognition, 2015, p. 241–245.
    [Bibtex]
    @inproceedings{fischer15robust,
    Author = {Fischer, A. and Diaz-Cabrera, M. and Plamondon, R. and Ferrer, M.A.},
    Booktitle = {Proc. 13th Int. Conf. on Document Analysis and Recognition},
    Date-Added = {2017-01-16 23:38:37 +0000},
    Date-Modified = {2017-01-16 23:38:37 +0000},
    Pages = {241--245},
    Title = {Robust Score Normalization for {DTW}-Based On-Line Signature Verification},
    Year = {2015}}
  • [PDF] [DOI] C. Gisler, A. Ridi, J. Hennebert, R. N. Weinreb, and K. Mansouri, "Automated Detection and Quantification of Circadian Eye Blinks Using a Contact Lens Sensor," Translational Vision Science and Technology (TVST), vol. 4, iss. 1, pp. 1-10, 2015.
    [Bibtex] [Abstract]
    @article{gisler2015automated,
    Abstract = {Purpose: To detect and quantify eye blinks during 24-hour intraocular pressure (IOP) monitoring with a contact lens sensor (CLS). Methods: A total of 249 recordings of 24-hour IOP patterns from 202 participants using a CLS were included. Software was developed to automatically detect eye blinks, and wake and sleep periods. The blink detection method was based on detection of CLS signal peaks greater than a threshold proportional to the signal amplitude. Three methods for automated detection of the sleep and wake periods were evaluated. These relied on blink detection and subsequent comparison of the local signal amplitude with a threshold proportional to the mean signal amplitude. These methods were compared to manual sleep/wake verification. In a pilot, simultaneous video recording of 10 subjects was performed to compare the software to observer-measured blink rates. Results: Mean (SD) age of participants was 57.4 $\pm$ 16.5 years (males, 49.5%). There was excellent agreement between software-detected number of blinks and visually measured blinks for both observers (intraclass correlation coefficient [ICC], 0.97 for observer 1; ICC, 0.98 for observer 2). The CLS measured a mean blink frequency of 29.8 $\pm$ 15.4 blinks/min, a blink duration of 0.26 $\pm$ 0.21 seconds and an interblink interval of 1.91 $\pm$ 2.03 seconds. The best method for identifying sleep periods had an accuracy of 95.2 $\pm$ 0.5%. Conclusions: Automated analysis of CLS 24-hour IOP recordings can accurately quantify eye blinks, and identify sleep and wake periods. Translational Relevance: This study sheds new light on the potential importance of eye blinks in glaucoma and may contribute to improved understanding of circadian IOP characteristics.},
    Author = {Christophe Gisler and Antonio Ridi and Jean Hennebert and Robert N Weinreb and Kaweh Mansouri},
    Doi = {10.1167/tvst.4.1.4},
    Journal = {Translational Vision Science and Technology (TVST)},
    Keywords = {machine learning, bio-medical signals, glaucoma prediction},
    Month = {January},
    Note = {Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.},
    Number = {1},
    Pages = {1-10},
    Publisher = {The Association for Research in Vision and Ophthalmology},
    Title = {{A}utomated {D}etection and {Q}uantification of {C}ircadian {E}ye {B}links {U}sing a {C}ontact {L}ens {S}ensor},
    Pdf = {http://www.hennebert.org/download/publications/TVST-2015-Automated-Detection-and-Quantification-of-Circadian-Eye-Blinks-Using-a-Contact-Lens-Sensor.pdf},
    Volume = {4},
    Year = {2015},
    Bdsk-Url-2 = {http://dx.doi.org/10.1167/tvst.4.1.4}}

    Purpose: To detect and quantify eye blinks during 24-hour intraocular pressure (IOP) monitoring with a contact lens sensor (CLS). Methods: A total of 249 recordings of 24-hour IOP patterns from 202 participants using a CLS were included. Software was developed to automatically detect eye blinks, and wake and sleep periods. The blink detection method was based on detection of CLS signal peaks greater than a threshold proportional to the signal amplitude. Three methods for automated detection of the sleep and wake periods were evaluated. These relied on blink detection and subsequent comparison of the local signal amplitude with a threshold proportional to the mean signal amplitude. These methods were compared to manual sleep/wake verification. In a pilot, simultaneous video recording of 10 subjects was performed to compare the software to observer-measured blink rates. Results: Mean (SD) age of participants was 57.4 $\pm$ 16.5 years (males, 49.5%). There was excellent agreement between software-detected number of blinks and visually measured blinks for both observers (intraclass correlation coefficient [ICC], 0.97 for observer 1; ICC, 0.98 for observer 2). The CLS measured a mean blink frequency of 29.8 $\pm$ 15.4 blinks/min, a blink duration of 0.26 $\pm$ 0.21 seconds and an interblink interval of 1.91 $\pm$ 2.03 seconds. The best method for identifying sleep periods had an accuracy of 95.2 $\pm$ 0.5%. Conclusions: Automated analysis of CLS 24-hour IOP recordings can accurately quantify eye blinks, and identify sleep and wake periods. Translational Relevance: This study sheds new light on the potential importance of eye blinks in glaucoma and may contribute to improved understanding of circadian IOP characteristics.
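
    The detection rule from the Methods section, peaks greater than a threshold proportional to the signal amplitude, can be sketched with scipy; the proportionality constant rel_height and the median centring are illustrative assumptions, not the study's software.

    import numpy as np
    from scipy.signal import find_peaks

    def detect_blinks(signal, fs, rel_height=0.5):
        """Detect blink peaks in a CLS signal and derive summary
        statistics. fs is the sampling frequency in Hz."""
        centred = signal - np.median(signal)
        thresh = rel_height * np.max(np.abs(centred))
        peaks, _ = find_peaks(centred, height=thresh)
        times = peaks / fs                                   # seconds
        blink_rate = 60.0 * len(peaks) / (len(signal) / fs)  # blinks/min
        interblink = np.diff(times)                          # seconds
        return times, blink_rate, interblink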

  • Y. Lu, I. Comsa, P. Kuonen, and B. Hirsbrunner, "Adaptive data aggregation with probabilistic routing in wireless sensor networks," Wireless Networks, pp. 1-15, 2015.
    [Bibtex]
    @article{Yao:2308,
    Author = {Yao Lu and Ioan-Sorin Comsa and Pierre Kuonen and Beat Hirsbrunner},
    Issn = {1022-0038},
    Journal = {Wireless Networks},
    Keywords = {Genetic algorithm},
    Month = {nov},
    Pages = {1-15},
    Title = {Adaptive data aggregation with probabilistic routing in wireless sensor networks},
    Year = {2015}}
  • [DOI] Y. Lu, I. S. Comsa, P. Kuonen, and B. Hirsbrunner, "Dynamic data aggregation protocol based on multiple objective tree in Wireless Sensor Networks," in 2015 IEEE Tenth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015, pp. 1-7.
    [Bibtex] [Abstract]
    @INPROCEEDINGS{Yao:7106965,
    author={Y. Lu and I. S. Comsa and P. Kuonen and B. Hirsbrunner},
    booktitle={2015 IEEE Tenth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP)},
    title={Dynamic data aggregation protocol based on multiple objective tree in Wireless Sensor Networks},
    year={2015},
    pages={1-7},
    abstract={Data aggregation has been widely applied as an efficient technique in order to reduce the data redundancy and the communication load in Wireless Sensor Networks (WSNs). However, for dynamic scenarios, structured protocols may incur high overhead in the construction and the maintenance of the static structure. Without the explicit downstream and upstream relationship of nodes, it is also difficult to obtain high aggregation efficiency by using structure-free protocols. In order to address these aspects, we propose a semi-structured protocol based on the multi-objective tree. The routing scheme can explore the optimal structure by using the Ant Colony Optimization (ACO). Moreover, by using the prediction model for the arriving packets based on the sliding window, the adaptive timing policy can reduce the transmission delay and enhance the aggregation probability. Therefore, the packet transmission converges from both spatial and temporal points of view for the data aggregation procedure. Finally, simulation results validate the feasibility and the high efficiency of the novel protocol when compared with other existing approaches.},
    keywords={ant colony optimisation;redundancy;routing protocols;trees (mathematics);wireless sensor networks;ACO;WSNs;adaptive timing policy;aggregation probability;ant colony optimization;arriving packets;communication load;data redundancy reduction;dynamic data aggregation protocol;multiple objective tree;packet transmission;prediction model;routing scheme;sliding window;static structure maintenance;structure-free protocols;structured protocols;transmission delay reduction;wireless sensor networks;Delays;Energy consumption;Protocols;Routing;Topology;Wireless sensor networks;ACO;Data Aggregation;Sliding Window;WSNs},
    doi={10.1109/ISSNIP.2015.7106965},
    month={April},}

    Data aggregation has been widely applied as an efficient technique to reduce data redundancy and communication load in Wireless Sensor Networks (WSNs). However, in dynamic scenarios, structured protocols may incur high overhead in the construction and maintenance of the static structure. Without an explicit downstream and upstream relationship between nodes, it is also difficult to obtain high aggregation efficiency with structure-free protocols. To address these aspects, we propose a semi-structured protocol based on a multi-objective tree. The routing scheme explores the optimal structure using Ant Colony Optimization (ACO). Moreover, by using a sliding-window prediction model for arriving packets, the adaptive timing policy can reduce the transmission delay and enhance the aggregation probability. The packet transmission therefore converges from both a spatial and a temporal point of view for the data aggregation procedure. Finally, simulation results validate the feasibility and the high efficiency of the novel protocol compared with other existing approaches.
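
    The sliding-window timing idea lends itself to a small sketch. The following is a hedged illustration, not the paper's protocol: WINDOW and DELAY_BUDGET are hypothetical parameters, and the mean of recent inter-arrival times stands in for the paper's prediction model.

    from collections import deque

    WINDOW = 8            # number of recent inter-arrival samples kept (assumed)
    DELAY_BUDGET = 0.05   # max extra delay (s) a node may spend waiting (assumed)

    class AdaptiveTimer:
        def __init__(self):
            self.gaps = deque(maxlen=WINDOW)   # sliding window of inter-arrivals
            self.last_t = None

        def on_packet(self, t):
            """Record an arrival time; return True if waiting to aggregate pays off."""
            if self.last_t is not None:
                self.gaps.append(t - self.last_t)
            self.last_t = t
            if not self.gaps:
                return False   # no history yet: forward immediately
            predicted_gap = sum(self.gaps) / len(self.gaps)
            # Delay transmission only if the next packet should arrive soon.
            return predicted_gap <= DELAY_BUDGET

    timer = AdaptiveTimer()
    for t in [0.00, 0.01, 0.02, 0.03, 0.50]:
        print(t, "aggregate" if timer.on_packet(t) else "send now")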

  • Y. Lu, I. Comsa, P. Kuonen, and B. Hirsbrunner, "Probabilistic Data Aggregation Protocol Based on ACO-GA Hybrid Approach in Wireless Sensor Networks," 8th IFIP Wireless and Mobile Networking Conference, pp. 235-238, 2015.
    [Bibtex]
    @article{Yao:2307,
    Author = {Yao Lu and Ioan-Sorin Comsa and Pierre Kuonen and Beat Hirsbrunner},
    Journal = {8th IFIP Wireless and Mobile Networking Conference},
    Month = {oct},
    Pages = {235 - 238},
    Title = {Probabilistic Data Aggregation Protocol Based on ACO-GA Hybrid Approach in Wireless Sensor Networks},
    Year = {2015}}
  • A. Ridi, N. Zarkadis, C. Gisler, and J. Hennebert, "Duration Models for Activity Recognition and Prediction in Buildings using Hidden Markov Models," in Proceedings of the 2015 International Conference on Data Science and Advanced Analytics (DSAA 2015), Paris, France, 2015, p. 10.
    [Bibtex] [Abstract]
    @inproceedings{RidiDSAA2015,
    abstract = {Activity recognition and prediction in buildings can have multiple positive effects in buildings: improve elderly monitoring, detect intrusions, maximize energy savings and optimize occupant comfort. In this paper we apply human activity recognition by using data coming from a network of motion and door sensors distributed in a Smart Home environment. We use Hidden Markov Models (HMM) as the basis of a machine learning algorithm on data collected over an 8-month period from a single-occupant home available as part of the WSU CASAS Smart Home project. In the first implementation the HMM models 24 hours of activities and classifies them in 8 distinct activity categories with an accuracy rate of 84.6{\%}. To improve the identification rate and to help detect potential abnormalities related with the duration of an activity (i.e. when certain activities last too much), we implement minimum duration modeling where the algorithm is forced to remain in a certain state for a specific amount of time. Two subsequent implementations of the minimum duration HMM (mean-based length modeling and quantile length modeling) yield a further 2{\%} improvement of the identification rate. To predict the sequence of activities in the future, Artificial Neural Networks (ANN) are employed and identified activities clustered in 3 principal activity groups with an average accuracy rate of 71-77.5{\%}, depending on the forecasting window. To explore the energy savings potential, we apply thermal dynamic simulations on buildings in central European climate for a period of 65 days during the winter and we obtain energy savings for space heating of up to 17{\%} with 3-hour forecasting for two different types of buildings.},
    address = {Paris, France},
    author = {Ridi, Antonio and Zarkadis, Nikos and Gisler, Christophe and Hennebert, Jean},
    booktitle = {Proceedings of the 2015 International Conference on Data Science and Advanced Analytics (DSAA 2015)},
    editor = {Gaussier, Eric and Cao, Longbing},
    isbn = {9781467382731},
    keywords = {Activity recognition,Energy savings in buildings,Expanded Hidden,Markov Models,Minimum Duration modeling,activity recognition,energy savings in buildings,expanded hidden,markov models,minimum duration modeling},
    mendeley-tags = {Activity recognition,Energy savings in buildings,Expanded Hidden,Markov Models,Minimum Duration modeling},
    pages = {10},
    publisher = {IEEE Computer Society},
    title = {{Duration Models for Activity Recognition and Prediction in Buildings using Hidden Markov Models}},
    url = {http://dsaa2015.lip6.fr},
    year = {2015}
    }

    Activity recognition and prediction in buildings can have multiple positive effects: improving elderly monitoring, detecting intrusions, maximizing energy savings and optimizing occupant comfort. In this paper we apply human activity recognition using data coming from a network of motion and door sensors distributed in a Smart Home environment. We use Hidden Markov Models (HMM) as the basis of a machine learning algorithm on data collected over an 8-month period from a single-occupant home available as part of the WSU CASAS Smart Home project. In the first implementation the HMM models 24 hours of activities and classifies them into 8 distinct activity categories with an accuracy rate of 84.6%. To improve the identification rate and to help detect potential abnormalities related to the duration of an activity (i.e. when certain activities last too long), we implement minimum duration modeling, where the algorithm is forced to remain in a certain state for a specific amount of time. Two subsequent implementations of the minimum duration HMM (mean-based length modeling and quantile length modeling) yield a further 2% improvement of the identification rate. To predict the sequence of activities in the future, Artificial Neural Networks (ANN) are employed and the identified activities are clustered into 3 principal activity groups, with an average accuracy rate of 71-77.5%, depending on the forecasting window. To explore the energy savings potential, we apply thermal dynamic simulations on buildings in a central European climate for a period of 65 days during the winter and obtain energy savings for space heating of up to 17% with 3-hour forecasting for two different types of buildings.
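
    Minimum duration modeling is commonly realized by expanding each HMM state into a chain of substates that can only be left from the tail. The sketch below shows that generic construction under illustrative assumptions (a toy 2-state transition matrix, min_dur chosen arbitrarily); it is not the authors' implementation.

    import numpy as np

    def expand_min_duration(A, min_dur):
        """Expand an NxN transition matrix so each state lasts >= min_dur steps."""
        n = A.shape[0]
        A_exp = np.zeros((n * min_dur, n * min_dur))
        for i in range(n):
            base = i * min_dur
            # Chain the substates: substate k moves to k+1 with probability 1.
            for k in range(min_dur - 1):
                A_exp[base + k, base + k + 1] = 1.0
            # Only the last substate carries the original outgoing transitions,
            # each pointing at the head of the target state's chain.
            last = base + min_dur - 1
            for j in range(n):
                A_exp[last, j * min_dur] = A[i, j]
            if min_dur > 1:
                # Keep the self-loop on the tail substate so that leaving and
                # re-entering the state always costs min_dur steps again.
                A_exp[last, last] = A[i, i]
                A_exp[last, base] = 0.0
        return A_exp

    A = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    print(expand_min_duration(A, min_dur=3))   # 6x6 matrix, rows still sum to 1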

  • [PDF] [DOI] A. Ridi, C. Gisler, and J. Hennebert, "Processing smart plug signals using machine learning," in Wireless Communications and Networking Conference Workshops (WCNCW), 2015 IEEE, 2015, pp. 75-80.
    [Bibtex] [Abstract]
    @conference{ridi2015wcnc,
    Abstract = {The automatic identification of appliances through the analysis of their electricity consumption has several purposes in Smart Buildings including better understanding of the energy consumption, appliance maintenance and indirect observation of human activities. Electric signatures are typically acquired with IoT smart plugs integrated or added to wall sockets. We observe an increasing number of research teams working on this topic under the umbrella Intrusive Load Monitoring. This term is used as opposition to Non-Intrusive Load Monitoring that refers to the use of global smart meters. We first present the latest evolutions of the ACS-F database, a collections of signatures that we made available for the scientific community. The database contains different brands and/or models of appliances with up to 450 signatures. Two evaluation protocols are provided with the database to benchmark systems able to recognise appliances from their electric signature. We present in this paper two additional evaluation protocols intended to measure the impact of the analysis window length. Finally, we present our current best results using machine learning approaches on the 4 evaluation protocols.},
    Author = {A. Ridi and C. Gisler and J. Hennebert},
    Booktitle = {Wireless Communications and Networking Conference Workshops (WCNCW), 2015 IEEE},
    Doi = {10.1109/WCNCW.2015.7122532},
    Keywords = {learning,artificial intelligence, power engineering computing,power supplies to apparatus,ACS-F database,IoT smart plugs,machine learning approaches,smart buildings,smart plug signals,umbrella intrusive load monitoring,Accuracy,Databases,Hidden Markov mod},
    Month = {March},
    Note = {Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.},
    Pages = {75-80},
    Title = {{P}rocessing smart plug signals using machine learning},
    Pdf = {http://www.hennebert.org/download/publications/wcncw-2015-Processing-smart-plug-signals-using-machine-learning.pdf},
    Year = {2015},
    Bdsk-Url-2 = {http://dx.doi.org/10.1109/WCNCW.2015.7122532}}

    The automatic identification of appliances through the analysis of their electricity consumption has several purposes in Smart Buildings, including a better understanding of the energy consumption, appliance maintenance and indirect observation of human activities. Electric signatures are typically acquired with IoT smart plugs integrated or added to wall sockets. We observe an increasing number of research teams working on this topic under the umbrella term Intrusive Load Monitoring. This term is used in opposition to Non-Intrusive Load Monitoring, which refers to the use of global smart meters. We first present the latest evolutions of the ACS-F database, a collection of signatures that we made available for the scientific community. The database contains different brands and/or models of appliances with up to 450 signatures. Two evaluation protocols are provided with the database to benchmark systems able to recognise appliances from their electric signature. We present in this paper two additional evaluation protocols intended to measure the impact of the analysis window length. Finally, we present our current best results using machine learning approaches on the 4 evaluation protocols.
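
    To make the notion of analysis window length concrete, here is a minimal sketch that slices a power signature into fixed-length windows and computes simple per-window features for a downstream classifier. The window length, the features and the synthetic signal are illustrative assumptions, not the ACS-F protocols.

    import numpy as np

    def window_features(power, win_len):
        """Split a 1-D power signal into windows and compute basic features."""
        n_win = len(power) // win_len
        feats = []
        for w in range(n_win):
            seg = power[w * win_len:(w + 1) * win_len]
            # Mean level, variability and dynamic range of each window.
            feats.append([seg.mean(), seg.std(), seg.max() - seg.min()])
        return np.array(feats)

    # Synthetic stand-in for an appliance signature sampled once per second.
    power = np.abs(np.random.default_rng(0).normal(60, 5, size=600))
    print(window_features(power, win_len=100).shape)   # (6, 3): 6 windows, 3 features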

  • [PDF] [DOI] A. Ridi, C. Gisler, and J. Hennebert, "User Interaction Event Detection in the Context of Appliance Monitoring," in The 13th International Conference on Pervasive Computing and Communications (PerCom 2015), Workshop on Pervasive Energy Services (PerEnergy), 2015, pp. 323-328.
    [Bibtex] [Abstract]
    @conference{ridi2015percom,
    Abstract = {In this paper we assess about the recognition of User Interaction events when handling electrical devices. This work is placed in the context of Intrusive Load Monitoring used for appliance recognition. ILM implies several Smart Metering Sensors to be placed inside the environment under analysis (in our case we have one Smart Metering Sensor per device). Our existing system is able to recognise the appliance class (as coffee machine, printer, etc.) and the sequence of states (typically Active / Non-Active) by using Hidden Markov Models as machine learning algorithm. In this paper we add a new layer to our system architecture called User Interaction Layer, aimed to infer the moments (called User Interaction events) during which the user interacts with the appliance. This layer uses as input the information coming from HMM (i.e. the recognised appliance class and the sequence of states). The User Interaction events are derived from the analysis of the transitions in the sequences of states and a ruled-based system adds or removes these events depending on the recognised class. Finally we compare the list of events with the ground truth and we obtain three different accuracy rates: (i) 96.3% when the correct model and the real sequence of states are known a priori, (ii) 82.5% when only the correct model is known and (iii) 80.5% with no a priori information.},
    Author = {Antonio Ridi and Christophe Gisler and Jean Hennebert},
    Booktitle = {The 13th International Conference on Pervasive Computing and Communications (PerCom 2015), Workshop on Pervasive Energy Services (PerEnergy)},
    Doi = {10.1109/PERCOMW.2015.7134056},
    Keywords = {domestic appliances;hidden Markov models;home automation;human computer interaction;learning (artificial intelligence);smart meters;HMM;ILM;appliance monitoring;appliance recognition;electrical devices;hidden Markov models;intrusive load monitoring;machine learning algorithm;ruled-based system;smart metering sensors;user interaction event detection;user interaction layer;Accuracy;Databases;Hidden Markov models;Home appliances;Mobile handsets;Monitoring;Senior citizens;Appliance Identification;Intrusive Load Monitoring (ILM);User-Appliance Interaction},
    Note = {Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.},
    Pages = {323-328},
    Title = {{U}ser {I}nteraction {E}vent {D}etection in the {C}ontext of {A}ppliance {M}onitoring},
    Pdf = {http://www.hennebert.org/download/publications/percom-2015-User-Interaction-Event-Detection-in-the-Context-of-Appliance-Monitoring.pdf},
    Year = {2015},
    Bdsk-Url-2 = {http://dx.doi.org/10.1109/PERCOMW.2015.7134056}}

    In this paper we address the recognition of User Interaction events when handling electrical devices. This work is placed in the context of Intrusive Load Monitoring (ILM) used for appliance recognition. ILM requires several Smart Metering Sensors to be placed inside the environment under analysis (in our case, one Smart Metering Sensor per device). Our existing system is able to recognise the appliance class (such as coffee machine, printer, etc.) and the sequence of states (typically Active / Non-Active) by using Hidden Markov Models as the machine learning algorithm. In this paper we add a new layer to our system architecture, called the User Interaction Layer, aimed at inferring the moments (called User Interaction events) during which the user interacts with the appliance. This layer takes as input the information coming from the HMM (i.e. the recognised appliance class and the sequence of states). The User Interaction events are derived from the analysis of the transitions in the sequences of states, and a rule-based system adds or removes these events depending on the recognised class. Finally, we compare the list of events with the ground truth and obtain three different accuracy rates: (i) 96.3% when the correct model and the real sequence of states are known a priori, (ii) 82.5% when only the correct model is known and (iii) 80.5% with no a priori information.
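
    The transition-plus-rules idea can be sketched compactly. In the illustration below, the state labels ("N"/"A"), the rule table and the appliance behaviours are hypothetical stand-ins for the paper's User Interaction Layer, not its actual rules.

    # Hypothetical class-dependent rules applied to the raw event list.
    RULES = {
        "coffee machine": lambda events: events,    # keep every activation
        "printer": lambda events: events[:1],       # assume one interaction per job
    }

    def interaction_events(states, appliance):
        """Return indices where the appliance switches from Non-Active to Active."""
        events = [i for i in range(1, len(states))
                  if states[i - 1] == "N" and states[i] == "A"]
        # A rule-based step then keeps or prunes events per appliance class.
        return RULES.get(appliance, lambda e: e)(events)

    seq = list("NNAAANNAAN")
    print(interaction_events(seq, "coffee machine"))   # [2, 7]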

  • K. Riesen, M. Ferrer, A. Fischer, and H. Bunke, "Approximation of graph edit distance in quadratic time," in Proc. 10th Int. Workshop on Graph-based Representations in Pattern Recognition, 2015, pp. 3–12.
    [Bibtex]
    @inproceedings{riesen15approximation,
    Author = {K. Riesen and M. Ferrer and A. Fischer and H. Bunke},
    Booktitle = {Proc. 10th Int. Workshop on Graph-based Representations in Pattern Recognition},
    Date-Added = {2017-01-17 10:41:22 +0000},
    Date-Modified = {2017-01-17 10:41:22 +0000},
    Pages = {3--12},
    Title = {Approximation of graph edit distance in quadratic time},
    Year = {2015}}
  • M. Seuret, A. Fischer, A. Garz, M. Liwicki, and R. Ingold, "Clustering Historical Documents Based on the Reconstruction Error of Autoencoders," in Proc. 3rd Int. Workshop on Historical Document Imaging and Processing, 2015, pp. 85–91.
    [Bibtex]
    @inproceedings{seuret15clustering,
    Author = {Seuret, M. and Fischer, A. and Garz, A. and Liwicki, M. and Ingold, R.},
    Booktitle = {Proc. 3rd Int. Workshop on Historical Document Imaging and Processing},
    Date-Added = {2017-01-17 09:37:18 +0000},
    Date-Modified = {2017-01-17 10:39:35 +0000},
    Pages = {85--91},
    Title = {Clustering Historical Documents Based on the Reconstruction Error of Autoencoders},
    Year = {2015}}
  • H. Wei, M. Seuret, K. Chen, A. Fischer, M. Liwicki, and R. Ingold, "Selecting Autoencoder Features for Layout Analysis of Historical Documents," in Proc. 3rd Int. Workshop on Historical Document Imaging and Processing, 2015, pp. 55–62.
    [Bibtex]
    @inproceedings{wei15selecting,
    Author = {Wei, H. and Seuret, M. and Chen, K. and Fischer, A. and Liwicki, M. and Ingold, R.},
    Booktitle = {Proc. 3rd Int. Workshop on Historical Document Imaging and Processing},
    Date-Added = {2017-01-17 10:38:06 +0000},
    Date-Modified = {2017-01-17 10:39:57 +0000},
    Pages = {55--62},
    Title = {Selecting Autoencoder Features for Layout Analysis of Historical Documents},
    Year = {2015}}
  • [PDF] [DOI] B. Wicht and J. Hennebert, "Mixed handwritten and printed digit recognition in Sudoku with Convolutional Deep Belief Network," in 2015 13th International Conference on Document Analysis and Recognition (ICDAR), 2015, pp. 861-865.
    [Bibtex] [Abstract]
    @INPROCEEDINGS{wicht:icdar2015,
    author={B. Wicht and J. Hennebert},
    booktitle={2015 13th International Conference on Document Analysis and Recognition (ICDAR)},
    title={Mixed handwritten and printed digit recognition in Sudoku with Convolutional Deep Belief Network},
    year={2015},
    pages={861-865},
    abstract={In this paper, we propose a method to recognize Sudoku puzzles containing both handwritten and printed digits from images taken with a mobile camera. The grid and the digits are detected using various image processing techniques including Hough Transform and Contour Detection. A Convolutional Deep Belief Network is then used to extract high-level features from raw pixels. The features are finally classified using a Support Vector Machine. One of the scientific question addressed here is about the capability of the Deep Belief Network to learn extracting features on mixed inputs, printed and handwritten. The system is thoroughly tested on a set of 200 Sudoku images captured with smartphone cameras under varying conditions, e.g. distortion and shadows. The system shows promising results with 92% of the cells correctly classified. When cell detection errors are not taken into account, the cell recognition accuracy increases to 97.7%. Interestingly, the Deep Belief Network is able to handle the complex conditions often present on images taken with phone cameras and the complexity of mixed printed and handwritten digits.},
    keywords={Hough transforms;belief networks;handwriting recognition;image sensors;mobile computing;support vector machines;Hough Transform;Sudoku;Sudoku images;Sudoku puzzles;contour detection;convolutional deep belief network;handwritten digits;image processing techniques;mixed handwritten recognition;printed digit recognition;printed digits;smartphone cameras;support vector machine;Camera-based OCR;Convolution;Convolutional Deep Belief Network;Text Detection;Text Recognition},
    doi={10.1109/ICDAR.2015.7333884},
    month={Aug},
    Pdf = {http://www.hennebert.org/download/publications/icdar-2015-mixed-handwritten-and-printed-digit-recognition-in-sudoku-with-convolutional-deep-belief-network.pdf},
    }

    In this paper, we propose a method to recognize Sudoku puzzles containing both handwritten and printed digits from images taken with a mobile camera. The grid and the digits are detected using various image processing techniques, including the Hough Transform and Contour Detection. A Convolutional Deep Belief Network is then used to extract high-level features from raw pixels. The features are finally classified using a Support Vector Machine. One of the scientific questions addressed here is the capability of the Deep Belief Network to learn to extract features from mixed inputs, printed and handwritten. The system is thoroughly tested on a set of 200 Sudoku images captured with smartphone cameras under varying conditions, e.g. distortion and shadows. The system shows promising results, with 92% of the cells correctly classified. When cell detection errors are not taken into account, the cell recognition accuracy increases to 97.7%. Interestingly, the Deep Belief Network is able to handle the complex conditions often present in images taken with phone cameras and the complexity of mixed printed and handwritten digits.
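
    The grid-detection step can be sketched with OpenCV's Hough transform. The sketch below synthesizes a clean grid image in place of a photographed puzzle, and the Canny thresholds and Hough vote threshold are illustrative assumptions; the paper's CDBN feature extractor and SVM stages are not reproduced here.

    import cv2
    import numpy as np

    # Synthesize a 9x9 grid image standing in for a photographed puzzle.
    img = np.full((450, 450), 255, dtype=np.uint8)
    for k in range(0, 450, 50):
        cv2.line(img, (0, k), (449, k), 0, 2)   # horizontal grid lines
        cv2.line(img, (k, 0), (k, 449), 0, 2)   # vertical grid lines

    edges = cv2.Canny(img, 50, 150)
    # Each grid line accumulates many votes in its (rho, theta) bin.
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)
    thetas = lines[:, 0, 1]
    horizontal = np.sum(np.abs(thetas - np.pi / 2) < 0.1)       # theta near pi/2
    vertical = np.sum((thetas < 0.1) | (thetas > np.pi - 0.1))  # theta near 0
    print(horizontal, "horizontal and", vertical, "vertical lines found")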

  • [PDF] B. Wolf, L. Monney, and P. Kuonen, "FriendComputing: Organic application centric distributed computing," Nesus 2015 workshop, 2015.
    [Bibtex] [Abstract]
    @article{Wolf:2173,
    Abstract = {Building Ultrascale computer systems is a hard problem, not yet solved and fully explored. Combining the computing resources of multiple organizations, often in different administrative domains with heterogeneous hardware and diverse demands on the system, requires new tools and frameworks to be put in place. During previous work we developed POP-Java, a Java programming language extension that allows to easily develop distributed applications in a heterogeneous environment. We now present an extension to the POP-Java language, that allows to create application centered networks in which any member can benefit from the computing power and storage capacity of its members. An accounting system is integrated, allowing the different members of the network to bill the usage of their resources to the other members, if so desired. The system is expanded through a similar process as seen in social networks, making it possible to use the resources of friend and friends of friends. Parts of the proposed system has been implemented as a prototype inside the POP-Java programming language.},
    Author = {Beat Wolf and Loic Monney and Pierre Kuonen},
    Journal = {Nesus 2015 workshop},
    Keywords = {Distributed computing},
    Month = {sep},
    Pdf = {http://e-archivo.uc3m.es/bitstream/handle/10016/22003/friendcomputing_NESUS_2015.pdf},
    Title = {FriendComputing: Organic application centric distributed computing},
    Year = {2015}}

    Building Ultrascale computer systems is a hard problem, not yet solved and fully explored. Combining the computing resources of multiple organizations, often in different administrative domains with heterogeneous hardware and diverse demands on the system, requires new tools and frameworks to be put in place. During previous work we developed POP-Java, a Java programming language extension that makes it easy to develop distributed applications in a heterogeneous environment. We now present an extension to the POP-Java language that makes it possible to create application-centered networks in which any member can benefit from the computing power and storage capacity of the other members. An accounting system is integrated, allowing the different members of the network to bill the usage of their resources to the other members, if so desired. The network is expanded through a process similar to that seen in social networks, making it possible to use the resources of friends and friends of friends. Parts of the proposed system have been implemented as a prototype inside the POP-Java programming language.

  • [PDF] B. Wolf, P. Kuonen, and T. Dandekar, "Multilevel parallelism in sequence alignment using a streaming approach," Nesus 2015 workshop, 2015.
    [Bibtex] [Abstract]
    @article{Wolf:2172,
    Abstract = {Ultrascale computing and bioinformatics are two rapidly growing fields with a big impact right now and even more so in the future. The introduction of next generation sequencing pushes current bioinformatics tools and workflows to their limits in terms of performance. This forces the tools to become increasingly performant to keep up with the growing speed at which sequencing data is created. Ultrascale computing can greatly benefit bioinformatics in the challenges it faces today, especially in terms of scalability, data management and reliability. But before this is possible, the algorithms and software used in the field of bioinformatics need to be prepared to be used in a heterogeneous distributed environment. For this paper we choose to look at sequence alignment, which has been an active topic of research to speed up next generation sequence analysis, as it is ideally suited for parallel processing. We present a multilevel stream based parallel architecture to transparently distribute sequence alignment over multiple cores of the same machine, multiple machines and cloud resources. The same concepts are used to achieve multithreaded and distributed parallelism, making the architecture simple to extend and adapt to new situations. A prototype of the architecture has been implemented using an existing commercial sequence aligner. We demonstrate the flexibility of the implementation by running it on different configurations, combining local and cloud computing resources.},
    Author = {Beat Wolf and Pierre Kuonen and Thomas Dandekar},
    Journal = {Nesus 2015 workshop},
    Keywords = {Genetics},
    Month = {sep},
    Pdf = {http://e-archivo.uc3m.es/bitstream/handle/10016/22004/multilevel_NESUS_2015.pdf},
    Title = {Multilevel parallelism in sequence alignment using a streaming approach},
    Url = {http://e-archivo.uc3m.es/bitstream/handle/10016/22004/multilevel_NESUS_2015.pdf},
    Year = {2015}}

    Ultrascale computing and bioinformatics are two rapidly growing fields with a big impact right now and even more so in the future. The introduction of next generation sequencing pushes current bioinformatics tools and workflows to their limits in terms of performance. This forces the tools to become increasingly performant to keep up with the growing speed at which sequencing data is created. Ultrascale computing can greatly benefit bioinformatics in the challenges it faces today, especially in terms of scalability, data management and reliability. But before this is possible, the algorithms and software used in the field of bioinformatics need to be prepared to be used in a heterogeneous distributed environment. For this paper we choose to look at sequence alignment, which has been an active topic of research to speed up next generation sequence analysis, as it is ideally suited for parallel processing. We present a multilevel stream based parallel architecture to transparently distribute sequence alignment over multiple cores of the same machine, multiple machines and cloud resources. The same concepts are used to achieve multithreaded and distributed parallelism, making the architecture simple to extend and adapt to new situations. A prototype of the architecture has been implemented using an existing commercial sequence aligner. We demonstrate the flexibility of the implementation by running it on different configurations, combining local and cloud computing resources.
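
    The multilevel streaming approach described above, a producer yielding chunks of reads while a pool of workers aligns them in parallel, can be sketched as follows. This is a hedged illustration under stated assumptions: align_chunk is a stand-in for dispatching work to an external aligner, a remote node or a cloud resource, and the chunk size and worker count are arbitrary.

    from concurrent.futures import ThreadPoolExecutor

    CHUNK = 4          # reads per work unit (real systems use thousands)

    def read_stream(reads, chunk=CHUNK):
        """Yield successive chunks of reads (the streaming producer)."""
        for i in range(0, len(reads), chunk):
            yield reads[i:i + chunk]

    def align_chunk(chunk):
        """Stand-in for aligning a chunk on a local core or remote node."""
        return [(read, hash(read) % 1000) for read in chunk]   # fake positions

    reads = [f"read{i}" for i in range(10)]
    # The same consumer logic works whether workers are threads, machines
    # in the same network, or cloud instances; only align_chunk changes.
    with ThreadPoolExecutor(max_workers=3) as pool:
        for aligned in pool.map(align_chunk, read_stream(reads)):
            print(aligned)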

  • [PDF] B. Wolf, P. Kuonen, T. Dandekar, and D. Atlan, "DNAseq Workflow in a Diagnostic Context and an Example of a User Friendly Implementation," BioMed Research International, vol. 2015, p. 11, 2015.
    [Bibtex] [Abstract]
    @article{Wolf:2171,
    Abstract = {Over recent years next generation sequencing (NGS) technologies evolved from costly tools used by very few, to a much more accessible and economically viable technology. Through this recently gained popularity, its use-cases expanded from research environments into clinical settings. But the technical know-how and infrastructure required to analyze the data remain an obstacle for a wider adoption of this technology, especially in smaller laboratories. We present GensearchNGS, a commercial DNAseq software suite distributed by Phenosystems SA. The focus of GensearchNGS is the optimal usage of already existing infrastructure, while keeping its use simple. This is achieved through the integration of existing tools in a comprehensive software environment, as well as custom algorithms developed with the restrictions of limited infrastructures in mind. This includes the possibility to connect multiple computers to speed up computing intensive parts of the analysis such as sequence alignments. We present a typical DNAseq workflow for NGS data analysis and the approach GensearchNGS takes to implement it. The presented workflow goes from raw data quality control to the final variant report. This includes features such as gene panels and the integration of online databases, like Ensembl for annotations or Cafe Variome for variant sharing.},
    Author = {Beat Wolf and Pierre Kuonen and Thomas Dandekar and David Atlan},
    Journal = {BioMed Research International},
    Keywords = {Genetics},
    Month = {may},
    Pages = {11},
    Pdf = {http://www.hindawi.com/journals/bmri/2015/403497/},
    Title = {DNAseq Workflow in a Diagnostic Context and an Example of a User Friendly Implementation},
    Volume = {2015},
    Year = {2015}}

    Over recent years, next generation sequencing (NGS) technologies have evolved from costly tools used by very few to a much more accessible and economically viable technology. Through this recently gained popularity, their use cases have expanded from research environments into clinical settings. But the technical know-how and infrastructure required to analyze the data remain an obstacle for a wider adoption of this technology, especially in smaller laboratories. We present GensearchNGS, a commercial DNAseq software suite distributed by Phenosystems SA. The focus of GensearchNGS is the optimal usage of already existing infrastructure, while keeping its use simple. This is achieved through the integration of existing tools in a comprehensive software environment, as well as custom algorithms developed with the restrictions of limited infrastructures in mind. This includes the possibility to connect multiple computers to speed up computing-intensive parts of the analysis, such as sequence alignments. We present a typical DNAseq workflow for NGS data analysis and the approach GensearchNGS takes to implement it. The presented workflow goes from raw data quality control to the final variant report. This includes features such as gene panels and the integration of online databases, like Ensembl for annotations or Cafe Variome for variant sharing.
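
    The workflow stages named above (quality control, alignment, variant calling, report) can be modelled as composable steps. The sketch below uses placeholder stage functions under simple assumptions; they are not GensearchNGS components.

    def quality_control(reads):
        """Drop reads that are too short (placeholder QC rule)."""
        return [r for r in reads if len(r) >= 4]

    def align(reads):
        """Stand-in aligner: pretend each read maps at a fixed offset."""
        return [(r, i * 10) for i, r in enumerate(reads)]

    def call_variants(alignments):
        """Stand-in caller: emit the positions of non-zero mappings."""
        return [pos for _, pos in alignments if pos > 0]

    def report(variants):
        return f"{len(variants)} candidate variants"

    def pipeline(data, steps=(quality_control, align, call_variants, report)):
        # Each stage consumes the previous stage's output, mirroring the
        # workflow from raw data quality control to the final report.
        for step in steps:
            data = step(data)
        return data

    print(pipeline(["ACGT", "AC", "TTGCA", "GGGTAC"]))   # 2 candidate variants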

  • [PDF] B. Wolf, P. Kuonen, T. Dandekar, and D. Atlan, "GensearchNGS: Interactive variant analysis," 13th International Symposium on Mutation in the Genome: detection, genome sequencing & interpretation, 2015.
    [Bibtex] [Abstract]
    @article{Wolf:2090,
    Abstract = {NGS data analysis is increasingly popular in the diagnostics field thanks to advances in sequencing technologies which improved the speed, quantity and quality of the produced data. Due to those improvements, the analysis of the data requires an increasing amount of technical knowledge and processing power. Several software tools exist to handle these technical challenges involved in NGS data analysis. We present the latest improvements in one of those software-tools, GensearchNGS 1.6, a NGS data analysis software allowing users to go from raw NGS data to variant reports. We focus on the improvements made in terms of variant calling, annotation and filtering. The variant calling algorithm has been completely rewritten, based on the variant calling model used in Varscan 2, greatly improving its speed (over 10 times faster than Varscan 2, over 5 times faster than GATK) and accuracy, while reducing memory requirements. For the subsequent annotation of the called variants, various new datasources have been integrated, such as Human Phenotype Ontology and the clinical predictions from Ensembl, which give the user more information about the clinical relevance of the called variants. An initial prototype of the integration of interactome data from different sources, such as CCSB or BioGRID, is also presented, further increasing the available information for variant effect prediction. The addition of annotation data has been accompanied by various optimizations, keeping memory requirements and analysis times stable. The interactive variant filtering, which updates a variant list presented to the user while he changes the filters, has been further optimized, making it possible to filter variants interactively even on computers with limited processing power and memory. Similar improvements have also been made to the visualizer, allowing for a faster visualization requiring fewer resources, while integrating more data, such as the previously mentioned databases.},
    Author = {Beat Wolf and Pierre Kuonen and Thomas Dandekar and David Atlan},
    Journal = {13th International Symposium on Mutation in the Genome: detection, genome sequencing & interpretation},
    Keywords = {Genetic diagnostics},
    Month = {apr},
    Title = {GensearchNGS: Interactive variant analysis},
    Year = {2015}}

    NGS data analysis is increasingly popular in the diagnostics field thanks to advances in sequencing technologies, which have improved the speed, quantity and quality of the produced data. Due to those improvements, the analysis of the data requires an increasing amount of technical knowledge and processing power. Several software tools exist to handle the technical challenges involved in NGS data analysis. We present the latest improvements in one of those software tools, GensearchNGS 1.6, an NGS data analysis software suite allowing users to go from raw NGS data to variant reports. We focus on the improvements made in terms of variant calling, annotation and filtering. The variant calling algorithm has been completely rewritten, based on the variant calling model used in Varscan 2, greatly improving its speed (over 10 times faster than Varscan 2, over 5 times faster than GATK) and accuracy, while reducing memory requirements. For the subsequent annotation of the called variants, various new data sources have been integrated, such as the Human Phenotype Ontology and the clinical predictions from Ensembl, which give the user more information about the clinical relevance of the called variants. An initial prototype of the integration of interactome data from different sources, such as CCSB or BioGRID, is also presented, further increasing the available information for variant effect prediction. The addition of annotation data has been accompanied by various optimizations, keeping memory requirements and analysis times stable. The interactive variant filtering, which updates the variant list presented to the user as the filters are changed, has been further optimized, making it possible to filter variants interactively even on computers with limited processing power and memory. Similar improvements have also been made to the visualizer, allowing for a faster visualization requiring fewer resources, while integrating more data, such as the previously mentioned databases.

  • [PDF] B. Wolf, P. Kuonen, T. Dandekar, and D. Atlan, "Speeding up NGS analysis through local and remote computing resources," The EUROPEAN HUMAN GENETICS CONFERENCE, 2015.
    [Bibtex] [Abstract]
    @article{eshg2015,
    Abstract = {The explosion of NGS data which requires increasingly fast computers to keep up with the analysis pushed smaller laboratories to their limits. We previously presented a way for GensearchNGS users to distribute sequence alignment over multiple computers. This possibility has now been expanded to combine multiple computers in the same network with cloud computing resources. For our prototype, the user can by request add Amazon AWS EC2 cloud instances to the alignment process. The cloud resources are dynamically created and destroyed on demand, transparently to the user. It is also possible to combine local alignment on the computer starting the alignment, distributed alignment on multiple computers in the same network and cloud computing in any possible configuration. Even completely offloading the alignment is now possible. This flexibility allows especially smaller laboratories to adapt the software configuration to their needs.},
    Author = {Beat Wolf and Pierre Kuonen and Thomas Dandekar and David Atlan},
    Journal = {The EUROPEAN HUMAN GENETICS CONFERENCE},
    Keywords = {Genetic diagnostics},
    Title = {Speeding up NGS analysis through local and remote computing resources},
    Year = {2015}}

    The explosion of NGS data, which requires increasingly fast computers to keep up with the analysis, has pushed smaller laboratories to their limits. We previously presented a way for GensearchNGS users to distribute sequence alignment over multiple computers. This possibility has now been expanded to combine multiple computers in the same network with cloud computing resources. With our prototype, the user can add Amazon AWS EC2 cloud instances to the alignment process on request. The cloud resources are dynamically created and destroyed on demand, transparently to the user. It is also possible to combine local alignment on the computer starting the alignment, distributed alignment on multiple computers in the same network, and cloud computing in any possible configuration. Even completely offloading the alignment is now possible. This flexibility allows especially smaller laboratories to adapt the software configuration to their needs.

  • [PDF] B. Wolf, P. Kuonen, and T. Dandekar, "GNATY: A tools library for faster variant calling and coverage analysis," German Conference on Bioinformatics, 2015.
    [Bibtex] [Abstract]
    @article{Wolf:2174,
    Abstract = {Following the speed increases in next generation sequencing over recent years, the proportion of time spent in sequence analysis compared to sequencing has increasingly shifted towards sequence analysis. While certain analysis steps such as sequence alignment were able to benefit from various speed increases, others, equally important steps like variant calling or coverage analysis, did not receive the same improvements. Analysing NGS data remains a complicated and time consuming process, requiring a substantial amount of computing power. Most current approaches to address the increasing data quantity rely on the usage of more powerful hardware or offload calculations to the cloud. In this poster we show that by using modern software development techniques such as stream processing, those additional analysis steps can be sped up without changing the analysis results. Developing more efficient implementations of existing algorithms makes it possible to process larger datasets on existing infrastructure, without changing the analysis results. This not only reduces the overall cost of data analysis, but also gives researches more flexibility when exploring different settings for the data analysis. We present the application GNATY, a stand-alone version of NGS data analysis tools used in GensearchNGS [WKDD15] developed by Phenosystems SA. The goal during the development of the GNATY tools was not to create new methods with different results to existing approaches, but explore the possibilities of improving the efficiency of existing approaches. A modular architecture has been developed to create efficient sequence alignment analysis tools, using stream processing techniques which allow for multithreading and reusable data analysis blocks. The modular architecture uses a stream processing based workflow, efficiently splitting data access and data processing analysis steps, resulting in a more efficient use of the available computing resources. The architecture has been verified by implementing a variant caller based on the Varscan 2 [KZL+12] variant calling model, achieving a speedup of nearly 18 times. The results of the variant calling in GNATY are identical to Varscan 2, avoiding the issue of adding yet another variant calling model to the existing ones. To further demonstrate the flexibility and efficiency of the approach, the algorithm is also applied to coverage analysis. Compared to BEDtools 2 [QH10], GNATY was twice as fast to perform coverage analysis, while producing the exact same results. Through the example of 2 existing next generation sequencing data analysis algorithms which are reimplemented with an efficient stream based modular architecture, we show the performance potential in existing data analysis tools. We hope that our work will lead to more efficient algorithms in bioinformatics in general, lessening the hardware requirements to cope with the ever increasing amounts of data to be analysed. The developed GNATY software is freely available for non-commercial usage at http://gnaty.phenosystems.com/.},
    Author = {Beat Wolf and Pierre Kuonen and Thomas Dandekar},
    Journal = {German Conference on Bioinformatics},
    Keywords = {Genetics},
    Month = {sep},
    Pdf = {https://peerj.com/preprints/1350.pdf#page=16},
    Title = {GNATY: A tools library for faster variant calling and coverage analysis},
    Year = {2015}}

    Following the speed increases in next generation sequencing over recent years, the proportion of time spent in sequence analysis compared to sequencing has increasingly shifted towards sequence analysis. While certain analysis steps such as sequence alignment were able to benefit from various speed increases, other, equally important steps like variant calling or coverage analysis did not receive the same improvements. Analysing NGS data remains a complicated and time-consuming process, requiring a substantial amount of computing power. Most current approaches to address the increasing data quantity rely on the usage of more powerful hardware or offload calculations to the cloud. In this poster we show that by using modern software development techniques such as stream processing, those additional analysis steps can be sped up without changing the analysis results. Developing more efficient implementations of existing algorithms makes it possible to process larger datasets on existing infrastructure, without changing the analysis results. This not only reduces the overall cost of data analysis, but also gives researchers more flexibility when exploring different settings for the data analysis. We present the application GNATY, a stand-alone version of the NGS data analysis tools used in GensearchNGS [WKDD15], developed by Phenosystems SA. The goal during the development of the GNATY tools was not to create new methods with results differing from existing approaches, but to explore the possibilities of improving the efficiency of existing approaches. A modular architecture has been developed to create efficient sequence alignment analysis tools, using stream processing techniques which allow for multithreading and reusable data analysis blocks. The modular architecture uses a stream-processing-based workflow, efficiently splitting data access and data processing steps, resulting in a more efficient use of the available computing resources. The architecture has been verified by implementing a variant caller based on the Varscan 2 [KZL+12] variant calling model, achieving a speedup of nearly 18 times. The results of the variant calling in GNATY are identical to Varscan 2, avoiding the issue of adding yet another variant calling model to the existing ones. To further demonstrate the flexibility and efficiency of the approach, the algorithm is also applied to coverage analysis. Compared to BEDtools 2 [QH10], GNATY was twice as fast to perform coverage analysis, while producing the exact same results. Through the example of two existing next generation sequencing data analysis algorithms reimplemented with an efficient stream-based modular architecture, we show the performance potential in existing data analysis tools. We hope that our work will lead to more efficient algorithms in bioinformatics in general, lessening the hardware requirements to cope with the ever increasing amounts of data to be analysed. The developed GNATY software is freely available for non-commercial usage at http://gnaty.phenosystems.com/.
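
    As an illustration of the stream-processing style described above, here is a minimal sweep-based coverage counter over sorted alignment intervals. It is a generic textbook construction under simple assumptions (half-open (start, end) intervals on one reference sequence), not GNATY's implementation.

    from heapq import heappush, heappop

    def coverage(intervals):
        """Stream sorted (start, end) alignments and emit depth at change points."""
        ends = []                      # min-heap of currently open interval ends
        for start, end in sorted(intervals):
            # Close every alignment that ends before this one starts.
            while ends and ends[0] <= start:
                pos = heappop(ends)
                yield pos, len(ends)   # depth after an alignment closes
            heappush(ends, end)
            yield start, len(ends)     # depth after an alignment opens
        while ends:
            pos = heappop(ends)
            yield pos, len(ends)

    reads = [(0, 5), (2, 7), (4, 9)]
    for pos, depth in coverage(reads):
        print(f"pos {pos}: depth {depth}")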

  • [PDF] [DOI] O. Zayene, J. Hennebert, M. S. Touj, R. Ingold, and E. B. N. Amara, "A dataset for Arabic text detection, tracking and recognition in news videos- AcTiV," in 2015 13th International Conference on Document Analysis and Recognition (ICDAR), 2015, pp. 996-1000.
    [Bibtex] [Abstract]
    @INPROCEEDINGS{zayene2015:icdar,
    author={O. Zayene and J. Hennebert and S. Masmoudi Touj and R. Ingold and N. Essoukri Ben Amara},
    booktitle={2015 13th International Conference on Document Analysis and Recognition (ICDAR)},
    title={A dataset for Arabic text detection, tracking and recognition in news videos- AcTiV},
    year={2015},
    pages={996-1000},
    abstract={Recently, promising results have been reported on video text detection and recognition. Most of the proposed methods are tested on private datasets with non-uniform evaluation metrics. We report here on the development of a publicly accessible annotated video dataset designed to assess the performance of different artificial Arabic text detection, tracking and recognition systems. The dataset includes 80 videos (more than 850,000 frames) collected from 4 different Arabic news channels. An attempt was made to ensure maximum diversities of the textual content in terms of size, position and background. This data is accompanied by detailed annotations for each textbox. We also present a region-based text detection approach in addition to a set of evaluation protocols on which the performance of different systems can be measured.},
    keywords={natural language processing;optical character recognition;text detection;video signal processing;AcTiV;Arabic news channels;artificial Arabic text detection system;artificial Arabic text recognition systems;artificial Arabic text tracking system;non-uniform evaluation metrics;private datasets;publicly accessible annotated video dataset;region-based text detection approach;textual content;video text detection;video text recognition;Ferroelectric films;High definition video;Manganese;Nonvolatile memory;Protocols;Random access memory;Arabic text;Benchmark;Video OCR;Video database},
    doi={10.1109/ICDAR.2015.7333911},
    month={Aug},
    pdf={http://www.hennebert.org/download/publications/icdar-2015-a-dataset-for-arabic-text-detection-tracking-and-recognition-in-news-videos-activ.pdf},}

    Recently, promising results have been reported on video text detection and recognition. Most of the proposed methods are tested on private datasets with non-uniform evaluation metrics. We report here on the development of a publicly accessible annotated video dataset designed to assess the performance of different artificial Arabic text detection, tracking and recognition systems. The dataset includes 80 videos (more than 850,000 frames) collected from 4 different Arabic news channels. An attempt was made to ensure maximum diversity of the textual content in terms of size, position and background. This data is accompanied by detailed annotations for each textbox. We also present a region-based text detection approach, in addition to a set of evaluation protocols on which the performance of different systems can be measured.

  • [PDF] [DOI] D. Zufferey, T. Hofer, J. Hennebert, M. Schumacher, R. Ingold, and S. Bromuri, "Performance comparison of multi-label learning algorithms on clinical data for chronic diseases," Computers in Biology and Medicine, vol. 65, pp. 34-43, 2015.
    [Bibtex] [Abstract]
    @article{Zufferey201534,
    Abstract = {We are motivated by the issue of classifying diseases of chronically ill patients to assist physicians in their everyday work. Our goal is to provide a performance comparison of state-of-the-art multi-label learning algorithms for the analysis of multivariate sequential clinical data from medical records of patients affected by chronic diseases. As a matter of fact, the multi-label learning approach appears to be a good candidate for modeling overlapped medical conditions, specific to chronically ill patients. With the availability of such comparison study, the evaluation of new algorithms should be enhanced. According to the method, we choose a summary statistics approach for the processing of the sequential clinical data, so that the extracted features maintain an interpretable link to their corresponding medical records. The publicly available MIMIC-II dataset, which contains more than 19,000 patients with chronic diseases, is used in this study. For the comparison we selected the following multi-label algorithms: ML-kNN, AdaBoostMH, binary relevance, classifier chains, \{HOMER\} and RAkEL. Regarding the results, binary relevance approaches, despite their elementary design and their independence assumption concerning the chronic illnesses, perform optimally in most scenarios, in particular for the detection of relevant diseases. In addition, binary relevance approaches scale up to large dataset and are easy to learn. However, the \{RAkEL\} algorithm, despite its scalability problems when it is confronted to large dataset, performs well in the scenario which consists of the ranking of the labels according to the dominant disease of the patient. },
    Author = {Damien Zufferey and Thomas Hofer and Jean Hennebert and Michael Schumacher and Rolf Ingold and Stefano Bromuri},
    Doi = {10.1016/j.compbiomed.2015.07.017},
    Issn = {0010-4825},
    Journal = {Computers in Biology and Medicine},
    Keywords = {Multi-label learning},
    Note = {Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.},
    Pages = {34 - 43},
    Title = {{P}erformance comparison of multi-label learning algorithms on clinical data for chronic diseases},
    Pdf = {http://www.hennebert.org/download/publications/CBM-2015-performance-comparison-of-multi-label-learning-algorithms-on-clinical-data-for-chronic-diseases.pdf},
    Volume = {65},
    Year = {2015},
    Bdsk-Url-2 = {http://dx.doi.org/10.1016/j.compbiomed.2015.07.017}}

    We are motivated by the issue of classifying diseases of chronically ill patients to assist physicians in their everyday work. Our goal is to provide a performance comparison of state-of-the-art multi-label learning algorithms for the analysis of multivariate sequential clinical data from medical records of patients affected by chronic diseases. As a matter of fact, the multi-label learning approach appears to be a good candidate for modeling the overlapped medical conditions specific to chronically ill patients. With the availability of such a comparison study, the evaluation of new algorithms should be enhanced. Regarding the method, we choose a summary statistics approach for the processing of the sequential clinical data, so that the extracted features maintain an interpretable link to their corresponding medical records. The publicly available MIMIC-II dataset, which contains more than 19,000 patients with chronic diseases, is used in this study. For the comparison we selected the following multi-label algorithms: ML-kNN, AdaBoostMH, binary relevance, classifier chains, HOMER and RAkEL. Regarding the results, binary relevance approaches, despite their elementary design and their independence assumption concerning the chronic illnesses, perform optimally in most scenarios, in particular for the detection of relevant diseases. In addition, binary relevance approaches scale up to large datasets and are easy to learn. However, the RAkEL algorithm, despite its scalability problems when confronted with large datasets, performs well in the scenario consisting of ranking the labels according to the dominant disease of the patient.
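
    The binary relevance baseline from this comparison (one independent binary classifier per disease label) is straightforward to reproduce with scikit-learn's OneVsRestClassifier, which implements the per-label decomposition. The synthetic data below is an illustrative stand-in for the paper's MIMIC-II summary-statistics features.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))                 # per-patient summary statistics
    Y = (rng.random((200, 3)) < 0.3).astype(int)   # 3 chronic-disease labels (0/1)

    # One independent logistic regression per label; labels do not interact,
    # mirroring the independence assumption of binary relevance.
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    clf.fit(X, Y)
    print(clf.predict(X[:5]))                      # one 0/1 column per label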

  • [PDF] [DOI] G. Bovet, G. Briard, and J. Hennebert, "A Scalable Cloud Storage for Sensor Networks," in 4th Int. Conf. on Internet of Things. IoT 2014 MIT, MA, USA. Web of Things Workshop, 2014.
    [Bibtex] [Abstract]
    @conference{bovet2014:wot2,
    author = "G{\'e}r{\^o}me Bovet and Gautier Briard and Jean Hennebert",
    abstract = "Data storage has become a major topic in sensor networks as large quantities of data need to be archived for future processing. In this paper, we present a cloud storage solution benefiting from the available memory on smart things becoming data nodes. In-network storage reduces the heavy traffic resulting of the transmission of all the data to an outside central sink. The system built on agents allows an autonomous management of the cloud and therefore requires no human in the loop. It also makes an intensive use of Web technologies to follow the clear trend of sensors adopting the Web-of-Things paradigm. Further, we make a performance evaluation demonstrating its suitability in building management systems.",
    booktitle = "4th Int. Conf. on Internet of Things. IoT 2014 MIT, MA, USA. Web of Things Workshop",
    doi = "10.1145/2684432.2684437",
    isbn = "978-1-4503-3066-4",
    keywords = "Distributed databases, cloud storage, web-of-things, sensor networks, internet-of-things",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    publisher = "ACM New York",
    series = "International Conference on the Internet of Things - IoT 2014",
    title = "{A} {S}calable {C}loud {S}torage for {S}ensor {N}etworks",
    Pdf = "http://www.hennebert.org/download/publications/wot-2014-a-scalable-cloud-storage-for-sensor-networks.pdf",
    year = "2014",
    }

    Data storage has become a major topic in sensor networks, as large quantities of data need to be archived for future processing. In this paper, we present a cloud storage solution benefiting from the available memory on smart things acting as data nodes. In-network storage reduces the heavy traffic resulting from the transmission of all the data to an outside central sink. The system, built on agents, allows autonomous management of the cloud and therefore requires no human in the loop. It also makes intensive use of Web technologies to follow the clear trend of sensors adopting the Web-of-Things paradigm. Further, we present a performance evaluation demonstrating its suitability in building management systems.

  • [PDF] [DOI] G. Bovet and J. Hennebert, "Will web technologies impact on building automation systems architecture?," in International Workshop on Enabling ICT for Smart Buildings (ICT-SB 2014), 2014, pp. 985-990.
    [Bibtex] [Abstract]
    @conference{bovet2014:ant,
    Abstract = {Offices, factories and even private housings are more and more endowed with building management systems (BMS) targeting an increase of comfort as well as lowering energy costs. This expansion is made possible by the progress realized in pervasive computing, providing small sized and affordable sensing devices. However, current BMS are often based on proprietary technologies, making their interoperability and evolution more difficult. For example, we observe the emergence of new applications based on intelligent data analysis able to compute more complex models about the use of the building. Such applications rely on heterogeneous sets of sensors, web data, user feedback and self-learning algorithms. In this position paper, we discuss the role of Web technologies for standardizing the application layer, and thus providing a framework for developing advanced building applications. We present our vision of TASSo, a layered Web model facing actual and future challenges for building management systems.},
    Author = {G{\'e}r{\^o}me Bovet and Jean Hennebert},
    Booktitle = {International Workshop on Enabling ICT for Smart Buildings (ICT-SB 2014)},
    Doi = {10.1016/j.procs.2014.05.522},
    Issn = {1877-0509},
    Keywords = {Building Management System, Internet-of-Things, Web-of-Things, Architecture},
    Note = {Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.},
    Pages = {985-990},
    Series = {Procedia Computer Science},
    Title = {{W}ill web technologies impact on building automation systems architecture?},
    Pdf = {http://www.hennebert.org/download/publications/ant-procedia-2013-energy-efficient-optimization-layer-for-event-based-communications-on-wi-fi-things.pdf},
    Volume = {32},
    Year = {2014},
    Bdsk-Url-2 = {http://dx.doi.org/10.1016/j.procs.2014.05.522}}

    Offices, factories and even private housings are more and more endowed with building management systems (BMS) targeting an increase of comfort as well as lowering energy costs. This expansion is made possible by the progress realized in pervasive computing, providing small sized and affordable sensing devices. However, current BMS are often based on proprietary technologies, making their interoperability and evolution more difficult. For example, we observe the emergence of new applications based on intelligent data analysis able to compute more complex models about the use of the building. Such applications rely on heterogeneous sets of sensors, web data, user feedback and self-learning algorithms. In this position paper, we discuss the role of Web technologies for standardizing the application layer, and thus providing a framework for developing advanced building applications. We present our vision of TASSo, a layered Web model facing actual and future challenges for building management systems.

  • [PDF] [DOI] G. Bovet and J. Hennebert, "Distributed Semantic Discovery for Web-of-Things Enabled Smart Buildings," in First International Workshop on Architectures and Technologies for Smart Cities, Dubai, United Arab Emirates, 2014, pp. 1-5.
    [Bibtex] [Abstract]
    @conference{bovet:ntms:2014,
    Abstract = {Nowadays, our surrounding environment is more and more scattered with various types of sensors. Due to their intrinsic properties and representation formats, they form small islands isolated from each other. In order to increase interoperability and release their full capabilities, we propose to represent device descriptions, including data and service invocation, with a common model allowing the composition of mashups of heterogeneous sensors. Pushing this paradigm further, we also propose to augment service descriptions with a discovery protocol easing the automatic assimilation of knowledge. In this work, we describe the architecture supporting what can be called a Semantic Sensor Web-of-Things. As proof of concept, we apply our proposal to the domain of smart buildings, composing a novel ontology covering heterogeneous sensing, actuation and service invocation. Our architecture also emphasizes the energy aspect and is optimized for constrained environments.},
    Address = {Dubai, United Arab Emirates},
    Author = {G{\'e}r{\^o}me Bovet and Jean Hennebert},
    Booktitle = {First International Workshop on Architectures and Technologies for Smart Cities},
    Doi = {10.1109/NTMS.2014.6814015},
    Editor = {Mohamad Badra; Omar Alfandi},
    Isbn = {9781479932245},
    Keywords = {Smart buildings, Discovery, Semantics, Ontologies},
    Month = {Mar},
    Pages = {1-5},
    Publisher = {IEEE},
    Title = {{D}istributed {S}emantic {D}iscovery for {W}eb-of-{T}hings {E}nabled {S}mart {B}uildings},
    Pdf = {http://www.hennebert.org/download/publications/ntms-2014-distributed-semantic-discovery-for-web-of-things-enabled-smart-buildings.pdf},
    Year = {2014},
    Bdsk-Url-2 = {http://dx.doi.org/10.1109/NTMS.2014.6814015}}

    Nowadays, our surrounding environment is more and more scattered with various types of sensors. Due to their intrinsic properties and representation formats, they form small islands isolated from each other. In order to increase interoperability and release their full capabilities, we propose to represent device descriptions, including data and service invocation, with a common model allowing the composition of mashups of heterogeneous sensors. Pushing this paradigm further, we also propose to augment service descriptions with a discovery protocol easing the automatic assimilation of knowledge. In this work, we describe the architecture supporting what can be called a Semantic Sensor Web-of-Things. As proof of concept, we apply our proposal to the domain of smart buildings, composing a novel ontology covering heterogeneous sensing, actuation and service invocation. Our architecture also emphasizes the energy aspect and is optimized for constrained environments.

  • [PDF] [DOI] G. Bovet, A. Ridi, and J. Hennebert, "Virtual Things for Machine Learning Applications," in Fifth International Workshop on the Web of Things - WoT 2014, 2014.
    [Bibtex] [Abstract]
    @conference{bovet2014:wot,
    Abstract = {Internet-of-Things (IoT) devices, especially sensors, are producing large quantities of data that can be used for gathering knowledge. In this field, machine learning technologies are increasingly used to build versatile data-driven models. In this paper, we present a novel architecture able to execute machine learning algorithms within the sensor network, presenting advantages in terms of privacy and data transfer efficiency. We first argue that some classes of machine learning algorithms are compatible with this approach, namely those based on the use of generative models that allow a distribution of the computation on a set of nodes. We then detail our architecture proposal, leveraging the use of Web-of-Things technologies to ease integration into networks. The convergence of machine learning generative models and Web-of-Things paradigms leads us to the concept of virtual things exposing higher-level knowledge by exploiting sensor data in the network. Finally, we demonstrate with a real scenario the feasibility and performance of our proposal.},
    Author = {G{\'e}r{\^o}me Bovet and Antonio Ridi and Jean Hennebert},
    Booktitle = {Fifth International Workshop on the Web of Things - WoT 2014},
    Doi = {10.1145/2684432.2684434},
    Isbn = {978-1-4503-3066-4},
    Journal = {Fifth International Workshop on the Web of Things (WoT 2014)},
    Keywords = {Machine learning, Sensor network, Web-of-Things, Internet-of-Things},
    Month = {oct},
    Note = {Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.},
    Series = {International Conference on the Internet of Things - IoT 2014},
    Title = {{V}irtual {T}hings for {M}achine {L}earning {A}pplications},
    Pdf = {http://www.hennebert.org/download/publications/wot-2014-virtual-things-for-machine-learning-applications.pdf},
    Year = {2014},
    Bdsk-Url-2 = {http://dx.doi.org/10.1145/2684432.2684434}}

    Internet-of-Things (IoT) devices, especially sensors, are producing large quantities of data that can be used for gathering knowledge. In this field, machine learning technologies are increasingly used to build versatile data-driven models. In this paper, we present a novel architecture able to execute machine learning algorithms within the sensor network, presenting advantages in terms of privacy and data transfer efficiency. We first argue that some classes of machine learning algorithms are compatible with this approach, namely those based on the use of generative models that allow a distribution of the computation on a set of nodes. We then detail our architecture proposal, leveraging the use of Web-of-Things technologies to ease integration into networks. The convergence of machine learning generative models and Web-of-Things paradigms leads us to the concept of virtual things exposing higher-level knowledge by exploiting sensor data in the network. Finally, we demonstrate with a real scenario the feasibility and performance of our proposal.
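
    The merging step can be pictured with a short sketch: assuming each node reports class-conditional log-likelihoods for the same observation window (all values below are invented), independence between nodes lets the scores simply add up.

```python
import numpy as np

# Hypothetical log-likelihoods reported by three sensor nodes for one
# observation window; one row per node, one column per appliance category.
node_loglik = np.array([
    [-12.3,  -9.8, -15.1],   # node A
    [-11.7, -10.2, -14.4],   # node B
    [-13.0,  -9.5, -16.0],   # node C
])
categories = ["kettle", "fridge", "laptop"]

# Under a node-independence assumption, log-likelihoods sum across nodes.
fused = node_loglik.sum(axis=0)
print(categories[int(np.argmax(fused))])   # -> fridge
```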

  • [PDF] G. Bovet, A. Ridi, and J. Hennebert, "Appliance Recognition on Internet-of-Things Devices," in 4th Int. Conf. on Internet of Things. IoT 2014 MIT, MA, USA., 2014.
    [Bibtex] [Abstract]
    @conference{bovet2014:iotdemo,
    author = "G{\'e}r{\^o}me Bovet and Antonio Ridi and Jean Hennebert",
    abstract = "Machine Learning (ML) approaches are increasingly used to model data coming from sensor networks. Typical ML implementations are cpu intensive and are often running server-side. However, IoT devices provide increasing cpu capabilities and some classes of ML algorithms are compatible with distribution and downward scalability. In this demonstration we explore the possibility of distributing ML tasks to IoT devices in the sensor network. We demonstrate a concrete scenario of appliance recognition where a smart plug provides electrical measures that are distributed to WiFi nodes running the ML algorithms. Each node estimates class-conditional probabilities that are then merged for recognizing the appliance category. Finally, our architectures relies on Web technologies for complying with Web-of-Things paradigms.
    ",
    booktitle = "4th Int. Conf. on Internet of Things. IoT 2014 MIT, MA, USA.",
    keywords = "Internet-of-Things, Machine Learning, Appliance Recognition, NILM, Non Intrusive Load Monitoring, HMM, Hidden Markov Models",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    title = "{A}ppliance {R}ecognition on {I}nternet-of-{T}hings {D}evices",
    Pdf = "http://www.hennebert.org/download/publications/iot-2014-appliance-recognition-on-internet-of-things-devices-demo-session.pdf",
    year = "2014",
    }

    Machine Learning (ML) approaches are increasingly used to model data coming from sensor networks. Typical ML implementations are CPU-intensive and often run server-side. However, IoT devices provide increasing CPU capabilities and some classes of ML algorithms are compatible with distribution and downward scalability. In this demonstration we explore the possibility of distributing ML tasks to IoT devices in the sensor network. We demonstrate a concrete scenario of appliance recognition where a smart plug provides electrical measures that are distributed to WiFi nodes running the ML algorithms. Each node estimates class-conditional probabilities that are then merged for recognizing the appliance category. Finally, our architecture relies on Web technologies for complying with Web-of-Things paradigms.
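
    A hedged sketch of the node-side computation, using scikit-learn Gaussian mixtures as stand-ins for the per-category models each WiFi node would hold; the training data, the single feature (real power) and the category names are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Each node would hold one pre-trained model per appliance category.
gmm_kettle = GaussianMixture(2, random_state=0).fit(rng.normal(80, 5, (200, 1)))
gmm_fridge = GaussianMixture(2, random_state=0).fit(rng.normal(30, 3, (200, 1)))

# The node scores a window of smart-plug measurements and reports
# per-category log-likelihoods, which are then merged across nodes.
window = rng.normal(31, 3, (20, 1))
scores = {"kettle": gmm_kettle.score(window), "fridge": gmm_fridge.score(window)}
print(max(scores, key=scores.get))   # -> fridge
```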

  • [PDF] [DOI] G. Bovet and J. Hennebert, "A Distributed Web-based Naming System for Smart Buildings," in Third IEEE workshop on the IoT: Smart Objects and Services, Sydney, Australia, 2014, pp. 1-6.
    [Bibtex] [Abstract]
    @conference{bovet:hal-01022861,
    author = "G{\'e}r{\^o}me Bovet and Jean Hennebert",
    abstract = "Nowadays, pervasive application scenarios relying on sensor networks are gaining momentum. The field of smart buildings is a promising playground where the use of sensors allows a reduction of the overall energy consumption. Most of current applications are using the classical DNS which is not suited for the Internet-of-Things because of requiring humans to get it working. From another perspective, Web technologies are pushing in sensor networks following the Web-of-Things paradigm advocating to use RESTful APIs for manipulating resources representing device capabilities. Being aware of these two observations, we propose to build on top of Web technologies leading to a novel naming system that is entirely autonomous. In this work, we describe the architecture supporting what can be called an autonomous Web-oriented naming system. As proof of concept, we simulate a rather large building and compare the behaviour of our approach to the legacy DNS and Multicast DNS (mDNS).",
    address = "Sydney, Australie",
    booktitle = "Third IEEE workshop on the IoT: Smart Objects and Services",
    doi = "10.1109/WoWMoM.2014.6918930",
    isbn = "9781479947850",
    keywords = "iot, wot, smart building, web of things, internet of things",
    month = "Jun",
    pages = "1-6",
    title = "{A} {D}istributed {W}eb-based {N}aming {S}ystem for {S}mart {B}uildings",
    Pdf = "http://hennebert.org/download/publications/iotsos-2014-a-distributed-web-based-naming-system-for-smart-building.pdf",
    year = "2014",
    }

    Nowadays, pervasive application scenarios relying on sensor networks are gaining momentum. The field of smart buildings is a promising playground where the use of sensors allows a reduction of the overall energy consumption. Most current applications use the classical DNS, which is not suited to the Internet-of-Things because it requires human intervention to work. From another perspective, Web technologies are pushing into sensor networks following the Web-of-Things paradigm, which advocates the use of RESTful APIs for manipulating resources representing device capabilities. Being aware of these two observations, we propose to build on top of Web technologies, leading to a novel naming system that is entirely autonomous. In this work, we describe the architecture supporting what can be called an autonomous Web-oriented naming system. As proof of concept, we simulate a rather large building and compare the behaviour of our approach to the legacy DNS and Multicast DNS (mDNS).
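
    The paper's exact naming scheme is not reproduced here; the sketch below only illustrates the general idea of composing a DNS-like name from metadata the device reports about itself, with every field name invented.

```python
def derive_name(metadata: dict, domain: str = "building.local") -> str:
    """Compose a hierarchical name with no human in the loop."""
    labels = [metadata["kind"], metadata["room"], metadata["floor"]]
    return ".".join(labels + [domain]).lower()

sensor = {"kind": "temp1", "room": "room12", "floor": "floor3"}
print(derive_name(sensor))   # temp1.room12.floor3.building.local
```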

  • [PDF] [DOI] G. Bovet, A. Ridi, and J. Hennebert, "Toward Web Enhanced Building Automation Systems," in Big Data and Internet of Things: A Roadmap for Smart Environments, N. Bessis and C. Dobre, Eds., Springer, 2014, vol. 546, pp. 259-284.
    [Bibtex] [Abstract]
    @inbook{bovet:2014:bookchap,
    Abstract = {The emerging concept of Smart Building relies on an intensive use of sensors and actuators and therefore appears, at first glance, to be a domain of predilection for the IoT. However, technology providers of building automation systems have been functioning, for a long time, with dedicated networks, communication protocols and APIs. In some cases, a mix of different technologies may even be present in a given building. IoT principles are now appearing in buildings as a way to simplify and standardise application development. Nevertheless, many issues remain due to this heterogeneity between existing installations and native IP devices, which induces complexity and maintenance efforts for building management systems. A key success factor for the IoT adoption in Smart Buildings is to provide a loosely-coupled Web protocol stack allowing interoperation between all devices present in a building. We review in this chapter different strategies that are going in this direction. More specifically, we emphasise several aspects arising from pervasive and ubiquitous computing, such as service discovery. Finally, making the assumption of seamless access to sensor data through IoT paradigms, we provide an overview of some of the most exciting enabling applications that rely on intelligent data analysis and machine learning for energy saving in buildings.},
    Author = {G{\'e}r{\^o}me Bovet and Antonio Ridi and Jean Hennebert},
    Chapter = {11},
    Doi = {10.1007/978-3-319-05029-4_11},
    Editor = {Nik Bessis and Ciprian Dobre},
    Isbn = {9783319050287},
    Keywords = {iot, wot, smart building},
    Note = {http://www.springer.com/engineering/computational+intelligence+and+complexity/book/978-3-319-05028-7},
    Pages = {259-284},
    Publisher = {Springer},
    Series = {Studies in Computational Intelligence},
    Title = {{T}oward {W}eb {E}nhanced {B}uilding {A}utomation {S}ystems - {B}ig {D}ata and {I}nternet of {T}hings: {A} {R}oadmap for {S}mart {E}nvironments},
    Pdf = {http://hennebert.org/download/publications/springer-2014_towards-web-enhanced-building-automation-systems.pdf},
    Volume = {546},
    Year = {2014},
    Bdsk-Url-2 = {http://dx.doi.org/10.1007/978-3-319-05029-4_11}}

    The emerging concept of Smart Building relies on an intensive use of sensors and actuators and therefore appears, at first glance, to be a domain of predilection for the IoT. However, technology providers of building automation systems have been functioning, for a long time, with dedicated networks, communication protocols and APIs. In some cases, a mix of different technologies may even be present in a given building. IoT principles are now appearing in buildings as a way to simplify and standardise application development. Nevertheless, many issues remain due to this heterogeneity between existing installations and native IP devices, which induces complexity and maintenance efforts for building management systems. A key success factor for the IoT adoption in Smart Buildings is to provide a loosely-coupled Web protocol stack allowing interoperation between all devices present in a building. We review in this chapter different strategies that are going in this direction. More specifically, we emphasise several aspects arising from pervasive and ubiquitous computing, such as service discovery. Finally, making the assumption of seamless access to sensor data through IoT paradigms, we provide an overview of some of the most exciting enabling applications that rely on intelligent data analysis and machine learning for energy saving in buildings.

  • [DOI] S. Bromuri, D. Zufferey, J. Hennebert, and M. Schumacher, "Multi-Label Classification of Chronically Ill Patients with Bag of Words and Supervised Dimensionality Reduction Algorithms," Journal of Biomedical Informatics, vol. 54, pp. 165-175, 2014.
    [Bibtex] [Abstract]
    @article{brom:jbi:2014,
    author = "Stefano Bromuri and Damien Zufferey and Jean Hennebert and Michael Schumacher",
    abstract = "Objective.
    This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series.
    Methods.
    We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers on two real-world datasets. The Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. The MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as Hamming loss, one error, coverage, ranking loss, and average precision.
    Results.
    Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches.
    Conclusions.
    The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density.",
    doi = "10.1016/j.jbi.2014.05.010",
    issn = "15320464",
    journal = "Journal of Biomedical Informatics",
    keywords = "Machine Learning, Multi-label classification, Complex patient, Diabetes type 2, Clinical data, Dimensionality reduction, Kernel methods",
    month = "2014/05/30",
    pages = "165.175",
    publisher = "Elsevier",
    title = "{M}ulti-{L}abel {C}lassification of {C}hronically {I}ll {P}atients with {B}ag of {W}ords and {S}upervised {D}imensionality {R}eduction {A}lgorithms",
    url = "http://www.j-biomed-inform.com/article/S1532-0464(14)00127-0/abstract",
    volume = "54",
    year = "2014",
    }

    Objective. This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. Methods. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers on two real-world datasets. The Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. The MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as Hamming loss, one error, coverage, ranking loss, and average precision. Results. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. Conclusions. The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density.
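
    A rough scikit-learn sketch of the BoW-plus-chaining idea on synthetic stand-in data: time steps are quantized into "words" with k-means, each record becomes a word histogram, and a classifier chain lets each label condition on the previous ones (the step the paper found beneficial over pure binary relevance). This is an illustration, not the paper's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

rng = np.random.default_rng(0)
series = rng.normal(size=(100, 50, 4))      # 100 records, 50 steps, 4 variables
labels = rng.integers(0, 2, size=(100, 3))  # 3 binary co-morbidity labels

# Bag-of-words: quantize every time step, then count words per record.
k = 16
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(series.reshape(-1, 4))
words = codebook.predict(series.reshape(-1, 4)).reshape(100, 50)
bow = np.stack([np.bincount(w, minlength=k) for w in words]).astype(float)

# Chained binary classifiers let each label see the previous predictions.
chain = ClassifierChain(LogisticRegression(max_iter=1000), random_state=0)
chain.fit(bow[:80], labels[:80])
print(chain.predict(bow[80:]).shape)        # (20, 3)
```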

  • [PDF] [DOI] K. Chen, H. Wei, M. Liwicki, J. Hennebert, and R. Ingold, "Robust Text Line Segmentation for Historical Manuscript Images Using Color and Texture," in 22nd International Conference on Pattern Recognition - ICPR, 2014, pp. 2978-2983.
    [Bibtex] [Abstract]
    @conference{chen2014:icpr,
    Abstract = {In this paper we present a novel text line segmentation method for historical manuscript images. We use a pyramidal approach where at the first level, pixels are classified into: text, background, decoration, and out of page; at the second level, text regions are split into text line and non-text line. Color and texture features based on Local Binary Patterns and Gabor Dominant Orientation are used for classification. By applying a modified Fast Correlation-Based Filter feature selection algorithm, redundant and irrelevant features are removed. Finally, the text line segmentation results are refined by a smoothing post-processing procedure. Unlike other projection profile or connected components methods, the proposed algorithm does not use any script-specific knowledge and is applicable to color images. The proposed algorithm is evaluated on three historical manuscript image datasets of diverse nature and achieved an average precision of 91% and recall of 84%. Experiments also show that the proposed algorithm is robust with respect to changes of the writing style, page layout, and noise on the image.},
    Author = {Kai Chen and Hao Wei and Marcus Liwicki and Jean Hennebert and Rolf Ingold},
    Booktitle = {22nd International Conference on Pattern Recognition - ICPR},
    Doi = {10.1109/ICPR.2014.514},
    Isbn = {9781479952106},
    Keywords = {Machine Learning, Document Understanding, Segmentation, features and descriptors, Texture and color analysis},
    Note = {Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.},
    Pages = {2978-2983},
    Publisher = {Institute of Electrical and Electronics Engineers ( IEEE )},
    Title = {{R}obust {T}ext {L}ine {S}egmentation for {H}istorical {M}anuscript {I}mages {U}sing {C}olor and {T}exture},
    Pdf = {http://www.hennebert.org/download/publications/icpr-2014-robust-text-line-segmentation-for-historical-manuscript-images-using-color-and-texture.pdf},
    Year = {2014},
    Bdsk-Url-2 = {http://dx.doi.org/10.1109/ICPR.2014.514}}

    In this paper we present a novel text line segmentation method for historical manuscript images. We use a pyramidal approach where at the first level, pixels are classified into: text, background, decoration, and out of page; at the second level, text regions are split into text line and non-text line. Color and texture features based on Local Binary Patterns and Gabor Dominant Orientation are used for classification. By applying a modified Fast Correlation-Based Filter feature selection algorithm, redundant and irrelevant features are removed. Finally, the text line segmentation results are refined by a smoothing post-processing procedure. Unlike other projection profile or connected components methods, the proposed algorithm does not use any script-specific knowledge and is applicable to color images. The proposed algorithm is evaluated on three historical manuscript image datasets of diverse nature and achieved an average precision of 91% and recall of 84%. Experiments also show that the proposed algorithm is robust with respect to changes of the writing style, page layout, and noise on the image.
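
    One of the texture cues, Local Binary Patterns, can be sketched with scikit-image; the patch and parameters below are illustrative only, not the paper's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Uniform LBP histogram of a grayscale patch, usable as a texture feature."""
    codes = local_binary_pattern(patch, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

rng = np.random.default_rng(0)
patch = (rng.random((32, 32)) * 255).astype(np.uint8)  # stand-in manuscript patch
print(lbp_histogram(patch).round(3))
```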

  • [PDF] [DOI] K. Chen and J. Hennebert, "Content-Based Image Retrieval with LIRe and SURF on a Smart-phone-Based Product Image Database," in Pattern Recognition, J. Martinez-Trinidad, J. Carrasco-Ochoa, J. Olvera-Lopez, J. Salas-Rodriguez, and C. Suen, Eds., Springer International Publishing, 2014, pp. 231-240.
    [Bibtex] [Abstract]
    @incollection{chen2014:mcpr,
    Abstract = {We present the evaluation of a product identification task using the LIRe system and SURF (Speeded-Up Robust Features) for content-based image retrieval (CBIR). The evaluation is performed on the Fribourg Product Image Database (FPID) that contains more than 3'000 pictures of consumer products taken using mobile phone cameras in realistic conditions. Using the evaluation protocol proposed with FPID, we explore the performance of different preprocessing and feature extraction methods. We observe that by using SURF, we can significantly improve the performance on this task. Image resizing and Lucene indexing are used in order to speed up the CBIR task with SURF. We also show the benefit of using simple preprocessing of the images, such as a proportional cropping of the images. The experiments demonstrate the effectiveness of the proposed method for the product identification task.},
    Author = {Kai Chen and Jean Hennebert},
    Booktitle = {Pattern Recognition},
    Doi = {10.1007/978-3-319-07491-7_24},
    Editor = {Martinez-Trinidad, Jos{\'e} Francisco and Carrasco-Ochoa, Jes{\'u}s Ariel and Olvera-Lopez, Jos{\'e} Arturo and Salas-Rodriguez, Joaquin and Suen, Ching Y.},
    Isbn = {9783319074900},
    Keywords = {cbir, image recognition, machine learning, product identification, smartphone-based image database, fpid, benchmarking},
    Note = {Lecture Notes in Computer Science. 6th Mexican Conference on Pattern Recognition (MCPR2014)},
    Pages = {231-240},
    Publisher = {Springer International Publishing},
    Title = {{C}ontent-{B}ased {I}mage {R}etrieval with {LIR}e and {SURF} on a {S}mart-phone-{B}ased {P}roduct {I}mage {D}atabase},
    Pdf = {http://www.hennebert.org/download/publications/mcpr-2014-content-based-image-retrieval-with-LIRe-and-SURF-on-a-smartphone-based-product-image-database.pdf},
    Year = {2014},
    Bdsk-Url-2 = {http://dx.doi.org/10.1007/978-3-319-07491-7_24}}

    We present the evaluation of a product identification task using the LIRe system and SURF (Speeded-Up Robust Features) for content-based image retrieval (CBIR). The evaluation is performed on the Fribourg Product Image Database (FPID) that contains more than 3'000 pictures of consumer products taken using mobile phone cameras in realistic conditions. Using the evaluation protocol proposed with FPID, we explore the performance of different preprocessing and feature extraction methods. We observe that by using SURF, we can significantly improve the performance on this task. Image resizing and Lucene indexing are used in order to speed up the CBIR task with SURF. We also show the benefit of using simple preprocessing of the images, such as a proportional cropping of the images. The experiments demonstrate the effectiveness of the proposed method for the product identification task.
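
    A minimal OpenCV retrieval sketch; ORB is used here as a freely available stand-in for SURF (which lives in opencv-contrib), and the images are synthetic, so this only illustrates the match-and-rank step, not the paper's LIRe pipeline.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
query = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
candidate = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)

orb = cv2.ORB_create(nfeatures=500)
_, des_q = orb.detectAndCompute(query, None)
_, des_c = orb.detectAndCompute(candidate, None)
if des_q is None or des_c is None:
    raise SystemExit("no descriptors found")

# Rank database images by how many descriptors match the query well.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_q, des_c)
good = sum(m.distance < 50 for m in matches)
print(f"{len(matches)} matches, {good} under the distance threshold")
```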

  • [PDF] [DOI] K. Chen, H. Wei, J. Hennebert, R. Ingold, and M. Liwicki, "Page Segmentation for Historical Handwritten Document Images Using Color and Texture Features," in International Conference on Frontiers in Handwriting Recognition (ICFHR), 2014, pp. 488-493.
    [Bibtex] [Abstract]
    @conference{chen2014:ICFHR,
    Abstract = {In this paper we present a physical structure detection method for historical handwritten document images. We consider layout analysis as a pixel labeling problem. By classifying each pixel as either periphery, background, text block, or decoration, we achieve high quality segmentation without any assumption of specific topologies and shapes. Various color and texture features such as color variance, smoothness, Laplacian, Local Binary Patterns, and Gabor Dominant Orientation Histogram are used for classification. Some of these features have so far received little attention for document image layout analysis. By applying an Improved Fast Correlation-Based Filter feature selection algorithm, the redundant and irrelevant features are removed. Finally, the segmentation results are refined by a smoothing post-processing procedure. The proposed method is demonstrated by experiments conducted on three different historical handwritten document image datasets. Experiments show the benefit of combining various color and texture features for classification. The results also show the advantage of using a feature selection method to choose an optimal feature subset. By applying the proposed method we achieve superior accuracy compared with earlier work on several datasets, e.g., we achieved 93% accuracy compared with 91% for the previous method on the Parzival dataset, which contains about 100 million pixels.},
    Author = {Kai Chen and Hao Wei and Jean Hennebert and Rolf Ingold and Marcus Liwicki},
    Booktitle = {International Conference on Frontiers in Handwriting Recognition (ICFHR)},
    Doi = {10.1109/ICFHR.2014.88},
    Isbn = {9781479978922},
    Keywords = {machine learning, image analysis},
    Pages = {488-493},
    Publisher = {Institute of Electrical and Electronics Engineers ( IEEE )},
    Title = {{P}age {S}egmentation for {H}istorical {H}andwritten {D}ocument {I}mages {U}sing {C}olor and {T}exture {F}eatures},
    Pdf = {http://www.hennebert.org/download/publications/icfhr-2014-page-segmentation-for-historical-handwritten-document-images-using-color-and-texture-features.pdf},
    Year = {2014},
    Bdsk-Url-2 = {http://dx.doi.org/10.1109/ICFHR.2014.88}}

    In this paper we present a physical structure detection method for historical handwritten document images. We consider layout analysis as a pixel labeling problem. By classifying each pixel as either periphery, background, text block, or decoration, we achieve high quality segmentation without any assumption of specific topologies and shapes. Various color and texture features such as color variance, smoothness, Laplacian, Local Binary Patterns, and Gabor Dominant Orientation Histogram are used for classification. Some of these features have so far received little attention for document image layout analysis. By applying an Improved Fast Correlation-Based Filter feature selection algorithm, the redundant and irrelevant features are removed. Finally, the segmentation results are refined by a smoothing post-processing procedure. The proposed method is demonstrated by experiments conducted on three different historical handwritten document image datasets. Experiments show the benefit of combining various color and texture features for classification. The results also show the advantage of using a feature selection method to choose an optimal feature subset. By applying the proposed method we achieve superior accuracy compared with earlier work on several datasets, e.g., we achieved 93% accuracy compared with 91% for the previous method on the Parzival dataset, which contains about 100 million pixels.
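
    The Improved Fast Correlation-Based Filter itself is not available in common libraries, so the sketch below substitutes a mutual-information filter from scikit-learn to illustrate the selection step, on synthetic features with deliberate redundancy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Stand-in for per-pixel color/texture features: some informative, some redundant.
X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           n_redundant=12, random_state=0)

selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
print(sorted(np.flatnonzero(selector.get_support())))  # indices of kept features
```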

  • [DOI] J. Chen, Y. Lu, I. Comsa, and P. Kuonen, "A scalability hierarchical fault tolerance strategy: Community Fault Tolerance," in 2014 20th International Conference on Automation and Computing, 2014, pp. 212-217.
    [Bibtex] [Abstract]
    @INPROCEEDINGS{chen:6935488,
    author={J. Chen and Y. Lu and I. Comsa and P. Kuonen},
    booktitle={2014 20th International Conference on Automation and Computing},
    title={A scalability hierarchical fault tolerance strategy: Community Fault Tolerance},
    year={2014},
    pages={212-217},
    abstract={Most hierarchical fault tolerance strategies do not pay much attention to the scalability of fault tolerance. In distributed systems, scalability is a very important feature: tolerating failures while the scale of the system changes is a common and important scenario. Nowadays, almost all cloud computing companies provide their computing services elastically, and adding or removing devices in order to provide different services happens all the time. In such a scenario, it is very important that the fault tolerance strategy is scalable. In this paper, we introduce dynamic programming ideas to build hierarchical regions as communities for the fault tolerance strategy and apply different strategies based on communities instead of single processes. We call this fault tolerance strategy Community Fault Tolerance. It not only reduces the memory overhead by eliminating the records of messages inside a community region, but also provides good scalability. The scalability of our strategy makes it easy to handle the addition or removal of devices in the distributed system.},
    keywords={cloud computing;dynamic programming;fault tolerant computing;cloud computing;community fault tolerance;distributed system;dynamic programming;memory overload reduction;scalability hierarchical fault tolerance strategy;Checkpointing;Communities;Dynamic programming;Fault tolerance;Fault tolerant systems;Parallel processing;Scalability;distributed system;dynamic programming;hierarchical fault tolerance;scalability},
    doi={10.1109/IConAC.2014.6935488},
    month={Sept},}

    Most hierarchical fault tolerance strategies do not pay much attention to the scalability of fault tolerance. In distributed systems, scalability is a very important feature: tolerating failures while the scale of the system changes is a common and important scenario. Nowadays, almost all cloud computing companies provide their computing services elastically, and adding or removing devices in order to provide different services happens all the time. In such a scenario, it is very important that the fault tolerance strategy is scalable. In this paper, we introduce dynamic programming ideas to build hierarchical regions as communities for the fault tolerance strategy and apply different strategies based on communities instead of single processes. We call this fault tolerance strategy Community Fault Tolerance. It not only reduces the memory overhead by eliminating the records of messages inside a community region, but also provides good scalability. The scalability of our strategy makes it easy to handle the addition or removal of devices in the distributed system.
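
    A toy sketch of where the memory saving comes from: only messages that cross a community boundary are logged for recovery, while intra-community traffic is not recorded. The communities and processes below are invented.

```python
from collections import defaultdict

community = {"p1": "A", "p2": "A", "p3": "B", "p4": "B"}  # hypothetical mapping
log = defaultdict(list)

def send(src: str, dst: str, msg: str) -> None:
    """Record a message only when it crosses a community boundary."""
    if community[src] != community[dst]:
        log[(community[src], community[dst])].append((src, dst, msg))

send("p1", "p2", "intra")  # not logged
send("p2", "p3", "inter")  # logged
print(dict(log))
```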

  • [DOI] I. S. Comşa, M. Aydin, S. Zhang, P. Kuonen, J. F. Wagen, and Y. Lu, "Scheduling policies based on dynamic throughput and fairness tradeoff control in LTE-A networks," in 39th Annual IEEE Conference on Local Computer Networks, 2014, pp. 418-421.
    [Bibtex] [Abstract]
    @INPROCEEDINGS{comsa:6925806,
    author={I. S. Comşa and M. Aydin and S. Zhang and P. Kuonen and J. F. Wagen and Y. Lu},
    booktitle={39th Annual IEEE Conference on Local Computer Networks},
    title={Scheduling policies based on dynamic throughput and fairness tradeoff control in LTE-A networks},
    year={2014},
    pages={418-421},
    abstract={In LTE-A cellular networks there is a fundamental trade-off between the cell throughput and the fairness level for preselected users sharing the same amount of resources at one transmission time interval (TTI). The static parameterization of the Generalized Proportional Fair (GPF) scheduling rule is not able to maintain a satisfactory level of fairness at each TTI when a very dynamic radio environment is considered. The novelty of the current paper lies in finding the optimal policy of GPF parameters in order to respect the fairness criterion. For sustainability reasons, the multi-layer perceptron neural network (MLPNN) is used to map at each TTI the continuous and multidimensional scheduler state into a desired GPF parameter. The MLPNN non-linear function is trained TTI-by-TTI based on the interaction between the LTE scheduler and the proposed intelligent controller. The interaction is modeled by using the reinforcement learning (RL) principle, in which the LTE scheduler behavior is modeled based on the Markov Decision Process (MDP) property. The continuous actor-critic learning automata (CACLA) RL algorithm is proposed to select at each TTI the continuous and optimal GPF parameter for a given MDP problem. The results indicate that CACLA enhances the convergence speed to the optimal fairness condition when compared with other existing methods, while at the same time minimizing the number of TTIs in which the scheduler is declared unfair.},
    keywords={Long Term Evolution;Markov processes;cellular radio;convergence;intelligent control;learning (artificial intelligence);learning automata;multilayer perceptrons;nonlinear functions;scheduling;CACLA RL algorithm;GPF parameters optimal policy;GPF scheduling rule;LTE scheduler behavior;LTE-A cellular network;MDP;MLPNN nonlinear function;Markov decision process;TTI;continuous actor-critic learning automata;convergence speed;dynamic cell throughput;dynamic radio environment;fairness tradeoff control;generalized proportional fair;intelligent controller;multidimensional scheduler state;multilayer perceptron neural network;optimal fairness condition;reinforcement learning principle;scheduling policy;transmission time interval;Approximation algorithms;Dynamic scheduling;Heuristic algorithms;Linear programming;Optimization;Telecommunication traffic;Throughput;CACLA;CQI;LTE-A;MDP;MLPNN;RL;TTI;fairness;policy;scheduling rule;throughput},
    doi={10.1109/LCN.2014.6925806},
    ISSN={0742-1303},
    month={Sept},}

    In LTE-A cellular networks there is a fundamental trade-off between the cell throughput and the fairness level for preselected users sharing the same amount of resources at one transmission time interval (TTI). The static parameterization of the Generalized Proportional Fair (GPF) scheduling rule is not able to maintain a satisfactory level of fairness at each TTI when a very dynamic radio environment is considered. The novelty of the current paper lies in finding the optimal policy of GPF parameters in order to respect the fairness criterion. For sustainability reasons, the multi-layer perceptron neural network (MLPNN) is used to map at each TTI the continuous and multidimensional scheduler state into a desired GPF parameter. The MLPNN non-linear function is trained TTI-by-TTI based on the interaction between the LTE scheduler and the proposed intelligent controller. The interaction is modeled by using the reinforcement learning (RL) principle, in which the LTE scheduler behavior is modeled based on the Markov Decision Process (MDP) property. The continuous actor-critic learning automata (CACLA) RL algorithm is proposed to select at each TTI the continuous and optimal GPF parameter for a given MDP problem. The results indicate that CACLA enhances the convergence speed to the optimal fairness condition when compared with other existing methods, while at the same time minimizing the number of TTIs in which the scheduler is declared unfair.
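
    The core CACLA rule can be written in a few lines: the critic performs a plain TD(0) update, and the actor moves toward the explored action only when the TD error is positive. The linear models, rewards and states below are placeholders, not the LTE-A simulator.

```python
import numpy as np

rng = np.random.default_rng(0)
w_actor = rng.normal(size=4)    # linear actor: state -> continuous GPF parameter
w_critic = rng.normal(size=4)   # linear critic: state -> value estimate
alpha, beta, gamma = 0.01, 0.01, 0.95

def cacla_step(s, s_next, reward, sigma=0.1):
    global w_actor, w_critic
    a_mean = w_actor @ s
    a = a_mean + rng.normal(0.0, sigma)               # Gaussian exploration
    td = reward + gamma * (w_critic @ s_next) - w_critic @ s
    w_critic += beta * td * s                         # critic: TD(0) update
    if td > 0:                                        # CACLA: learn only when improving
        w_actor += alpha * (a - a_mean) * s
    return a

s = rng.normal(size=4)
for _ in range(100):
    s_next = rng.normal(size=4)
    cacla_step(s, s_next, reward=rng.normal())
    s = s_next
```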

  • [PDF] [DOI] C. Gisler, A. Ridi, M. Fauquey, D. Genoud, and J. Hennebert, "Towards Glaucoma Detection Using Intraocular Pressure Monitoring," in The 6th International Conference of Soft Computing and Pattern Recognition (SoCPaR 2014), 2014, pp. 255-260.
    [Bibtex] [Abstract]
    @conference{gisler2014:socpar,
    Abstract = {Diagnosing glaucoma is a very difficult task for healthcare professionals. High intraocular pressure (IOP) remains the main treatable symptom of this degenerative disease, which leads to blindness. Nowadays, new types of wearable sensors, such as the contact lens sensor Triggerfish{\textregistered}, provide an automated recording of the 24-hour profile of ocular dimensional changes related to IOP. Through several clinical studies, more and more IOP-related profiles have been recorded by those sensors and made available for elaborating data-driven experiments. The objective of such experiments is to analyse and detect IOP pattern differences between ill and healthy subjects. The potential is to provide medical doctors with analysis and detection tools allowing them to better diagnose and treat glaucoma. In this paper we present the methodologies, signal processing and machine learning algorithms elaborated for the task of automated detection of glaucomatous IOP-related profiles within a set of 100 24-hour recordings. As a first convincing result, we obtained a classification ROC AUC of 81.5%.},
    Author = {Christophe Gisler and Antonio Ridi and Mil{\`e}ne Fauquey and Dominique Genoud and Jean Hennebert},
    Booktitle = {The 6th International Conference of Soft Computing and Pattern Recognition (SoCPaR 2014)},
    Doi = {10.1109/SOCPAR.2014.7008015},
    Isbn = {9781479959358},
    Keywords = {Biomedical signal processing, Glaucoma diagnosis, Machine learning},
    Pages = {255-260},
    Publisher = {Institute of Electrical and Electronics Engineers ( IEEE )},
    Title = {{T}owards {G}laucoma {D}etection {U}sing {I}ntraocular {P}ressure {M}onitoring},
    Pdf = {http://www.hennebert.org/download/publications/socpar-2014-towards-glaucoma-detection-using-intraocular-pressure-monitoring.pdf},
    Year = {2014},
    Bdsk-Url-2 = {http://dx.doi.org/10.1109/SOCPAR.2014.7008015}}

    Diagnosing glaucoma is a very difficult task for healthcare professionals. High intraocular pressure (IOP) remains the main treatable symptom of this degenerative disease, which leads to blindness. Nowadays, new types of wearable sensors, such as the contact lens sensor Triggerfish®, provide an automated recording of the 24-hour profile of ocular dimensional changes related to IOP. Through several clinical studies, more and more IOP-related profiles have been recorded by those sensors and made available for elaborating data-driven experiments. The objective of such experiments is to analyse and detect IOP pattern differences between ill and healthy subjects. The potential is to provide medical doctors with analysis and detection tools allowing them to better diagnose and treat glaucoma. In this paper we present the methodologies, signal processing and machine learning algorithms elaborated for the task of automated detection of glaucomatous IOP-related profiles within a set of 100 24-hour recordings. As a first convincing result, we obtained a classification ROC AUC of 81.5%.
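
    The evaluation step can be sketched with scikit-learn, computing a cross-validated ROC AUC over stand-in summary features of the 24-hour recordings; the real profiles and features are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))       # stand-in features of 100 recordings
y = rng.integers(0, 2, size=100)     # 1 = glaucomatous, 0 = healthy

proba = cross_val_predict(RandomForestClassifier(random_state=0), X, y,
                          cv=5, method="predict_proba")[:, 1]
print(f"ROC AUC: {roc_auc_score(y, proba):.3f}")
```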

  • Y. Lu, I. Comsa, P. Kuonen, and B. Hirsbrunner, "Construction of Data Aggregation Tree for Multi-objectives in Wireless Sensor Networks through Jump Particle Swarm Optimization," Procedia Computer Science, pp. 73–82, 2014.
    [Bibtex]
    @article{Yao:2309,
    Author = {Yao Lu and Ioan-Sorin Comsa and Pierre Kuonen and Beat Hirsbrunner},
    Journal = {Procedia Computer Science},
    Keywords = {PSO},
    Month = {dec},
    Pages = {73--82},
    Title = {Construction of Data Aggregation Tree for Multi-objectives in Wireless Sensor Networks through Jump Particle Swarm Optimization},
    Year = {2014}}
  • [PDF] [DOI] A. Ridi, C. Gisler, and J. Hennebert, "ACS-F2 - A new database of appliance consumption signatures," in Soft Computing and Pattern Recognition (SoCPaR), 2014 6th International Conference of, 2014, pp. 145-150.
    [Bibtex] [Abstract]
    @conference{ridi2014socpar,
    Abstract = {We present ACS-F2, a new electric consumption signature database acquired from domestic appliances. The scenario of use is appliance identification, with emerging applications such as domestic electricity consumption understanding, load shedding management and indirect human activity monitoring. The novelty of our work is to use low-end electricity consumption sensors typically located at the plug. Our approach consists in acquiring signatures at a low frequency, which contrasts with high frequency transient analysis approaches that are costlier and have been well studied in former research works. Electrical consumption signatures comprise real power, reactive power, RMS current, RMS voltage, frequency and phase of voltage relative to current. A total of 225 appliances were recorded over two sessions of one hour. The database is balanced with 15 different brands/models spread into 15 categories. Two realistic appliance recognition protocols are proposed and the database is made freely available to the scientific community for experiment reproducibility. We also report on recognition results following these protocols and using baseline recognition algorithms like k-NN and GMM.},
    Author = {A. Ridi and C. Gisler and J. Hennebert},
    Booktitle = {Soft Computing and Pattern Recognition (SoCPaR), 2014 6th International Conference of},
    Doi = {10.1109/SOCPAR.2014.7007996},
    Isbn = {9781479959358},
    Keywords = {machine learning, electric signal, appliance signatures},
    Month = {Aug},
    Pages = {145-150},
    Publisher = {IEEE},
    Title = {{ACS}-{F}2 - {A} new database of appliance consumption signatures},
    Pdf = {http://www.hennebert.org/download/publications/socpar-2014-ACS-F2-a-new-databas-of-appliance-consumption-signatures.pdf},
    Year = {2014},
    Bdsk-Url-2 = {http://dx.doi.org/10.1109/SOCPAR.2014.7007996}}

    We present ACS-F2, a new electric consumption signature database acquired from domestic appliances. The scenario of use is appliance identification, with emerging applications such as domestic electricity consumption understanding, load shedding management and indirect human activity monitoring. The novelty of our work is to use low-end electricity consumption sensors typically located at the plug. Our approach consists in acquiring signatures at a low frequency, which contrasts with high frequency transient analysis approaches that are costlier and have been well studied in former research works. Electrical consumption signatures comprise real power, reactive power, RMS current, RMS voltage, frequency and phase of voltage relative to current. A total of 225 appliances were recorded over two sessions of one hour. The database is balanced with 15 different brands/models spread into 15 categories. Two realistic appliance recognition protocols are proposed and the database is made freely available to the scientific community for experiment reproducibility. We also report on recognition results following these protocols and using baseline recognition algorithms like k-NN and GMM.
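
    A baseline along the lines of the reported k-NN could look like this scikit-learn sketch; the data shapes mimic ACS-F2 (225 appliances, 15 categories, six measured quantities summarized by mean and standard deviation), but all values are synthetic.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(225, 12))     # mean/std of the 6 measured quantities
y = rng.integers(0, 15, size=225)  # 15 appliance categories

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
print(f"accuracy: {knn.fit(X_tr, y_tr).score(X_te, y_te):.2f}")
```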

  • [PDF] [DOI] A. Ridi, C. Gisler, and J. Hennebert, "Appliance and State Recognition using Hidden Markov Models," in The 2014 International Conference on Data Science and Advanced Analytics (DSAA 2014), Shanghai, China, 2014, pp. 270-276.
    [Bibtex] [Abstract]
    @conference{ridi2014dsaa,
    Abstract = {We present an analysis of electrical appliance consumption signatures for the identification task. We apply Hidden Markov Models to appliance signatures for the identification of their category and of the most probable sequence of states. The electrical signatures are measured at low frequency (10^-1 Hz) and are sourced from a specific database. We follow two predefined protocols for providing comparable results. Recovering information on the actual appliance state makes it possible to adopt energy-saving measures, such as switching off stand-by appliances or, generally speaking, changing their state. Moreover, in most cases appliance states are related to user activities: the user interaction usually involves a transition of the appliance state. Information about the state transition could be useful in Smart Home / Building Systems to reduce energy consumption and increase human comfort. We report the results of the classification tasks in terms of confusion matrices and accuracy rates. Finally, we present our application for real-time data visualization and the recognition of the appliance category with its actual state.},
    Address = {Shanghai, China},
    Author = {Antonio Ridi and Christophe Gisler and Jean Hennebert},
    Booktitle = {The 2014 International Conference on Data Science and Advanced Analytics (DSAA 2014)},
    Doi = {10.1109/DSAA.2014.7058084},
    Isbn = {9781479969821},
    Keywords = {Appliance Identification, Appliance State Recognition, Intrusive Load Monitoring, ILM},
    Month = {10/2014},
    Note = {Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.},
    Pages = {270-276},
    Publisher = {Institute of Electrical and Electronics Engineers ( IEEE )},
    Title = {{A}ppliance and {S}tate {R}ecognition using {H}idden {M}arkov {M}odels},
    Pdf = {http://www.hennebert.org/download/publications/DSAA-2014-appliance-and-state-recognition-using-hidden-markov-models.pdf},
    Year = {2014},
    Bdsk-Url-2 = {http://dx.doi.org/10.1109/DSAA.2014.7058084}}

    We present an analysis of electrical appliance consumption signatures for the identification task. We apply Hidden Markov Models to appliance signatures for the identification of their category and of the most probable sequence of states. The electrical signatures are measured at low frequency (10^-1 Hz) and are sourced from a specific database. We follow two predefined protocols for providing comparable results. Recovering information on the actual appliance state makes it possible to adopt energy-saving measures, such as switching off stand-by appliances or, generally speaking, changing their state. Moreover, in most cases appliance states are related to user activities: the user interaction usually involves a transition of the appliance state. Information about the state transition could be useful in Smart Home / Building Systems to reduce energy consumption and increase human comfort. We report the results of the classification tasks in terms of confusion matrices and accuracy rates. Finally, we present our application for real-time data visualization and the recognition of the appliance category with its actual state.
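
    State decoding with an HMM can be sketched with hmmlearn; the paper does not name its toolkit, and the two-regime signature below (off/on power levels) is synthetic.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Stand-in low-frequency signature: (real power, RMS current) with two regimes.
off = rng.normal([5.0, 0.1], 0.5, size=(60, 2))
on = rng.normal([80.0, 0.9], 2.0, size=(60, 2))
X = np.vstack([off, on, off])

model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50,
                    random_state=0).fit(X)
print(model.predict(X))   # most probable (Viterbi) state sequence
```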

  • [PDF] [DOI] A. Ridi and J. Hennebert, "Hidden Markov Models for ILM Appliance Identification," in The 5th International Conference on Ambient Systems, Networks and Technologies (ANT-2014), the 4th International Conference on Sustainable Energy Information Technology (SEIT-2014), 2014, p. 1010–1015.
    [Bibtex] [Abstract]
    @conference{ridi2014:ant,
    Abstract = {The automatic recognition of appliances through the monitoring of their electricity consumption finds many applications in smart buildings. In this paper we discuss the use of Hidden Markov Models (HMMs) for appliance recognition using so-called intrusive load monitoring (ILM) devices. Our motivation is found in the observation of electric signatures of appliances, which usually show time-varying profiles depending on the use made of the appliance or on its intrinsic internal operation. To determine the benefit of such modelling, we propose a comparison of stateless modelling based on Gaussian mixture models and state-based models using Hidden Markov Models. The comparison is run on the publicly available database ACS-F1. We also compare different approaches to determine the best model topologies. More specifically, we compare the use of a priori information on the device, a procedure based on a criterion of log-likelihood maximization, and a heuristic approach.},
    Author = {Antonio Ridi and Jean Hennebert},
    Booktitle = {The 5th International Conference on Ambient Systems, Networks and Technologies (ANT-2014), the 4th International Conference on Sustainable Energy Information Technology (SEIT-2014)},
    Doi = {10.1016/j.procs.2014.05.526},
    Issn = {1877-0509},
    Keywords = {Hidden Markov Models, appliance recognition, Intrusive Load Monitoring},
    Note = {Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.},
    Pages = {1010--1015},
    Series = {Procedia Computer Science},
    Title = {{H}idden {M}arkov {M}odels for {ILM} {A}ppliance {I}dentification},
    Pdf = {http://www.hennebert.org/download/publications/ant-ictsb-2014-hidden-markov-models-for-ILM-appliance-identification.pdf},
    Volume = {32},
    Year = {2014},
    Bdsk-Url-2 = {http://dx.doi.org/10.1016/j.procs.2014.05.526}}

    The automatic recognition of appliances through the monitoring of their electricity consumption finds many applications in smart buildings. In this paper we discuss the use of Hidden Markov Models (HMMs) for appliance recognition using so-called intrusive load monitoring (ILM) devices. Our motivation is found in the observation of electric signatures of appliances, which usually show time-varying profiles depending on the use made of the appliance or on its intrinsic internal operation. To determine the benefit of such modelling, we propose a comparison of stateless modelling based on Gaussian mixture models and state-based models using Hidden Markov Models. The comparison is run on the publicly available database ACS-F1. We also compare different approaches to determine the best model topologies. More specifically, we compare the use of a priori information on the device, a procedure based on a criterion of log-likelihood maximization, and a heuristic approach.
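
    One of the three topology-selection strategies compared above, the log-likelihood maximization procedure, can be sketched as a simple model-selection loop. This is a hypothetical reconstruction using hmmlearn; the candidate range and data layout are assumptions, not the paper's setup.

    from hmmlearn import hmm
    import numpy as np

    def select_n_states(train_seqs, val_seqs, candidates=range(2, 9)):
        """Pick the number of HMM states maximizing held-out log-likelihood."""
        X_tr, len_tr = np.vstack(train_seqs), [len(s) for s in train_seqs]
        X_va, len_va = np.vstack(val_seqs), [len(s) for s in val_seqs]
        best_n, best_ll = None, -np.inf
        for n in candidates:
            m = hmm.GaussianHMM(n_components=n, covariance_type="diag", n_iter=50)
            m.fit(X_tr, len_tr)
            ll = m.score(X_va, len_va)
            if ll > best_ll:
                best_n, best_ll = n, ll
        return best_n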

  • [PDF] [DOI] A. Ridi, C. Gisler, and J. Hennebert, "A Survey on Intrusive Load Monitoring for Appliance Recognition," in 22nd International Conference on Pattern Recognition - ICPR, 2014, pp. 3702-3707.
    [Bibtex] [Abstract]
    @conference{ridi2014:icpr,
    Abstract = {Electricity load monitoring of appliances has become an important task considering recent economic and ecological trends. Here, machine learning has an important part to play, allowing for energy consumption understanding, critical equipment monitoring and even human activity recognition. This paper provides a survey of current research on Intrusive Load Monitoring (ILM) techniques. ILM relies on low-end electricity meter devices spread inside the home, as opposed to Non-Intrusive Load Monitoring (NILM), which relies on a unique point of measurement, the smart meter. Potential applications and principles of ILM are presented and compared to NILM. A focus is also given to feature extraction and machine learning algorithms typically used in ILM applications.},
    Author = {Antonio Ridi and Christophe Gisler and Jean Hennebert},
    Booktitle = {22nd International Conference on Pattern Recognition - ICPR},
    Doi = {10.1109/ICPR.2014.636},
    Isbn = {9781479952106},
    Keywords = {Machine Learning, Intrusive Load Monitoring, ILM, IT for efficiency, Green Computing},
    Month = {August},
    Note = {Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.},
    Organization = {IEEE},
    Pages = {3702-3707},
    Title = {{A} {S}urvey on {I}ntrusive {L}oad {M}onitoring for {A}ppliance {R}ecognition},
    Pdf = {http://www.hennebert.org/download/publications/icpr-2014-a-survey-on-intrusive-load-monitoring-for-appliance-recognition.pdf},
    Year = {2014},
    Bdsk-Url-2 = {http://dx.doi.org/10.1109/ICPR.2014.636}}

    Electricity load monitoring of appliances has become an important task considering recent economic and ecological trends. Here, machine learning has an important part to play, allowing for energy consumption understanding, critical equipment monitoring and even human activity recognition. This paper provides a survey of current research on Intrusive Load Monitoring (ILM) techniques. ILM relies on low-end electricity meter devices spread inside the home, as opposed to Non-Intrusive Load Monitoring (NILM), which relies on a unique point of measurement, the smart meter. Potential applications and principles of ILM are presented and compared to NILM. A focus is also given to feature extraction and machine learning algorithms typically used in ILM applications.

  • [PDF] [DOI] B. Wicht and J. Hennebert, "Camera-based Sudoku Recognition with Deep Belief Network," in 2014 6th International Conference of Soft Computing and Pattern Recognition (SoCPaR 2014), 2014, pp. 83-88.
    [Bibtex] [Abstract]
    @conference{wicht2014:socpar,
    Abstract = {In this paper, we propose a method to detect and recognize a Sudoku puzzle on images taken from a mobile camera. The lines of the grid are detected with a Hough transform. The grid is then recomposed from the lines. The digit positions are extracted from the grid and, finally, each character is recognized using a Deep Belief Network (DBN). To test our implementation, we collected and made public a dataset of Sudoku images coming from cell phones. Our method proved successful on our dataset, achieving 87.5% correct detection on the testing set. Only 0.37% of the cells were incorrectly guessed. The algorithm is capable of handling some alterations often present in phone-based images, such as distortion, perspective, shadows, illumination gradients or scaling. On average, our solution is able to produce a result from a Sudoku in less than 100 ms.},
    Author = {Baptiste Wicht and Jean Hennebert},
    Booktitle = {2014 6th International Conference of Soft Computing and Pattern Recognition (SoCPaR 2014)},
    Doi = {10.1109/SOCPAR.2014.7007986},
    Isbn = {9781479959358},
    Keywords = {Machine Learning, DBN, Deep Belief Network, Image Recognition, Text Detection, Text Recognition},
    Pages = {83-88},
    Publisher = {Institute of Electrical and Electronics Engineers ( IEEE )},
    Title = {{C}amera-based {S}udoku {R}ecognition with {D}eep {B}elief {N}etwork},
    Pdf = {http://www.hennebert.org/download/publications/socpar-2014-camera-based-sudoku-recognition-with-deep-belief-network.pdf},
    Year = {2014},
    Bdsk-Url-2 = {http://dx.doi.org/10.1109/SOCPAR.2014.7007986}}

    In this paper, we propose a method to detect and recognize a Sudoku puzzle on images taken from a mobile camera. The lines of the grid are detected with a Hough transform. The grid is then recomposed from the lines. The digit positions are extracted from the grid and, finally, each character is recognized using a Deep Belief Network (DBN). To test our implementation, we collected and made public a dataset of Sudoku images coming from cell phones. Our method proved successful on our dataset, achieving 87.5% correct detection on the testing set. Only 0.37% of the cells were incorrectly guessed. The algorithm is capable of handling some alterations often present in phone-based images, such as distortion, perspective, shadows, illumination gradients or scaling. On average, our solution is able to produce a result from a Sudoku in less than 100 ms.
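
    The grid-detection stage described above (edge detection followed by a Hough transform) can be sketched in a few lines of OpenCV. Thresholds and the input file name are illustrative guesses; the grid recomposition and the DBN digit classifier are not shown.

    import cv2
    import numpy as np

    img = cv2.imread("sudoku.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)  # (rho, theta) line candidates
    if lines is not None:
        for rho, theta in lines[:, 0]:
            # Near-horizontal and near-vertical lines are candidates for the grid.
            print(f"rho={rho:.1f}, theta={theta:.2f}")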

  • B. Wolf, P. Kuonen, and T. Dandekar, "POP-Java : Parallélisme et distribution orienté objet," Compas2014 (Conférence d'informatique en Parallélisme, Architecture et Système), 2014.
    [Bibtex]
    @article{Beat:1856,
    author = "Wolf, Beat and Kuonen, Pierre and Dandekar, Thomas",
    title = "POP-Java : Parallélisme et distribution orienté objet:
    Compas2014 (Conférence d'informatique en Parallélisme,
    Architecture et Système) . Compas2014 (Conférence
    d'informatique en Parallélisme, Architecture et Système) ",
    month = "avr",
    year = "2014",
    journal = "Compas2014 (Conférence d'informatique en Parallélisme,
    Architecture et Système) ",
    }
  • [PDF] [DOI] O. Zayene, S. M. Touj, J. Hennebert, R. Ingold, and N. E. Ben Amara, "Semi-automatic news video annotation framework for Arabic text," in 2014 4th International Conference on Image Processing Theory, Tools and Applications (IPTA), 2014, pp. 1-6.
    [Bibtex] [Abstract]
    @INPROCEEDINGS{zayene2014:ipta,
    author={O. Zayene and S. M. Touj and J. Hennebert and R. Ingold and N. E. Ben Amara},
    booktitle={2014 4th International Conference on Image Processing Theory, Tools and Applications (IPTA)},
    title={Semi-automatic news video annotation framework for Arabic text},
    year={2014},
    pages={1-6},
    abstract={In this paper, we present a semi-automatic news video annotation tool. The tool and its algorithms are dedicated to artificial Arabic text embedded in news videos, in the form of both static and scrolling text. Annotation is performed at two different levels. Taking the specificities of Arabic script into account, the tool manages a global level, which concerns the entire video, and a local level, which concerns any specific frame extracted from the video. The global annotation is performed manually through a user interface. As a result of this step, we obtain a global XML file. The local annotation at the frame level is done automatically, according to the information contained in the global metafile and a proposed text tracking algorithm. The main application of our tool is the ground truthing of textual information in video content. It is being used for this purpose in the Arabic Text in Video (AcTiV) database project in our lab. One of the functions that AcTiV provides is a benchmark to compare existing and future Arabic video OCR systems.},
    keywords={XML;electronic publishing;natural language processing;text analysis;video signal processing;visual databases;AcTiV database;Arabic script;Arabic text in video database project;Arabic video OCR systems;artificial Arabic text;global XML file;global annotation;global level;global meta file;ground truthing;local level;scrolling text;semiautomatic news video annotation framework;static text;text tracking algorithm;textual information;user interface;Databases;Educational institutions;Heuristic algorithms;Optical character recognition software;Streaming media;User interfaces;XML;Benchmarking VideoOCR systems;annotation;artificial Arabic text;data sets},
    doi={10.1109/IPTA.2014.7001963},
    ISSN={2154-5111},
    month={Oct},
    pdf={http://www.hennebert.org/download/publications/ipta-2014-semi-automatic-news-video-annotation-framework-for-arabic-text.pdf},}

    In this paper, we present a semi-automatic news video annotation tool. The tool and its algorithms are dedicated to artificial Arabic text embedded in news videos, in the form of both static and scrolling text. Annotation is performed at two different levels. Taking the specificities of Arabic script into account, the tool manages a global level, which concerns the entire video, and a local level, which concerns any specific frame extracted from the video. The global annotation is performed manually through a user interface. As a result of this step, we obtain a global XML file. The local annotation at the frame level is done automatically, according to the information contained in the global metafile and a proposed text tracking algorithm. The main application of our tool is the ground truthing of textual information in video content. It is being used for this purpose in the Arabic Text in Video (AcTiV) database project in our lab. One of the functions that AcTiV provides is a benchmark to compare existing and future Arabic video OCR systems.
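
    The global annotation step described above produces a video-level XML metafile. A minimal sketch of writing such a file is given below; the tag and attribute names are invented for illustration, and the actual AcTiV schema may differ.

    import xml.etree.ElementTree as ET

    # Hypothetical video-level annotation: one entry per embedded text line.
    root = ET.Element("video", name="news_example.mp4", fps="25")
    line = ET.SubElement(root, "textline", id="1", kind="scrolling")
    ET.SubElement(line, "content").text = "example headline text"
    ET.SubElement(line, "appearance", start_frame="120", end_frame="480")
    ET.ElementTree(root).write("global_annotation.xml", encoding="utf-8",
                               xml_declaration=True)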

  • [PDF] G. Bovet and J. Hennebert, "Web-of-Things Gateways for KNX and EnOcean Networks," in International Conference on Cleantech for Smart Cities & Buildings from Nano to Urban Scale (CISBAT 2013), 2013, pp. 519-524.
    [Bibtex] [Abstract]
    @conference{bovet2013:cisbat,
    author = "G{\'e}r{\^o}me Bovet and Jean Hennebert",
    abstract = "Smart buildings tend to democratize both in new and renovated constructions aiming at minimizing energy consumption and maximizing comfort. They rely on dedicated networks of sensors and actuators orchestrated by management systems. Those systems tend to migrate from simple reactive control to complex predictive systems using self- learning algorithms requiring access to history data. The underlying building networks are often heterogeneous, leading to complex software systems having to implement all the available protocols and resulting in low system integration and heavy maintenance efforts. Typical building networks offer no common standardized application layer for building applications. This is not only true for data access but also for functionality discovery. They base on specific protocols for each technology, that are requiring expert knowledge when building software applications on top of them. The emerging Web-of-Things (WoT) framework, using well-known technologies like HTTP and RESTful APIs to offer a simple and homogeneous application layer must be considered as a strong candidate for standardization purposes. In this work, we defend the position that the WoT framework is an excellent candidate to elaborate next generation BMS systems, mainly due to the simplicity and universality of the telecommunication and application protocols. Further to this, we investigate the possibility to implement a gateway allowing access to devices connected to KNX and EnOcean networks in a Web-of-Things manner. By taking advantage of the bests practices of the WoT, we show the possibility of a fast integration of KNX in every control system. The elaboration of WoT gateways for EnOcean network presents further challenges that are described in the paper, essentially due to optimization of the underlying communication protocol.",
    booktitle = "International Conference on Cleantech for Smart Cities {{\&}} Buildings from Nano to Urban Scale (CISBAT 2013)",
    keywords = "IT for Sustainability, Smart Buildings, Web-of-Things, RESTful, KNX, EnOcean, Gateways",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "519-524",
    title = "{W}eb-of-{T}hings {G}ateways for {KNX} and {E}n{O}cean {N}etworks",
    Pdf = "http://www.hennebert.org/download/publications/cisbat-2013-web-of-things-gateways-for-knx-and-enocean-networks.pdf",
    year = "2013",
    }

    Smart buildings are becoming widespread in both new and renovated constructions, aiming at minimizing energy consumption and maximizing comfort. They rely on dedicated networks of sensors and actuators orchestrated by management systems. Those systems tend to migrate from simple reactive control to complex predictive systems using self-learning algorithms requiring access to history data. The underlying building networks are often heterogeneous, leading to complex software systems that have to implement all the available protocols, resulting in low system integration and heavy maintenance efforts. Typical building networks offer no common standardized application layer for building applications. This is true not only for data access but also for functionality discovery. They rely on specific protocols for each technology, which require expert knowledge when building software applications on top of them. The emerging Web-of-Things (WoT) framework, using well-known technologies like HTTP and RESTful APIs to offer a simple and homogeneous application layer, must be considered as a strong candidate for standardization purposes. In this work, we defend the position that the WoT framework is an excellent candidate for elaborating next-generation BMS systems, mainly due to the simplicity and universality of the telecommunication and application protocols. Further to this, we investigate the possibility of implementing a gateway allowing access to devices connected to KNX and EnOcean networks in a Web-of-Things manner. By taking advantage of the best practices of the WoT, we show the possibility of a fast integration of KNX in every control system. The elaboration of WoT gateways for EnOcean networks presents further challenges that are described in the paper, essentially due to optimization of the underlying communication protocol.
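
    The core WoT idea above, exposing a building-network device as a RESTful Web resource, can be sketched with a tiny HTTP gateway. This is a minimal sketch using Flask; the route layout and the read_knx_temperature() helper are hypothetical, and a real gateway would translate such calls into KNX or EnOcean telegrams.

    from flask import Flask, jsonify

    app = Flask(__name__)

    def read_knx_temperature():
        return 21.5  # placeholder for a value read from the KNX bus

    @app.route("/rooms/a401/temperature")
    def temperature():
        # The sensor appears as a plain Web resource, hiding the bus protocol.
        return jsonify(value=read_knx_temperature(), unit="celsius")

    if __name__ == "__main__":
        app.run(port=8080)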

  • [PDF] [DOI] G. Bovet and J. Hennebert, "Energy-Efficient Optimization Layer for Event-Based Communications on Wi-Fi Things," Procedia Computer Science, vol. 19, pp. 256-264, 2013.
    [Bibtex] [Abstract]
    @article{bovet:2013:ant:procedia,
    author = "G{\'e}r{\^o}me Bovet and Jean Hennebert",
    abstract = "The Web-of-Things or WoT offers a way to standardize the access to services embedded on everyday objects, leveraging on well accepted standards of the Web such as HTTP and REST services. The WoT offers new ways to build mashups of object services, notably in smart buildings composed of sensors and actuators. Many things are now taking advantage of the progresses of embedded systems relying on the ubiquity of Wi-Fi networks following the 802.11 standards. Such things are often battery powered and the question of energy efficiency is therefore critical. In our research, we believe that several optimizations can be applied in the application layer to optimize the energy consumption of things. More specifically in this paper, we propose an hybrid layer automatically selecting the most appropriate communication protocol between current standards of WoT. Our results show that indeed not all protocols are equivalent in terms of energy consumption, and that some noticeable energy saves can be achieved by using our hybrid layer. ",
    doi = "/10.1016/j.procs.2013.06.037",
    issn = "1877-0509",
    journal = "Procedia Computer Science ",
    keywords = "Web-of-Things, RESTful services, WebSockets, CoAP, Energy efficiency, Smart buildings",
    note = "The 4th International Conference on Ambient Systems, Networks and Technologies (ANT 2013), the 3rd International Conference on Sustainable Energy Information Technology (SEIT-2013).
    Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "256-264",
    title = "{E}nergy-{E}fficient {O}ptimization {L}ayer for {E}vent-{B}ased {C}ommunications on {W}i-{F}i {T}hings ",
    Pdf = "http://www.hennebert.org/download/publications/ant-procedia-2013-energy-efficient-optimization-layer-for-event-based-communications-on-wi-fi-things.pdf",
    volume = "19",
    year = "2013",
    }

    The Web-of-Things (WoT) offers a way to standardize access to services embedded on everyday objects, leveraging well-accepted standards of the Web such as HTTP and REST services. The WoT offers new ways to build mashups of object services, notably in smart buildings composed of sensors and actuators. Many things now take advantage of the progress in embedded systems, relying on the ubiquity of Wi-Fi networks following the 802.11 standards. Such things are often battery powered and the question of energy efficiency is therefore critical. In our research, we believe that several optimizations can be applied at the application layer to optimize the energy consumption of things. More specifically in this paper, we propose a hybrid layer automatically selecting the most appropriate communication protocol among current WoT standards. Our results show that not all protocols are equivalent in terms of energy consumption, and that noticeable energy savings can be achieved by using our hybrid layer.
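
    The decision made by such a hybrid layer can be reduced to comparing estimated energy costs per protocol. The sketch below is purely illustrative: the cost constants are invented, whereas the paper derives them from measurements on real Wi-Fi things.

    # Assumed per-message / per-second energy costs (arbitrary units).
    COST_POLL = 2.0        # one REST poll
    COST_PUSH = 0.5        # one pushed WebSocket message
    COST_KEEPALIVE = 0.1   # holding the WebSocket open, per second

    def cheaper_protocol(events_per_second, polls_per_second):
        rest_cost = polls_per_second * COST_POLL
        ws_cost = events_per_second * COST_PUSH + COST_KEEPALIVE
        return "websocket" if ws_cost < rest_cost else "rest"

    print(cheaper_protocol(events_per_second=0.2, polls_per_second=1.0))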

  • [PDF] [DOI] G. Bovet and J. Hennebert, "An Energy Efficient Layer for Event-Based Communications in Web-of-Things Frameworks," in The 7th FTRA International Conference on Multimedia and Ubiquitous Engineering (MUE 2013), Springer - Lecture Notes in Electrical Engineering, 2013, vol. 240, pp. 93-101.
    [Bibtex] [Abstract]
    @inbook{bovet:2013:mue,
    author = "G{\'e}r{\^o}me Bovet and Jean Hennebert",
    abstract = "Leveraging on the Web-of-Things (WoT) allows standardizing the access of things from an application level point of view. The protocols of the Web and especially HTTP are offering new ways to build mashups of things consisting of sensors and actuators. Two communication protocols are now emerging in the WoT domain for event-based data exchange, namely WebSockets and RESTful APIs. In this work, we motivate and demonstrate the use of a hybrid layer able to choose dynamically the most energy efficient protocol.",
    booktitle = "The 7th FTRA International Conference on Multimedia and Ubiquitous Engineering (MUE 2013)",
    doi = "10.1007/978-94-007-6738-6_12",
    isbn = "9789400767379",
    keywords = "Web-of-Things, RESTful services, WebSockets",
    month = "May",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = "3",
    pages = "93-101",
    publisher = "Springer - Lecture Notes in Electrical Engineering",
    series = "Multimedia and Ubiquitous Engineering",
    title = "{A}n {E}nergy {E}fficient {L}ayer for {E}vent-{B}ased {C}ommunications in {W}eb-of-{T}hings {F}rameworks",
    Pdf = "http://www.hennebert.org/download/publications/mue-2013-an-energy-efficient-layer-for-event-based-communications-in-web-of-things-frameworks.pdf",
    volume = "240",
    year = "2013",
    }

    Leveraging the Web-of-Things (WoT) allows standardizing access to things from an application-level point of view. The protocols of the Web, and especially HTTP, offer new ways to build mashups of things consisting of sensors and actuators. Two communication protocols are now emerging in the WoT domain for event-based data exchange, namely WebSockets and RESTful APIs. In this work, we motivate and demonstrate the use of a hybrid layer able to dynamically choose the most energy-efficient protocol.

  • [PDF] [DOI] G. Bovet and J. Hennebert, "Offering Web-of-things Connectivity to Building Networks," in Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication, New York, NY, USA, 2013, pp. 1555-1564.
    [Bibtex] [Abstract]
    @conference{bovet2013:wot,
    author = "G{\'e}r{\^o}me Bovet and Jean Hennebert",
    abstract = "Building management systems (BMS) are nowadays present in new and renovated buildings, relying on dedicated networks. The presence of various building networks leads to problems of heterogeneity, especially for developing BMS. In this paper, we propose to leverage on the Web-of-Things (WoT) framework, using well-known standard technologies of the Web like HTTP and RESTful APIs for standardizing the access to devices seen from an application point of view. We present the implementation of two gateways using the WoT approach for exposing KNX and EnOcean device capabilities as Web services, allowing a fast integration in existing and new management systems.",
    address = "New York, NY, USA",
    booktitle = "Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication",
    doi = "10.1145/2494091.2497590",
    isbn = "9781450322157",
    keywords = "building networks, enocean, gateways, knx, web-of-things",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "1555-1564",
    publisher = "ACM",
    series = "UbiComp '13 Adjunct",
    title = "{O}ffering {W}eb-of-things {C}onnectivity to {B}uilding {N}etworks",
    Pdf = "http://www.hennebert.org/download/publications/wot-2013-offering-web-of-things-connectivity-to-building-networks.pdf",
    year = "2013",
    }

    Building management systems (BMS) are nowadays present in new and renovated buildings, relying on dedicated networks. The presence of various building networks leads to problems of heterogeneity, especially for developing BMS. In this paper, we propose to leverage on the Web-of-Things (WoT) framework, using well-known standard technologies of the Web like HTTP and RESTful APIs for standardizing the access to devices seen from an application point of view. We present the implementation of two gateways using the WoT approach for exposing KNX and EnOcean device capabilities as Web services, allowing a fast integration in existing and new management systems.

  • [PDF] [DOI] K. Chen and J. Hennebert, "The Fribourg Product Image Database for Product Identification Tasks," in Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing, 2013, pp. 162-169.
    [Bibtex] [Abstract]
    @conference{chen2013:icisip,
    author = "Kai Chen and Jean Hennebert",
    abstract = "We present in this paper a new database containing images of end-consumer products. The database currently contains more than 3'000 pictures of products taken exclusively using mobile phones. We focused the acquisition on 3 families of product: water bottles, chocolate and coffee. Nine mobile phones have been used and about 353 different products are available. Pictures are taken in real-life conditions, i.e. directly in the shops and without controlling the illumination, centering of the product or removing the background. Each image is provided with ground truth information including the product label, mobile phone brand and series as well as region of interest in the images. The database is made freely available for the scientific community and can be used for content-based image retrieval benchmark dataset or verification tasks.",
    booktitle = "Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing",
    doi = "10.12792/icisip2013.033",
    keywords = "CBIR, image retrieval, image database, FPID, benchmarking, product identification, machine learning",
    month = "September",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "162-169",
    title = "{T}he {F}ribourg {P}roduct {I}mage {D}atabase for {P}roduct {I}dentification {T}asks",
    Pdf = "http://hennebert.org/download/publications/icisip-2013-the-fribourg-product-image-database-for-product-identification-tasks.pdf",
    year = "2013",
    }

    We present in this paper a new database containing images of end-consumer products. The database currently contains more than 3'000 pictures of products taken exclusively using mobile phones. We focused the acquisition on 3 families of products: water bottles, chocolate and coffee. Nine mobile phones have been used and 353 different products are available. Pictures are taken in real-life conditions, i.e. directly in the shops and without controlling the illumination, centering of the product or removing the background. Each image is provided with ground truth information including the product label, mobile phone brand and series, as well as the region of interest in the image. The database is made freely available to the scientific community and can be used as a benchmark dataset for content-based image retrieval or verification tasks.
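
    Since FPID is proposed as a CBIR benchmark, about the simplest baseline one could score on it is colour-histogram matching. The sketch below is such a baseline, not a method from the paper; the file names are placeholders.

    import cv2

    def histogram(path):
        img = cv2.imread(path)
        h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                         [0, 256, 0, 256, 0, 256])
        return cv2.normalize(h, h).flatten()

    query = histogram("query_product.jpg")
    candidates = ["bottle_001.jpg", "chocolate_042.jpg", "coffee_017.jpg"]
    scores = {name: cv2.compareHist(query, histogram(name), cv2.HISTCMP_CORREL)
              for name in candidates}
    print(max(scores, key=scores.get))  # best-matching product image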

  • [PDF] [DOI] C. Gisler, A. Ridi, and J. Hennebert, "Appliance Consumption Signature Database and Recognition Test Protocols," in WOSSPA2013 The 9th International Workshop on Systems, Signal Processing and their Applications 2013, 2013, pp. 336-341.
    [Bibtex] [Abstract]
    @conference{gisler:2013:wosspa,
    author = "Christophe Gisler and Antonio Ridi and Jean Hennebert",
    abstract = "We report on the creation of a database of appliance consumption signatures and two test protocols to be used for appliance recognition tasks. By means of plug-based low-end sensors measuring the electrical consumption at low frequency, typically every 10 seconds, we made two acquisition sessions of one hour on about 100 home appliances divided into 10 categories: mobile phones (via chargers), coffee machines, computer stations (including monitor), fridges and freezers, Hi-Fi systems (CD players), lamp (CFL), laptops (via chargers), microwave oven, printers, and televisions (LCD or LED). We measured their consumption in terms of real power (W), reactive power (var), RMS current (A) and phase of voltage relative to current (varphi). We plan to give free access to the database for the whole scientific community. The proposed test protocols will help to objectively compare new algorithms. ",
    booktitle = "WOSSPA2013 The 9th International Workshop on Systems, Signal Processing and their Applications 2013",
    doi = "10.1109/WoSSPA.2013.6602387",
    isbn = "9781467355407",
    keywords = "electric consumption modelling, benchmark protocols",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "336-341",
    title = "{A}ppliance {C}onsumption {S}ignature {D}atabase and {R}ecognition {T}est {P}rotocols",
    Pdf = "http://hennebert.org/download/publications/wosspa-2013-appliance-consumption-signature-database-and-recognition-test-protocols.pdf",
    year = "2013",
    }

    We report on the creation of a database of appliance consumption signatures and two test protocols to be used for appliance recognition tasks. By means of plug-based low-end sensors measuring the electrical consumption at low frequency, typically every 10 seconds, we made two acquisition sessions of one hour on about 100 home appliances divided into 10 categories: mobile phones (via chargers), coffee machines, computer stations (including monitor), fridges and freezers, Hi-Fi systems (CD players), lamps (CFL), laptops (via chargers), microwave ovens, printers, and televisions (LCD or LED). We measured their consumption in terms of real power (W), reactive power (var), RMS current (A) and phase of voltage relative to current (varphi). We plan to give free access to the database to the whole scientific community. The proposed test protocols will help to objectively compare new algorithms.
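
    The intersession idea behind the test protocols above keeps the two acquisition sessions strictly separate: models are trained on signatures from one session and tested on the other. A minimal sketch of that split, with an assumed record layout, is given below.

    def intersession_split(recordings):
        """recordings: list of dicts with at least a 'session' key (1 or 2)."""
        train = [r for r in recordings if r["session"] == 1]
        test = [r for r in recordings if r["session"] == 2]
        return train, test

    recordings = [
        {"session": 1, "category": "fridge", "features": [0.1, 0.2]},  # illustrative
        {"session": 2, "category": "fridge", "features": [0.1, 0.3]},
    ]
    train, test = intersession_split(recordings)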

  • Y. Lu, J. Chen, I. Comsa, and P. Kuonen, "Backup Path with Energy Prediction based on Energy-Aware Spanning Tree in Wireless Sensor Networks," International Conference on Cyber-enabled distributed computing and knowledge discovery (CyberC), 2013.
    [Bibtex]
    @article{Yao:1494,
    Author = {Yao Lu and Jianping Chen and Ioan Comsa and Pierre Kuonen},
    Journal = {International Conference on Cyber-enabled distributed computing and knowledge discovery (CyberC)},
    Month = {oct},
    Title = {Backup Path with Energy Prediction based on Energy-Aware Spanning Tree in Wireless Sensor Networks},
    Year = {2013}}
  • [PDF] [DOI] A. Ridi, C. Gisler, and J. Hennebert, "Automatic Identification of Electrical Appliances Using Smart Plugs," in WOSSPA2013 The 9th International Workshop on Systems, Signal Processing and their Applications 2013, Algeria, 2013, pp. 301-305.
    [Bibtex] [Abstract]
    @conference{ridi:2013:wosspa,
    author = "Antonio Ridi and Christophe Gisler and Jean Hennebert",
    abstract = "We report on the evaluation of signal processing and classification algorithms to automatically recognize electric appliances. The system is based on low-cost smart-plugs measuring periodically the electricity values and producing time series of measurements that are specific to the appliance consumptions. In a similar way as for biometric applications, such electric signatures can be used to identify the type of appliance in use. In this paper, we propose to use dynamic features based on time derivative and time second derivative features and we compare different classification algorithms including K-Nearest Neighbor and Gaussian Mixture Models. We use the recently recorded electric signature database ACS-F1 and its intersession protocol to evaluate our algorithm propositions. The best combination of features and classifiers shows 93.6% accuracy.",
    address = "Algeria",
    booktitle = "WOSSPA2013 The 9th International Workshop on Systems, Signal Processing and their Applications 2013",
    doi = "10.1109/WoSSPA.2013.6602380",
    isbn = "9781467355407",
    keywords = "machine learning, electric consumption analysis, GMM, HMM",
    month = "May",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "301-305",
    publisher = "IEEE",
    title = "{A}utomatic {I}dentification of {E}lectrical {A}ppliances {U}sing {S}mart {P}lugs",
    Pdf = "http://hennebert.org/download/publications/wosspa-2013-automatic-identification-of-electrical-appliances-using-smart-plugs.pdf",
    year = "2013",
    }

    We report on the evaluation of signal processing and classification algorithms to automatically recognize electric appliances. The system is based on low-cost smart plugs periodically measuring electricity values and producing time series of measurements that are specific to the appliance consumption. In a similar way as for biometric applications, such electric signatures can be used to identify the type of appliance in use. In this paper, we propose to use dynamic features based on first and second time derivatives, and we compare different classification algorithms including k-Nearest Neighbors and Gaussian Mixture Models. We use the recently recorded electric signature database ACS-F1 and its intersession protocol to evaluate our algorithm propositions. The best combination of features and classifiers shows 93.6% accuracy.
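
    The dynamic features above append first and second time derivatives to each frame of the signature before classification. A minimal sketch with numpy and a k-NN classifier follows; the frame labelling, feature dimension and k=5 are assumptions, not the paper's tuned setup.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def add_dynamic_features(X):
        """X: (T, D) sequence of static features -> (T, 3*D) with deltas."""
        d1 = np.gradient(X, axis=0)    # first time derivative
        d2 = np.gradient(d1, axis=0)   # second time derivative
        return np.hstack([X, d1, d2])

    X_train = add_dynamic_features(np.random.rand(200, 4))
    y_train = np.repeat(["fridge", "laptop"], 100)  # illustrative frame labels
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    print(clf.predict(add_dynamic_features(np.random.rand(10, 4))))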

  • [PDF] [DOI] A. Ridi, C. Gisler, and J. Hennebert, "Unseen Appliances Identification," in Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 2013, p. 75–82.
    [Bibtex] [Abstract]
    @conference{ridi2013:ciarp,
    author = "Antonio Ridi and Christophe Gisler and Jean Hennebert",
    abstract = "We assess the feasibility of unseen appliance recognition through the analysis of their electrical signatures recorded using low-cost smart plugs. By unseen, we stress that our approach focuses on the identification of appliances that are of different brands or models than the one in training phase. We follow a strictly defined protocol in order to provide comparable results to the scientific community. We first evaluate the drop of performance when going from seen to unseen appliances. We then analyze the results of different machine learning algorithms, as the k-Nearest Neighbor (k-NN) and Gaussian Mixture Models (GMMs). Several tunings allow us to achieve 74% correct accuracy using GMMs which is our current best system.",
    booktitle = "Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications",
    doi = "10.1007/978-3-642-41827-3_10",
    editor = "Jos{\'e} Ruiz-Shulcloper and Gabriella Sanniti di Baja",
    isbn = "9783642418266",
    keywords = "machine learning, nilm, appliance identification, load monitoring",
    pages = "75--82",
    publisher = "Springer",
    series = "Lecture Notes in Computer Science",
    title = "{U}nseen {A}ppliances {I}dentification",
    Pdf = "http://www.hennebert.org/download/publications/ciarp-2013-unseen-appliance-identification.pdf",
    volume = "8259",
    year = "2013",
    }

    We assess the feasibility of unseen appliance recognition through the analysis of their electrical signatures recorded using low-cost smart plugs. By unseen, we stress that our approach focuses on the identification of appliances that are of different brands or models than those seen in the training phase. We follow a strictly defined protocol in order to provide comparable results to the scientific community. We first evaluate the drop in performance when going from seen to unseen appliances. We then analyze the results of different machine learning algorithms, such as k-Nearest Neighbors (k-NN) and Gaussian Mixture Models (GMMs). Several tunings allow us to achieve 74% accuracy using GMMs, which is our current best system.
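
    GMM-based identification as used above can be sketched as one mixture per appliance category, scoring a test signature under each and keeping the best average log-likelihood. The component count and data layout are assumptions, not the tuned system from the paper.

    from sklearn.mixture import GaussianMixture

    def train_gmms(train_data, n_components=8):
        """train_data: dict mapping category -> (N, D) array of feature frames."""
        return {c: GaussianMixture(n_components=n_components).fit(X)
                for c, X in train_data.items()}

    def identify(gmms, signature):
        """signature: (T, D) frames from an appliance unseen in training."""
        return max(gmms, key=lambda c: gmms[c].score(signature))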

  • [PDF] A. Ridi, C. Gisler, and J. Hennebert, "Le machine learning: un atout pour une meilleure efficacité - applications à la gestion énergétique des bâtiments," Bulletin Electrosuisse, iss. 10s, pp. 21-24, 2013.
    [Bibtex] [Abstract]
    @article{ridi2013:electrosuisse,
    author = "Antonio Ridi and Christophe Gisler and Jean Hennebert",
    abstract = "Comment g{\'e}rer de mani{\`e}re intelligente les consommations et productions d’{\'e}nergie dans les b{\^a}timents ? Les solutions {\`a} ce probl{\`e}me complexe pourraient venir du monde de l’apprentissage automatique ou «machine learning». Celui-ci permet la mise au point d’algorithmes de contr{\^o}le avanc{\'e}s visant simultan{\'e}ment la r{\'e}duction de la consommation d’{\'e}nergie, l’am{\'e}lioration du confort de l’utilisateur et l’adaptation {\`a} ses besoins.",
    issn = "1660-6728",
    journal = "Bulletin Electrosuisse",
    keywords = "Machine Learning, Energy Efficiency, Smart Buildings",
    number = "10s",
    pages = "21-24",
    title = "{L}e machine learning: un atout pour une meilleure efficacit{\'e} - applications {\`a} la gestion {\'e}nerg{\'e}tique des b{\^a}timents",
    Pdf = "http://hennebert.org/download/publications/electrosuisse-2013-machine-learning-meilleure-efficacite.pdf",
    year = "2013",
    }

    How can energy consumption and production in buildings be managed intelligently? Solutions to this complex problem could come from the world of machine learning, which enables the development of advanced control algorithms that simultaneously target the reduction of energy consumption, the improvement of user comfort and the adaptation to user needs.

  • [PDF] A. Ridi, N. Zarkadis, G. Bovet, N. Morel, and J. Hennebert, "Towards Reliable Stochastic Data-Driven Models Applied to the Energy Saving in Buildings," in International Conference on Cleantech for Smart Cities & Buildings from Nano to Urban Scale (CISBAT 2013), 2013, pp. 501-506.
    [Bibtex] [Abstract]
    @conference{ridi2013:cisbat,
    author = "Antonio Ridi and Nikos Zarkadis and G{\'e}r{\^o}me Bovet and Nicolas Morel and Jean Hennebert",
    abstract = "We aim at the elaboration of Information Systems able to optimize energy consumption in buildings while preserving human comfort. Our focus is in the use of state-based stochas- tic modeling applied to temporal signals acquired from heterogeneous sources such as distributed sensors, weather web services, calendar information and user triggered events. Our general scientific objectives are: (1) global instead of local optimization of building automation sub-systems (heating, ventilation, cooling, solar shadings, electric lightings), (2) generalization to unseen building configuration or usage through self-learning data- driven algorithms and (3) inclusion of stochastic state-based modeling to better cope with seasonal and building activity patterns. We leverage on state-based models such as Hidden Markov Models (HMMs) to be able to capture the spatial (states) and temporal (sequence of states) characteristics of the signals. We envision several application layers as per the intrinsic nature of the signals to be modeled. We also envision room-level systems able to leverage on a set of distributed sensors (temperature, presence, electricity consumption, etc.). A typical example of room-level system is to infer room occupancy information or activities done in the rooms as a function of time. Finally, building-level systems can be composed to infer global usage and to propose optimization strategies for the building as a whole. In our approach, each layer may be fed by the output of the previous layers.
    More specifically in this paper, we report on the design, conception and validation of several machine learning applications. We present three different applications of state-based modeling. In the first case we report on the identification of consumer appliances through an analysis of their electric loads. In the second case we perform the activity recognition task, representing human activities through state-based models. The third case concerns the season prediction using building data, building characteristic parameters and meteorological data.",
    booktitle = "International Conference on Cleantech for Smart Cities {{\&}} Buildings from Nano to Urban Scale (CISBAT 2013)",
    keywords = "IT for Sustainability, Smart Buildings, Machine Learning",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "501-506",
    title = "{T}owards {R}eliable {S}tochastic {D}ata-{D}riven {M}odels {A}pplied to the {E}nergy {S}aving in {B}uildings",
    Pdf = "http://www.hennebert.org/download/publications/cisbat-2013-towards-reliable-stochastic-data-driven-models-applied-to-the-energy-saving-in-building.pdf",
    year = "2013",
    }

    We aim at the elaboration of information systems able to optimize energy consumption in buildings while preserving human comfort. Our focus is on the use of state-based stochastic modeling applied to temporal signals acquired from heterogeneous sources such as distributed sensors, weather web services, calendar information and user-triggered events. Our general scientific objectives are: (1) global instead of local optimization of building automation sub-systems (heating, ventilation, cooling, solar shadings, electric lightings), (2) generalization to unseen building configurations or usages through self-learning data-driven algorithms and (3) inclusion of stochastic state-based modeling to better cope with seasonal and building activity patterns. We leverage state-based models such as Hidden Markov Models (HMMs) to capture the spatial (states) and temporal (sequence of states) characteristics of the signals. We envision several application layers as per the intrinsic nature of the signals to be modeled. We also envision room-level systems able to leverage a set of distributed sensors (temperature, presence, electricity consumption, etc.). A typical example of a room-level system is to infer room occupancy information or activities done in the rooms as a function of time. Finally, building-level systems can be composed to infer global usage and to propose optimization strategies for the building as a whole. In our approach, each layer may be fed by the output of the previous layers. More specifically in this paper, we report on the design, conception and validation of several machine learning applications. We present three different applications of state-based modeling. In the first case, we report on the identification of consumer appliances through an analysis of their electric loads. In the second case, we perform the activity recognition task, representing human activities through state-based models. The third case concerns season prediction using building data, building characteristic parameters and meteorological data.

  • [PDF] [DOI] F. Slimane, S. Kanoun, J. Hennebert, A. M. Alimi, and R. Ingold, "A Study on Font-Family and Font-Size Recognition Applied to Arabic Word Images at Ultra-Low Resolution," Pattern Recognition Letters (PRL), vol. 34, iss. 2, pp. 209-218, 2013.
    [Bibtex] [Abstract]
    @article{fouad2013:prl,
    author = "Fouad Slimane and Slim Kanoun and Jean Hennebert and Adel M. Alimi and Rolf Ingold",
    abstract = "In this paper, we propose a new font and size identification method for ultra-low resolution Arabic word images using a stochastic approach. The literature has proved the difficulty for Arabic text recognition systems to treat multi-font and multi-size word images. This is due to the variability induced by some font family, in addition to the inherent difficulties of Arabic writing including cursive representation, overlaps and ligatures. This research work proposes an efficient stochastic approach to tackle the problem of font and size recognition. Our method treats a word image with a fixed-length, overlapping sliding window. Each window is represented with a 102 features whose distribution is captured by Gaussian Mixture Models (GMMs). We present three systems: (1) a font recognition system, (2) a size recognition system and (3) a font and size recognition system. We demonstrate the importance of font identification before recognizing the word images with two multi-font Arabic OCRs (cascading and global). The cascading system is about 23% better than the global multi-font system in terms of word recognition rate on the Arabic Printed Text Image (APTI) database which is freely available to the scientific community.",
    doi = "/10.1016/j.patrec.2012.09.012",
    issn = "0167-8655",
    journal = "Pattern recognition Letters (PRL)",
    keywords = "Font and size recognition, GMM, HMM, Arabic OCR, Sliding window, Ultra-low resolution, Machine Learning",
    month = "January",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = "2",
    pages = "209-218",
    title = "{A} {S}tudy on {F}ont-{F}amily and {F}ont-{S}ize {R}ecognition {A}pplied to {A}rabic {W}ord {I}mages at {U}ltra-{L}ow {R}esolution",
    Pdf = "http://www.hennebert.org/download/publications/prl-2013-a-study-on-font-family-and-font-size-recognition-applied-to-arabic-word-images-at-ultra-low-resolution.pdf",
    volume = "34",
    year = "2013",
    }

    In this paper, we propose a new font and size identification method for ultra-low resolution Arabic word images using a stochastic approach. The literature has shown the difficulty for Arabic text recognition systems of treating multi-font and multi-size word images. This is due to the variability induced by some font families, in addition to the inherent difficulties of Arabic writing, including cursive representation, overlaps and ligatures. This research work proposes an efficient stochastic approach to tackle the problem of font and size recognition. Our method treats a word image with a fixed-length, overlapping sliding window. Each window is represented with 102 features whose distribution is captured by Gaussian Mixture Models (GMMs). We present three systems: (1) a font recognition system, (2) a size recognition system and (3) a font and size recognition system. We demonstrate the importance of font identification before recognizing the word images with two multi-font Arabic OCRs (cascading and global). The cascading system is about 23% better than the global multi-font system in terms of word recognition rate on the Arabic Printed Text Image (APTI) database, which is freely available to the scientific community.
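
    The fixed-length, overlapping sliding window described above scans each word image from left to right before GMM modelling. A minimal sketch follows; the window width, shift and toy features are assumptions, whereas the paper extracts 102 handcrafted features per window.

    import numpy as np

    def sliding_windows(image, width=8, shift=2):
        """image: (H, W) greyscale array; yields (H, width) windows left to right."""
        _, w = image.shape
        for x in range(0, w - width + 1, shift):
            yield image[:, x:x + width]

    word = np.random.rand(45, 120)  # stand-in for a low-resolution word image
    features = [win.mean(axis=0) for win in sliding_windows(word)]  # toy features
    print(len(features), features[0].shape)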

  • [PDF] [DOI] F. Slimane, S. Kanoun, H. E. Abed, A. M. Alimi, R. Ingold, and J. Hennebert, "ICDAR2013 Competition on Multi-font and Multi-size Digitally Represented Arabic Text," in Document Analysis and Recognition (ICDAR), 2013 12th International Conference on, 2013, pp. 1433-1437.
    [Bibtex] [Abstract]
    @conference{slimane2013:icdar,
    author = "Fouad Slimane and Slim Kanoun and Haikal El Abed and Adel M. Alimi and Rolf Ingold and Jean Hennebert",
    abstract = "This paper describes the Arabic Recognition Competition: Multi-font Multi-size Digitally Represented Text held in the context of the 12th International Conference on Document Analysis and Recognition (ICDAR'2013), during August 25-28, 2013, Washington DC, United States of America. This competition has used the freely available Arabic Printed Text Image (APTI) database. A first edition took place in ICDAR'2011. In this edition, four groups with six systems are participating in the competition. The systems are compared using the recognition rates at character and word levels. The systems were tested in a blind manner using set 6 of APTI database. A short description of the participating groups, their systems, the experimental setup, and the observed results are presented.",
    booktitle = "Document Analysis and Recognition (ICDAR), 2013 12th International Conference on",
    doi = "10.1109/ICDAR.2013.289",
    isbn = "9781479901937",
    issn = "1520-5363",
    keywords = "Character recognition, Databases, Feature extraction, Hidden Markov models, Image recognition, Protocols, Text recognition, APTI Database, Arabic Text, Competition, OCR System, Ultra-Low Resolution",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "1433-1437",
    title = "{ICDAR}2013 {C}ompetition on {M}ulti-font and {M}ulti-size {D}igitally {R}epresented {A}rabic {T}ext",
    Pdf = "http://www.hennebert.org/download/publications/icdar-2013-competition-on-multi-font-and-multi-size-digitally-represented-arabic-text.pdf",
    year = "2013",
    }

    This paper describes the Arabic Recognition Competition: Multi-font Multi-size Digitally Represented Text, held in the context of the 12th International Conference on Document Analysis and Recognition (ICDAR'2013), during August 25-28, 2013, in Washington DC, United States of America. This competition used the freely available Arabic Printed Text Image (APTI) database. A first edition took place at ICDAR'2011. In this edition, four groups with six systems participated in the competition. The systems are compared using recognition rates at the character and word levels. The systems were tested in a blind manner using set 6 of the APTI database. A short description of the participating groups, their systems, the experimental setup and the observed results is presented.

  • [PDF] [DOI] N. Sokhn, R. Baltensperger, L. Bersier, J. Hennebert, and U. Ultes-Nitsche, "Identification of Chordless Cycles in Ecological Networks," in Complex Sciences, 2013, pp. 316-324.
    [Bibtex] [Abstract]
    @conference{sokhn2013:complex,
    author = "Nayla Sokhn and Richard Baltensperger and Louis-Felix Bersier and Jean Hennebert and Ulrich Ultes-Nitsche",
    abstract = "Abstract: In the last few years the studies on complex networks have gained extensive research interests. Significant impacts are made by these studies on a wide range of different areas including social networks, tech- nology networks, biological networks and others. Motivated by under- standing the structure of ecological networks we introduce in this paper a new algorithm for enumerating all chordless cycles. The proposed al- gorithm is a recursive one based on the depth-first search.
    Keywords: ecological networks, community structure, food webs, niche- overlap graphs, chordless cycles.",
    booktitle = "Complex Sciences",
    doi = "10.1007/978-3-319-03473-7_28",
    isbn = "978-3-319-03472-0",
    keywords = "graph theory, ecological networks, food webs, algorithmics, complex systems",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "316-324",
    publisher = "Springer International Publishing",
    series = "Second International Conference, COMPLEX 2012, Santa Fe, NM, USA, December 5-7, 2012, Revised Selected Papers",
    title = "{I}dentification of {C}hordless {C}ycles in {E}cological {N}etworks",
    Pdf = "http://www.hennebert.org/download/publications/complex-2012-identification-of-chordless-cycles-in-ecological-networks.pdf",
    volume = "126",
    year = "2013",
    }

    In the last few years, studies on complex networks have gained extensive research interest. These studies have made a significant impact on a wide range of areas including social networks, technology networks, biological networks and others. Motivated by understanding the structure of ecological networks, we introduce in this paper a new algorithm for enumerating all chordless cycles. The proposed algorithm is recursive and based on depth-first search.
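
    For clarity, a chordless cycle is a cycle in which no edge of the graph joins two non-consecutive vertices of the cycle. The helper below only checks that property for a given cycle; it is not the recursive enumeration algorithm of the paper, and the example graph is invented.

    import networkx as nx

    def is_chordless(G, cycle):
        n = len(cycle)
        for i in range(n):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # consecutive around the cycle, not a chord
                if G.has_edge(cycle[i], cycle[j]):
                    return False
        return True

    G = nx.cycle_graph(5)                    # a 5-cycle, chordless by construction
    print(is_chordless(G, list(G.nodes)))    # True
    G.add_edge(0, 2)                         # adding a chord breaks the property
    print(is_chordless(G, [0, 1, 2, 3, 4]))  # False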

  • [PDF] N. Sokhn, R. Baltensperger, J. Hennebert, U. Ultes-Nitsche, and L. Bersier, "Structure analysis of niche-overlap graphs," in NetSci 2013 - International School and Conference on Network Science, 2013.
    [Bibtex] [Abstract]
    @conference{sokhn2013:netsci,
    author = "Nayla Sokhn and Richard Baltensperger and Jean Hennebert and Ulrich Ultes-Nitsche and Louis-FŽlix Bersier",
    abstract = "The joint analysis of the structure and dynamics of complex networks has been recently a common interest for many researchers. In this study, we focus on the structure of ecological networks, specifically on niche-overlap graphs. In these networks, two species are connected if they share at least one prey, and thus represent competition graphs. The aim of this work is to reveal if these graphs show small-world/scale free properties. To answer this question, we select a set of 14 niche-overlap graphs from highly resolved food-webs, and study in the first part their clustering coeficient and diameter.",
    booktitle = "NetSci 2013 - International School and Conference on Network Science",
    keywords = "Graph structure analysis, Complex systems, Biology, Ecological Networks, Niche-overlap graphs",
    title = "{S}tructure analysis of niche-overlap graphs",
    Pdf = "http://www.hennebert.org/download/publications/netsci-2013-structure-analysis-of-niche-overlap-graphs.pdf",
    year = "2013",
    }

    The joint analysis of the structure and dynamics of complex networks has recently been a common interest for many researchers. In this study, we focus on the structure of ecological networks, specifically on niche-overlap graphs. In these networks, two species are connected if they share at least one prey, and thus represent competition graphs. The aim of this work is to reveal if these graphs show small-world/scale-free properties. To answer this question, we select a set of 14 niche-overlap graphs from highly resolved food-webs, and study in the first part their clustering coefficient and diameter.
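
    To make the construction concrete, here is a minimal sketch (assuming the networkx library and a hypothetical toy food web; not the authors' code) that builds a niche-overlap graph and computes the two measures studied first:

    import networkx as nx
    from itertools import combinations

    def niche_overlap_graph(diet):
        """Build the niche-overlap (competition) graph of a food web:
        two consumers are linked when their diets share a prey."""
        g = nx.Graph()
        g.add_nodes_from(diet)
        for a, b in combinations(diet, 2):
            if diet[a] & diet[b]:
                g.add_edge(a, b)
        return g

    # Hypothetical toy food web: consumer -> set of prey.
    web = {"fox": {"rabbit", "mouse"},
           "owl": {"mouse", "vole"},
           "hawk": {"rabbit", "vole"}}
    g = niche_overlap_graph(web)
    print(nx.average_clustering(g))              # clustering coefficient
    print(nx.diameter(g))                        # diameter (connected graph assumed)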

  • [PDF] [DOI] N. Sokhn, R. Baltensperger, L. Bersier, U. Ultes-Nitsche, and J. Hennebert, "Structural Network Properties of Niche-Overlap Graphs," in International Conference on Signal-Image Technology & Internet-Based Systems (SITIS 2013), 2013, pp. 478-482.
    [Bibtex] [Abstract]
    @conference{sokhn2013:sitis,
    author = "Nayla Sokhn and Richard Baltensperger and Louis-F{\'e}lix Bersier and Ulrich Ultes-Nitsche and Jean Hennebert",
    abstract = "The structure of networks has always been interesting for researchers. Investigating their unique architecture allows to capture insights and to understand the function and evolution of these complex systems. Ecological networks such as food-webs and niche-overlap graphs are considered as complex systems. The main purpose of this work is to compare the topology of 15 real niche-overlap graphs with random ones. Five measures are treated in this study: (1) the clustering coefficient, (2) the between ness centrality, (3) the assortativity coefficient, (4) the modularity and (5) the number of chord less cycles. Significant differences between real and random networks are observed. Firstly, we show that niche-overlap graphs display a higher clustering and a higher modularity compared to random networks. Moreover we find that random networks have barely nodes that belong to a unique sub graph (i.e. between ness centrality equal to 0) and highlight the presence of a small number of chord less cycles compared to real networks. These analyses may provide new insights in the structure of these real niche-overlap graphs and may give important implications on the functional organization of species competing for some resources and on the dynamics of these systems.",
    booktitle = "International Conference on Signal-Image Technology {\&} Internet-Based Systems (SITIS 2013)",
    doi = "10.1109/SITIS.2013.83",
    editor = "IEEE",
    isbn = "9781479932115",
    keywords = "Food-webs, Niche-Overlap Graphs, Structure of Networks, Clustering Coefficient, Betweenness Centrality, Assortativity, Modularity,Chordless Cycles",
    pages = "478-482",
    title = "{S}tructural {N}etwork {P}roperties of {N}iche-{O}verlap {G}raphs",
    Pdf = "http://www.hennebert.org/download/publications/sitis-2013-structural-network-properties-of-niche-overlap-graphs.pdf",
    year = "2013",
    }

    The structure of networks has always been interesting for researchers. Investigating their unique architecture allows one to capture insights and to understand the function and evolution of these complex systems. Ecological networks such as food-webs and niche-overlap graphs are considered as complex systems. The main purpose of this work is to compare the topology of 15 real niche-overlap graphs with random ones. Five measures are treated in this study: (1) the clustering coefficient, (2) the betweenness centrality, (3) the assortativity coefficient, (4) the modularity and (5) the number of chordless cycles. Significant differences between real and random networks are observed. Firstly, we show that niche-overlap graphs display a higher clustering and a higher modularity compared to random networks. Moreover, we find that random networks have hardly any nodes that belong to a unique subgraph (i.e. betweenness centrality equal to 0) and highlight the presence of a small number of chordless cycles compared to real networks. These analyses may provide new insights into the structure of these real niche-overlap graphs and may give important implications on the functional organization of species competing for some resources and on the dynamics of these systems.
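
    A hedged sketch of the real-versus-random comparison, again assuming networkx: a built-in social graph stands in for a real niche-overlap graph, a size-matched random graph plays the null model, and chordless cycles can be counted with the sketch given after the COMPLEX 2012 entry above.

    import networkx as nx
    from networkx.algorithms import community

    def structure_profile(g):
        """Clustering, assortativity, modularity and the share of
        zero-betweenness nodes, for contrasting real and random graphs."""
        bc = nx.betweenness_centrality(g)
        parts = community.greedy_modularity_communities(g)
        return {"clustering": nx.average_clustering(g),
                "assortativity": nx.degree_assortativity_coefficient(g),
                "modularity": community.modularity(g, parts),
                "zero_betweenness": sum(v == 0 for v in bc.values()) / len(bc)}

    real = nx.karate_club_graph()                # stand-in for a niche-overlap graph
    rand = nx.gnm_random_graph(real.number_of_nodes(), real.number_of_edges(), seed=1)
    print(structure_profile(real))
    print(structure_profile(rand))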

  • B. Wolf and P. Kuonen, "A novel approach for heuristic pairwise DNA sequence alignment," The 2013 International Conference on Bioinformatics & Computational Biology (BIOCOMP'13), 2013.
    [Bibtex]
    @article{Pierre:1604,
    author = "Wolf, Beat and Kuonen, Pierre",
    title = "A novel approach for heuristic pairwise DNA sequence alignment",
    month = "jul",
    year = "2013",
    journal = "The 2013 International Conference on Bioinformatics & Computational Biology (BIOCOMP'13)",
    }
  • [PDF] G. Bovet and J. Hennebert, "Le Web des objets à la conquête des bâtiments intelligents," Bulletin Electrossuisse, vol. 10s, pp. 15-18, 2012.
    [Bibtex] [Abstract]
    @article{gerome2012:electrosuisse,
    author = "G{\'e}r{\^o}me Bovet and Jean Hennebert",
    abstract = "L’am{\'e}lioration de l’efficacit{\'e} {\'e}nerg{\'e}tique des b{\^a}timents n{\'e}cessite des syst{\`e}mes automatiques de plus en plus sophistiqu{\'e}s pour optimiser le rapport entre les {\'e}cono- mies d’{\'e}nergie et le confort des usagers. La gestion conjointe du chauffage, de l’{\'e}clairage ou encore de la production locale d’{\'e}nergie est effectu{\'e}e via de v{\'e}ri- tables syst{\`e}mes d’information reposant sur une multi- tude de capteurs et d’actionneurs interconnect{\'e}s. Cette complexit{\'e} croissante exige une {\'e}volution des r{\'e}seaux de communication des b{\^a}timents.",
    issn = "1660-6728",
    journal = "Bulletin Electrossuisse",
    keywords = "wot, iot, green-it, it-for-green",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "15-18",
    title = "{L}e {W}eb des objets {\`a} la conqu{\^e}te des b{\^a}timents intelligents",
    Pdf = "http://www.hennebert.org/download/publications/electrosuisse-2012-le-web-des-objets-conquete-batiments-intelligents.pdf",
    volume = "10s",
    year = "2012",
    }

    Improving the energy efficiency of buildings requires increasingly sophisticated automatic systems to optimize the trade-off between energy savings and occupant comfort. The joint management of heating, lighting and local energy production is carried out by fully-fledged information systems relying on a multitude of interconnected sensors and actuators. This growing complexity demands an evolution of building communication networks.

  • [PDF] G. Bovet and J. Hennebert, "Communicating With Things - An Energy Consumption Analysis," in Pervasive, Newcastle, UK, 2012, pp. 1-4.
    [Bibtex] [Abstract]
    @conference{bove12:pervasive,
    author = "G{\'e}r{\^o}me Bovet and Jean Hennebert",
    abstract = "In this work we report on the analysis, from an energy consumption point of view, of two communication methods in the Web-of-Things (WoT) framework. The use of WoT is seducing regarding the standardization of the access to things. It also allows leveraging on existing web application frameworks and speed up development. However, in some contexts such as smart buildings where the objective is to control the equipments to save energy, the underlying WoT framework including hardware, communication and APIs must itself be energy efficient. More specifically, the WoT proposes to use HTTP callbacks or WebSockets based on TCP for exchanging data. In this paper we introduce both methods and then analyze their power consumption in a test environment. We also discuss what future research can be conducted from our preliminary findings.",
    address = "Newcastle, UK",
    booktitle = "Pervasive",
    keywords = "web-of-things; smart building; RESTful services; green-computing",
    month = "June",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "1-4",
    title = "{C}ommunicating {W}ith {T}hings - {A}n {E}nergy {C}onsumption {A}nalysis",
    Pdf = "http://www.hennebert.org/download/publications/pervasive-2012-communicating-with-things-an-energy-consumption-analysis.pdf",
    year = "2012",
    }

    In this work we report on the analysis, from an energy consumption point of view, of two communication methods in the Web-of-Things (WoT) framework. The use of the WoT is appealing with regard to the standardization of access to things. It also allows leveraging existing web application frameworks and speeds up development. However, in some contexts such as smart buildings where the objective is to control the equipment to save energy, the underlying WoT framework including hardware, communication and APIs must itself be energy efficient. More specifically, the WoT proposes to use HTTP callbacks or WebSockets based on TCP for exchanging data. In this paper we introduce both methods and then analyze their power consumption in a test environment. We also discuss what future research can be conducted from our preliminary findings.

  • I. S. Comsa, M. Aydin, S. Zhang, P. Kuonen, and J. Wagen, "Multi Objective Resource Scheduling using LTE-A Simulator," 3rd IC1004 Action of Cooperative Radio Communications for Green Smart Environments, 2012.
    [Bibtex]
    @article{Sorin:1495,
    Author = {Ioan Sorin Comsa and Mehmet Aydin and Sijing Zhang and Pierre Kuonen and Jean-Fr{\'e}d{\'e}ric Wagen},
    Journal = {3rd IC1004 Action of Cooperative Radio Communications for Green Smart Environments},
    Month = {feb},
    Title = {Multi Objective Resource Scheduling using LTE-A Simulator},
    Year = {2012}}
  • [PDF] C. Gisler, G. Barchi, G. Bovet, E. Mugellini, and J. Hennebert, "Demonstration Of A Monitoring Lamp To Visualize The Energy Consumption In Houses," in The 10th International Conference on Pervasive Computing (Pervasive2012), Newcastle, 2012.
    [Bibtex] [Abstract]
    @conference{gisl12:pervasive,
    author = "Christophe Gisler and Grazia Barchi and G{\'e}r{\^o}me Bovet and Elena Mugellini and Jean Hennebert",
    abstract = "We report on the development of a wireless lamp dedicated to the feedback of energy consumption. The principle is to provide a simple and intuitive feedback to residents through color variations of the lamp depending on the amount of energy consumed in a house. Our system is demonstrated on the basis of inexpensive components piloted by a gateway storing and processing the energy data in a WoT framework. Different versions of the color choosing algorithm are also presented.",
    address = "Newcastle",
    booktitle = "The 10th International Conference on Pervasive Computing (Pervasive2012)",
    keywords = "Web of Things; Energy feedback; Green Computing; IT-for-Green",
    month = "jun",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    title = "{D}emonstration {O}f {A} {M}onitoring {L}amp {T}o {V}isualize {T}he {E}nergy {C}onsumption {I}n {H}ouses",
    Pdf = "http://www.hennebert.org/download/publications/pervasive-2012-demonstration-of-a-monitoring-lamp-to-visualize-the-energy-consumption-in-houses.pdf",
    year = "2012",
    }

    We report on the development of a wireless lamp dedicated to the feedback of energy consumption. The principle is to provide simple and intuitive feedback to residents through color variations of the lamp depending on the amount of energy consumed in a house. Our system is demonstrated on the basis of inexpensive components piloted by a gateway storing and processing the energy data in a WoT framework. Different versions of the color-choosing algorithm are also presented.
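
    One plausible reading of such a color-choosing algorithm, as a minimal sketch: the thresholds and the linear green-to-red blend below are assumptions, not the demonstrated system's.

    def lamp_color(power_w, low_w=200.0, high_w=3000.0):
        """Map instantaneous household power to the lamp's RGB colour:
        green at or below low_w, red at or above high_w, blended in
        between. The thresholds are illustrative."""
        frac = min(max((power_w - low_w) / (high_w - low_w), 0.0), 1.0)
        return (int(255 * frac), int(255 * (1 - frac)), 0)   # (R, G, B)

    for watts in (150, 1200, 3500):
        print(watts, "W ->", lamp_color(watts))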

  • [PDF] J. Hennebert, A. Schmoutz, S. Baudin, L. Zambon, and A. Delley, "Le projet ePark - Solutions technologiques pour la gestion des véhicules électriques et de leur charge," Electro Suisse Bulletin SEV/AES, vol. 4, pp. 34-36, 2012.
    [Bibtex] [Abstract]
    @article{henn12:electrosuisse,
    author = "Jean Hennebert and Alain Schmoutz and S{\'e}bastien Baudin and Loic Zambon and Antoine Delley",
    abstract = "le projet ePark vise {\`a} amener sur le march{\'e} une solution technologique globale et ouverte pour la gestion des v{\'e}hicules {\'e}lectriques et de leur charge. Il comprend l’{\'e}laboration d’un mod{\`e}le de march{\'e}, ainsi que la r{\'e}alisation d’une borne de charge low-cost et d’un syst{\`e}me d’information {\'e}volutif. La solution inclura des services de gestion de flottes et de parkings, de planification de la charge, d’authentification des usagers et de facturation, qui seront accessibles via des interfaces Web ou des clients mobiles de type smartphone.",
    issn = "1660-6728",
    journal = "Electro Suisse Bulletin SEV/AES",
    keywords = "Sustainable ICT, Green Computing, EV, IT for Efficiency",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "34-36",
    title = "{L}e projet e{P}ark - {S}olutions technologiques pour la gestion des v{\'e}hicules {\'e}lectriques et de leur charge",
    Pdf = "http://www.hennebert.org/download/publications/electrosuisse-2012-le-projet-epark-solutions-technologiques-pour-la-gestion-des-vehicules-electriques-et-de-leur-charge.pdf",
    volume = "4",
    year = "2012",
    }

    The ePark project aims to bring to market a global, open technological solution for managing electric vehicles and their charging. It comprises the development of a market model, as well as the realization of a low-cost charging station and an evolutive information system. The solution will include services for fleet and parking management, charge planning, user authentication and billing, accessible via web interfaces or smartphone mobile clients.

  • I. S. Comsa, M. Aydin, S. Zhang, P. Kuonen, and J. Wagen, "Multi Objective Resource Scheduling in LTE Networks using Reinforcement Learning," International Journal of Distributed Systems and Technologies, vol. 3, 2012.
    [Bibtex]
    @article{Comsa:1134,
    Author = {Ioan Sorin Comsa and Mehmet Aydin and Sijing Zhang and Pierre Kuonen and Jean-Fr{\'e}d{\'e}ric Wagen},
    Journal = {International Journal of Distributed Systems and Technologies},
    Title = {Multi Objective Resource Scheduling in LTE Networks using Reinforcement Learning},
    Volume = {3},
    Year = {2012}}
  • [PDF] F. Slimane, S. Kanoun, J. Hennebert, R. Ingold, and A. M. Alimi, "A New Baseline Estimation Method Applied to Arabic Word Recognition," in 10th IAPR International Workshop on Document Analysis Systems (DAS 2012), Gold Coast, Queensland, 2012.
    [Bibtex]
    @conference{fouad2012:das,
    author = "Fouad Slimane and Slim Kanoun and Jean Hennebert and Rolf Ingold and Adel M. Alimi",
    address = "Goldquest, Queensland",
    booktitle = "10th IAPR International Workshop on Document Analysis Systems (DAS 2012)",
    month = "March",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    title = "{A} {N}ew {B}aseline {E}stimation {M}ethod {A}pplied to {A}rabic {W}ord {R}ecognition",
    Pdf = "http://www.ict.griffith.edu.au/das2012/attachments/ShortPaperProceedings/S10.pdf",
    year = "2012",
    }
  • [PDF] F. Slimane, O. Zayene, S. Kanoun, A. M. Alimi, J. Hennebert, and R. Ingold, "New Features for Complex Arabic Fonts in Cascading Recognition System," in Proc. of 21st International Conference on Pattern Recognition (ICPR 2012), Tsukuba, Japan, 2012, pp. 738-741.
    [Bibtex] [Abstract]
    @conference{fouad2012:icpr,
    author = "Fouad Slimane and Oussema Zayene and Slim Kanoun and Adel M. Alimi and Jean Hennebert and Rolf Ingold",
    abstract = "We propose in this work an approach for automatic recognition of printed Arabic text in open vocabulary mode and ultra low resolution (72 dpi). This system is based on Hidden Markov Models using the HTK toolkit. The novelty of our work is in the analysis of three complex fonts presenting strong ligatures: DiwaniLetter, DecoTypeNaskh and DecoTypeThuluth. We propose a feature extraction based on statistical and structural primitives allowing a robust description of the different morphological variability of the considered fonts. The system is benchmarked on the Arabic Printed Text Image (APTI) database.",
    address = "Tsukuba, Japan",
    booktitle = "Proc. of 21th International Conference on Pattern Recognition (ICPR 2012)",
    isbn = "978-1-4673-2216-4",
    issn = "1051-4651",
    keywords = "Character and Text Recognition, Handwriting Recognition, Performance Evaluation, Machine Learning",
    month = "November",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "738-741",
    publisher = "IEEE",
    title = "{N}ew {F}eatures for {C}omplex {A}rabic {F}onts in {C}ascading {R}ecognition {S}ystem",
    Pdf = "http://www.hennebert.org/download/publications/icpr-2012-new-features-for-complex-arabic-fonts-in-cascading-recognition-system.pdf",
    year = "2012",
    }

    We propose in this work an approach for automatic recognition of printed Arabic text in open vocabulary mode and ultra low resolution (72 dpi). This system is based on Hidden Markov Models using the HTK toolkit. The novelty of our work is in the analysis of three complex fonts presenting strong ligatures: DiwaniLetter, DecoTypeNaskh and DecoTypeThuluth. We propose a feature extraction based on statistical and structural primitives allowing a robust description of the different morphological variability of the considered fonts. The system is benchmarked on the Arabic Printed Text Image (APTI) database.
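
    A minimal sketch of the sliding-window feature extraction that HMM-based text recognizers of this kind typically consume. The three features below are generic stand-ins, not the paper's statistical and structural primitives; numpy is assumed.

    import numpy as np

    def window_features(img, win=4, step=1):
        """Turn a binarised text-line image (2-D array, ink = 1) into a
        feature sequence for an HMM recognizer. Windows are scanned right
        to left, matching the reading order of Arabic; per window: ink
        density, vertical centre of gravity and vertical ink extent."""
        h, w = img.shape
        feats = []
        for x in range(w - win, -1, -step):
            patch = img[:, x:x + win]
            ink = patch.sum()
            rows = np.nonzero(patch.any(axis=1))[0]
            density = ink / patch.size
            cog = (patch.sum(axis=1) @ np.arange(h)) / (ink * h) if ink else 0.5
            extent = (rows[-1] - rows[0] + 1) / h if rows.size else 0.0
            feats.append((density, cog, extent))
        return np.array(feats)

    img = np.zeros((8, 12), dtype=int)
    img[2:6, 3:9] = 1                            # a fake ink blob
    print(window_features(img).shape)            # (9, 3): one row per window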

  • [PDF] [DOI] F. Slimane, S. Kanoun, J. Hennebert, R. Ingold, and A. M. Alimi, "Benchmarking Strategy for Arabic Screen-Rendered Word Recognition," in Guide to OCR for Arabic Scripts, V. Märgner and H. El Abed, Eds., Springer London, 2012, pp. 423-450.
    [Bibtex] [Abstract]
    @inbook{slim12:guideocr,
    author = "Fouad Slimane and Slim Kanoun and Jean Hennebert and Rolf Ingold and Adel M. Alimi",
    abstract = "This chapter presents a new benchmarking strategy for Arabic screen- based word recognition. Firstly, we report on the creation of the new APTI (Arabic Printed Text Image) database. This database is a large-scale benchmarking of open-vocabulary, multi-font, multi-size and multi-style word recognition systems in Arabic. Such systems take as input a text image and compute as output a character string corresponding to the text included in the image. The challenges that are addressed by the database are in the variability of the sizes, fonts and styles used to generate the images. A focus is also given on low resolution images where anti-aliasing is generating noise on the characters being recognized. The database contains 45,313,600 single word images totalling more than 250 million characters. Ground truth annotation is provided for each image from an XML file. The annotation includes the number of characters, the number of pieces of Arabic words (PAWs), the sequence of characters, the size, the style, the font used to generate each image, etc. Secondly, we describe the Arabic Recognition Competition: Multi-Font Multi-Size Digitally Represented Text held in the context of the 11th International Conference on Document Analysis and Recognition (ICDAR’2011), during September 18–21, 2011, Beijing, China. This first edition of the competition used the freely available APTI database. Two groups with three systems participated in the competition. The systems were compared using the recognition rates at the character and word levels. The systems were tested on one test dataset which is unknown to all participants (set 6 of APTI database). The systems were compared on the ground of the most important characteristic of classification systems: the recognition rate. A short description of the participating groups, their systems, the experimental setup and the observed results are presented. Thirdly, we present our DIVA-REGIM system (out of competition at ICDAR’2011) with all results of the Arabic recognition competition protocols.",
    booktitle = "Guide to OCR for Arabic Scripts",
    doi = "10.1007/978-1-4471-4072-6_18",
    editor = "M{\"a}rgner, Volker and El Abed, Haikal",
    isbn = "978-1-4471-4071-9",
    keywords = "arabic, ocr, recognition, database, benchmarking",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "423-450",
    publisher = "Springer London",
    series = "Guide to OCR for Arabic Scripts",
    title = "{B}enchmarking {S}trategy for {A}rabic {S}creen-{R}endered {W}ord {R}ecognition",
    Pdf = "http://www.hennebert.org/download/publications/guide-to-ocr-for-arabic-scripts-2012-benchmarking-strategy-for-arabic-screen-rendered-word-recognition.pdf",
    year = "2012",
    }

    This chapter presents a new benchmarking strategy for Arabic screen-based word recognition. Firstly, we report on the creation of the new APTI (Arabic Printed Text Image) database. This database enables large-scale benchmarking of open-vocabulary, multi-font, multi-size and multi-style word recognition systems in Arabic. Such systems take as input a text image and compute as output a character string corresponding to the text included in the image. The challenges that are addressed by the database are in the variability of the sizes, fonts and styles used to generate the images. A focus is also given on low resolution images where anti-aliasing generates noise on the characters being recognized. The database contains 45,313,600 single word images totalling more than 250 million characters. Ground truth annotation is provided for each image from an XML file. The annotation includes the number of characters, the number of pieces of Arabic words (PAWs), the sequence of characters, the size, the style, the font used to generate each image, etc. Secondly, we describe the Arabic Recognition Competition: Multi-Font Multi-Size Digitally Represented Text held in the context of the 11th International Conference on Document Analysis and Recognition (ICDAR’2011), during September 18–21, 2011, Beijing, China. This first edition of the competition used the freely available APTI database. Two groups with three systems participated in the competition. The systems were compared using the recognition rates at the character and word levels. The systems were tested on one test dataset which is unknown to all participants (set 6 of APTI database). The systems were compared on the ground of the most important characteristic of classification systems: the recognition rate. A short description of the participating groups, their systems, the experimental setup and the observed results are presented. Thirdly, we present our DIVA-REGIM system (out of competition at ICDAR’2011) with all results of the Arabic recognition competition protocols.

  • [PDF] [DOI] D. Zufferey, C. Gisler, O. A. Khaled, and J. Hennebert, "Machine Learning Approaches for Electric Appliance Classification," in The 11th International Conference on Information Sciences, Signal Processing and their Applications: Main Tracks (ISSPA2012 - Tracks), Montreal, Canada, 2012, pp. 740-745.
    [Bibtex] [Abstract]
    @conference{zuff12:isspa,
    author = "Damien Zufferey and Christophe Gisler and Omar Abou Khaled and Jean Hennebert",
    abstract = "We report on the development of an innovative system which can automatically recognize home appliances based on their electric consumption profiles. The purpose of our system is to apply adequate rules to control electric appliance in order to save energy and money. The novelty of our approach is in the use of plug-based low-end sensors that measure the electric consumption at low frequency, typically every 10 seconds. Another novelty is the use of machine learning approaches to perform the classification of the appliances. In this paper, we present the system architecture, the data acquisition protocol and the evaluation framework. More details are also given on the feature extraction and classification models being used. The evaluation showed promising results with a correct rate of identification of 85\%.",
    address = "Montreal, Canada, Canada",
    booktitle = "The 11th International Conference on Information Sciences, Signal Processing and their Applications: Main Tracks (ISSPA2012 - Tracks)",
    comments = "9781467303811",
    doi = "10.1109/ISSPA.2012.6310651",
    editor = "IEEE",
    isbn = "978-1-4673-0381-1",
    keywords = "Signal processing, machine learning algorithms, power system analysis computing, energy consumption, energy efficiency, sustainable development, green-computing, it-for-green",
    month = "jul",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "740-745",
    title = "{M}achine {L}earning {A}pproaches for {E}lectric {A}ppliance {C}lassification",
    Pdf = "http://www.hennebert.org/download/publications/isspa-2012-machine-learning-approaches-for-electrical-appliance-classification.pdf",
    year = "2012",
    }

    We report on the development of an innovative system which can automatically recognize home appliances based on their electric consumption profiles. The purpose of our system is to apply adequate rules to control electric appliances in order to save energy and money. The novelty of our approach is in the use of plug-based low-end sensors that measure the electric consumption at low frequency, typically every 10 seconds. Another novelty is the use of machine learning approaches to perform the classification of the appliances. In this paper, we present the system architecture, the data acquisition protocol and the evaluation framework. More details are also given on the feature extraction and classification models being used. The evaluation showed promising results with a correct rate of identification of 85%.
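
    A minimal sketch of such a classification pipeline under stated assumptions: hand-picked summary features, a 1-nearest-neighbour model from scikit-learn and invented profiles; the paper's exact features and models are not reproduced here.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def profile_features(watts):
        """Summarise a low-frequency consumption profile (one reading
        every ~10 s) into a small feature vector."""
        w = np.asarray(watts, dtype=float)
        return [w.mean(), w.std(), w.max(), np.abs(np.diff(w)).mean()]

    # Hypothetical labelled profiles, in watts.
    X = [profile_features(p) for p in ([5, 4, 6, 5],
                                       [1200, 0, 1150, 0],
                                       [60, 62, 61, 60])]
    y = ["standby", "kettle", "fridge"]
    clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
    print(clf.predict([profile_features([1180, 5, 1210, 0])]))   # -> ['kettle']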

  • N. Bessis, Y. Huang, P. Norrington, A. Brown, P. Kuonen, and B. Hirsbrunner, "Modelling of a self-led critical friend topology in inter-cooperative grid communities," Simulation Modelling Practice and Theory, vol. 19, pp. 5-16, 2011.
    [Bibtex]
    @article{Bessis:866,
    Author = {N. Bessis and Y. Huang and P. Norrington and A. Brown and Pierre Kuonen and Beat Hirsbrunner},
    Issn = {1569-190X},
    Journal = {Simulation Modelling Practice and Theory},
    Month = {jan},
    Pages = {5-16},
    Title = {Modelling of a self-led critical friend topology in inter-cooperative grid communities},
    Volume = {19},
    Year = {2011}}
  • I. S. Comsa, M. Aydin, S. Zhang, P. Kuonen, and J. Wagen, "Reinforcement Learning based Radio Resource Scheduling in LTE-Advanced," 17th International Conference on Automation and Computing, vol. Proceedings, 2011.
    [Bibtex]
    @article{Sorin:1127,
    Author = {Ioan Sorin Comsa and Mehmet Aydin and Sijing Zhang and Pierre Kuonen and Jean-Fr{\'e}d{\'e}ric Wagen},
    Journal = {17th International Conference on Automation and Computing},
    Month = {sep},
    Title = {Reinforcement Learning based Radio Resource Scheduling in LTE-Advanced},
    Volume = {Proceedings},
    Year = {2011}}
  • [PDF] H. Gaddour, H. Guesmi, F. Slimane, S. Kanoun, and J. Hennebert, "A New Method for Ranking of Word Hypotheses generated from OCR: The Application on the Arabic Word Recognition," in Proceedings of The Twelfth IAPR Conference on Machine Vision Applications (MVA 2011), Nara (Japan), 2011.
    [Bibtex] [Abstract]
    @conference{gadd11:mva,
    author = "Houda Gaddour and Han{\`e}ne Guesmi and Fouad Slimane and Slim Kanoun and Jean Hennebert",
    abstract = "In this paper, we propose a new method for the best ranking of OCR word hypotheses in order to increase the chances that the correct hypothesis will be ranked in the first position. This method is based on the images construction of the OCR word hypotheses and the calculation of the dissimilarity scores between these last constructed images and the image to recognize. To evaluate the new proposed method, we compare them with a classic method which is based on the ranking of OCR word hypotheses under the recognition process. The experimental results of these two methods on the database of 1000 word images show that the new proposed method led to the best ranking of OCR word hypotheses.",
    address = "Nara (Japan)",
    booktitle = "Proceedings of The Twelfth IAPR Conference on Machine Vision Applications (MVA 2011)",
    keywords = "arabic, HMM, image processing, machine learning, OCR",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    title = "{A} {N}ew {M}ethod for {R}anking of {W}ord {H}ypotheses generated from {OCR}: {T}he {A}pplication on the {A}rabic {W}ord {R}ecognition",
    Pdf = "http://www.hennebert.org/download/publications/mva-2011-new-method-ranking-word-hypotheses-generated-from-ocr-application-on-arabic-word-recognition.pdf",
    year = "2011",
    }

    In this paper, we propose a new method for ranking OCR word hypotheses in order to increase the chances that the correct hypothesis is ranked in first position. The method is based on constructing images of the OCR word hypotheses and computing dissimilarity scores between these constructed images and the image to be recognized. To evaluate the proposed method, we compare it with a classic method based on ranking the OCR word hypotheses during the recognition process. The experimental results of these two methods on a database of 1000 word images show that the proposed method leads to a better ranking of the OCR word hypotheses.
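
    The reconstruct-and-compare idea lends itself to a short sketch (assuming the rendered hypothesis images are already available from some text renderer not shown here; mean squared pixel difference stands in for the paper's dissimilarity scores):

    import numpy as np

    def rerank(query_img, hypotheses):
        """Re-rank OCR word hypotheses by pixel dissimilarity between
        the input word image and a rendered image of each candidate
        string; hypotheses maps candidate -> rendered image with the
        same shape as query_img."""
        q = np.asarray(query_img, dtype=float)
        score = {text: float(((q - np.asarray(img, dtype=float)) ** 2).mean())
                 for text, img in hypotheses.items()}
        return sorted(score, key=score.get)      # lowest dissimilarity first

    q = np.array([[0, 1], [1, 0]])
    hyps = {"ab": np.array([[0, 1], [1, 0]]),    # perfect match
            "ad": np.array([[1, 1], [1, 1]])}
    print(rerank(q, hyps))                       # ['ab', 'ad']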

  • Y. Huang, N. Bessis, P. Kuonen, and B. Hirsbrunner, "CASP: a community-aware scheduling protocol," International Journal of Grid and Utility Computing, vol. 2, pp. 11-24, 2011.
    [Bibtex] [Abstract]
    @article{Huang:1130,
    Abstract = {The existing resource and topology heterogeneity has
    divided the scheduling solutions into local schedulers and
    high-level schedulers (a.k.a. meta-schedulers). Although
    much work has been proposed to optimise job queue based
    scheduling, seldom has attention been put on the job sharing
    behaviours between decentralised distributed resource pools,
    which in turn raises a notable opportunity to exploit and
    optimise the process of job sharing between reachable grids
    dynamically and proactively. In our work, we introduce a
    novel scheduling protocol named the community-aware
    scheduling protocol (CASP), which is dedicated to disseminating
    scheduling events of each participating node to as many
    remote nodes as possible. By means of the proposed protocol,
    the scheduling process of each received job consists of two
    phases with awareness of grid volatility. The implemented
    prototype and evaluation results have shown that the introduced
    CASP is able to cooperate with a variety of local scheduling
    algorithms as well as diverse types of grids.},
    Author = {Ye Huang and Nik Bessis and Pierre Kuonen and Beat Hirsbrunner},
    Journal = {International Journal of Grid and Utility Computing},
    Keywords = {local scheduling},
    Pages = {11-24},
    Title = {CASP: a community-aware scheduling protocol},
    Volume = {2},
    Year = {2011}}

    The existing resource and topology heterogeneity has divided the scheduling solutions into local schedulers and high-level schedulers (a.k.a. meta-schedulers). Although much work has been proposed to optimise job queue based scheduling, seldom has attention been put on the job sharing behaviours between decentralised distributed resource pools, which in turn raises a notable opportunity to exploit and optimise the process of job sharing between reachable grids dynamically and proactively. In our work, we introduce a novel scheduling protocol named the community-aware scheduling protocol (CASP), which is dedicated to disseminating scheduling events of each participating node to as many remote nodes as possible. By means of the proposed protocol, the scheduling process of each received job consists of two phases with awareness of grid volatility. The implemented prototype and evaluation results have shown that the introduced CASP is able to cooperate with a variety of local scheduling algorithms as well as diverse types of grids.

  • Y. Huang, N. Bessis, P. Norrington, P. Kuonen, and B. Hirsbrunner, "Exploring decentralized dynamic scheduling for grids and clouds using the Community-Aware Scheduling Algorithm," Future Generation Computer Systems (FGCS), Elsevier, 2011.
    [Bibtex]
    @article{Huang:1128,
    Author = {Ye Huang and Nik Bessis and Peter Norrington and Pierre Kuonen and Beat Hirsbrunner},
    Issn = {0167-739X},
    Journal = {Future Generation Computer Systems (FGCS), Elsevier},
    Keywords = {Meta-scheduling},
    Title = {Exploring decentralized dynamic scheduling for grids and clouds using the Community-Aware Scheduling Algorithm},
    Year = {2011}}
  • Z. Lai, N. Bessis, G. De Laroche, P. Kuonen, J. Zhang, and G. Clapworthy, "The Development of a Parallel Ray Launching Algorithm for Wireless Network Planning," International Journal of Distributed Systems and Technologies, vol. 2, 2011.
    [Bibtex]
    @article{Lai:867,
    Author = {Z. Lai and N. Bessis and G. De Laroche and Pierre Kuonen and J. Zhang and G. Clapworthy},
    Journal = {International Journal of Distributed Systems and Technologies},
    Title = {The Development of a Parallel Ray Launching Algorithm for Wireless Network Planning},
    Volume = {2},
    Year = {2011}}
  • [PDF] [DOI] F. Slimane, S. Kanoun, H. E. Abed, A. Alimi, R. Ingold, and J. Hennebert, "ICDAR 2011 - Arabic Recognition Competition: Multi-font Multi-size Digitally Represented Text," in Document Analysis and Recognition (ICDAR), 2011 International Conference on, 2011, pp. 1449-1453.
    [Bibtex] [Abstract]
    @conference{fouad10:icdar,
    author = "Fouad Slimane and Slim Kanoun and Haikal El Abed and Adel Alimi and Rolf Ingold and Jean Hennebert",
    abstract = "This paper describes the Arabic Recognition Competition: Multi-font Multi-size Digitally Represented Text held in the context of the 11th International Conference on Document Analysis and Recognition (ICDAR2011), during September 18-21, 2011, Beijing, China. This first competition used the freely available Arabic Printed Text Image (APTI) database. Several research groups have started using the APTI database and this year, 2 groups with 3 systems are participating in the competition. The systems are compared using the recognition rates at the character and word levels. The systems were tested on one test dataset which is unknown to all participants (set 6 of APTI database). The systems are compared on the most important characteristic of classification systems, the recognition rate. A short description of the participating groups, their systems, the experimental setup, and the observed results are presented.",
    booktitle = "Document Analysis and Recognition (ICDAR), 2011 International Conference on",
    doi = "10.1109/ICDAR.2011.288",
    isbn = "9781457713507",
    issn = "1520-5363",
    keywords = "APTI database;Arabic printed text image database;Arabic recognition competition;ICDAR2011;character recognition;classification system;document analysis;document recognition;multifont multisize digitally text representation;character recognition;document i",
    month = "sept.",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "1449 -1453",
    title = "{ICDAR} 2011 - {A}rabic {R}ecognition {C}ompetition: {M}ulti-font {M}ulti-size {D}igitally {R}epresented {T}ext",
    Pdf = "http://www.hennebert.org/download/publications/icdar-2011-arabic-recognition-competition-multi-font-multi-size-digitally-represented-text.pdf",
    year = "2011",
    }

    This paper describes the Arabic Recognition Competition: Multi-font Multi-size Digitally Represented Text held in the context of the 11th International Conference on Document Analysis and Recognition (ICDAR2011), during September 18-21, 2011, Beijing, China. This first competition used the freely available Arabic Printed Text Image (APTI) database. Several research groups have started using the APTI database and this year, 2 groups with 3 systems are participating in the competition. The systems are compared using the recognition rates at the character and word levels. The systems were tested on one test dataset which is unknown to all participants (set 6 of APTI database). The systems are compared on the most important characteristic of classification systems, the recognition rate. A short description of the participating groups, their systems, the experimental setup, and the observed results are presented.
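
    The two comparison figures are easy to pin down; here is a minimal sketch of character- and word-level recognition rates (edit distance over aligned reference/hypothesis word lists; Latin transliterations stand in for the Arabic strings):

    def edit_distance(a, b):
        """Levenshtein distance by dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    def rates(truth, hypo):
        """Character- and word-level recognition rates over aligned
        lists of reference and recognised words."""
        chars = sum(len(t) for t in truth)
        errors = sum(edit_distance(t, h) for t, h in zip(truth, hypo))
        words = sum(t == h for t, h in zip(truth, hypo))
        return 1 - errors / chars, words / len(truth)

    print(rates(["kitab", "qalam"], ["kitab", "falam"]))   # (0.9, 0.5)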

  • [PDF] J. Hennebert, J. Rey, and Y. Bocchi, Les méthodes agiles en action, 2010.
    [Bibtex] [Abstract]
    @misc{rey2010:nouvelliste,
    author = "Jean Hennebert and Jean-Pierre Rey and Yann Bocchi",
    abstract = "L’{\'e}volution rapide de notre monde met en lumi{\`e}re les limites d’une gestion de projet traditionnelle. La solution de la HES-SO. Les m{\'e}thodes classiques de gestion de projet comprennent g{\'e}n{\'e}ralement les phases de sp{\'e}cification compl{\`e}te du produit {\`a} d{\'e}velopper, d’estimation et de planification, de mod{\'e}lisation, de r{\'e}alisation, de tests, de d{\'e}ploiement et de maintenance. Elles sont de moins en moins bien adapt{\'e}es aux nombreux changements qui ne manquent pas de se produire en cours de projet: facteurs externes (concurrence, nouvelles technologies {\'e}mergentes, nouveaux besoins, etc.) ou internes (changement d’organisation, ressources cl{\'e}s qui quittent l’entreprise, etc.), complexit{\'e} sous-{\'e}valu{\'e}e, attentes et besoins des clients qui {\'e}voluent au cours du temps, etc.
    Ces changements impliquent une m{\'e}thode de travail plus souple et plus r{\'e}active afin qu’un projet se termine {\`a} satisfaction pour toutes les parties engag{\'e}es. Le principe fondamental de l’agilit{\'e} est en effet de consid{\'e}rer le changement non plus comme une source de probl{\`e}mes mais comme un param{\`e}tre inh{\'e}rent {\`a} tous les projets. Le changement devient ainsi partie prenante du projet, qui s’organise et se rythme en fonction de celui-ci.",
    howpublished = "Le Nouvelliste",
    keywords = "Agile, IT project management, SCRUM",
    month = "oct#{27th}",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    title = "{L}es m{\'e}thodes agiles en action",
    Pdf = "http://www.hennebert.org/download/publications/nouvelliste-20101027_Methodes_Agiles_Entrent_En_Action.PDF",
    year = "2010",
    }

    The rapid evolution of our world highlights the limits of traditional project management. The HES-SO's solution. Classical project management methods generally comprise the phases of complete specification of the product to be developed, estimation and planning, modelling, realization, testing, deployment and maintenance. They are less and less well suited to the numerous changes that inevitably occur during a project: external factors (competition, emerging technologies, new needs, etc.) or internal ones (organizational change, key resources leaving the company, etc.), underestimated complexity, client expectations and needs that evolve over time, etc. These changes call for a more flexible and more reactive way of working so that a project ends to the satisfaction of all parties involved. The fundamental principle of agility is indeed to consider change no longer as a source of problems but as a parameter inherent to every project. Change thus becomes an integral part of the project, which is organized and paced accordingly.

  • Y. Huang, A. Brocco, M. Courant, B. Hirsbrunner, and P. Kuonen, "MaGate: An Interoperable, Decentralized and Modular High-Level Grid Scheduler," International Journal of Distributed Systems and Technologies, vol. 1, pp. 24-39, 2010.
    [Bibtex]
    @article{Huang:864,
    Author = {Ye Huang and Amos Brocco and Michel Courant and Beat Hirsbrunner and Pierre Kuonen},
    Journal = {International Journal of Distributed Systems and Technologies},
    Pages = {24-39},
    Title = {MaGate: An Interoperable, Decentralized and Modular High-Level Grid Scheduler},
    Volume = {1},
    Year = {2010}}
  • Y. Huang, A. Brocco, P. Kuonen, and B. Hirsbrunner, "Critical Friend Model: A Vision Towards Inter-cooperative Grid Communities," Journal On Advances in Intelligent Systems, vol. 3, pp. 24-33, 2010.
    [Bibtex]
    @article{Huang:863,
    Author = {Ye Huang and Amos Brocco and Pierre Kuonen and Beat Hirsbrunner},
    Issn = {1942-2679},
    Journal = {Journal On Advances in Intelligent Systems},
    Pages = {24-33},
    Title = {Critical Friend Model: A Vision Towards Inter-cooperative Grid Communities},
    Volume = {3},
    Year = {2010}}
  • P. Kuonen and M. Sawley, "GUEST EDITORIAL PREFACE: The Crystal Ball in HPC has Never Been More Exciting, nor More Important," International Journal of Distributed Systems and Technologies, vol. 1, 2010.
    [Bibtex]
    @article{Kuonen:865,
    Author = {Pierre Kuonen and Marie-Christine Sawley},
    Journal = {International Journal of Distributed Systems and Technologies},
    Month = {jun},
    Title = {GUEST EDITORIAL PREFACE: The Crystal Ball in HPC has Never Been More Exciting, nor More Important},
    Volume = {1},
    Year = {2010}}
  • P. Kuonen, "Community-Aware Scheduling Protocol for Grids," 24th IEEE International Conference on Advanced Information Networking and Applications (AINA), pp. 334-341, 2010.
    [Bibtex]
    @article{Kuonen:870,
    Author = {Pierre Kuonen},
    Journal = {24th IEEE International Conference on Advanced Information Networking and Applications (AINA)},
    Month = {apr},
    Pages = {334-341},
    Title = {Community-Aware Scheduling Protocol for Grids},
    Year = {2010}}
  • Z. Lai, N. Bessis, D. G. Laroche, P. Kuonen, J. Zhang, and G. Clapworthy, "On the Use of an Ray Launching for Indoor Scenarios," 4th European Conference on Antennas and Propagation (EuCAP, IEEE), 2010.
    [Bibtex]
    @article{Lai:869,
    Author = {Z. Lai and Nik Bessis and G. De Laroche and Pierre Kuonen and J. Zhang and G. Clapworthy},
    Journal = {4th European Conference on Antennas and Propagation (EuCAP, IEEE)},
    Month = {apr},
    Title = {On the Use of an Intelligent Ray Launching for Indoor Scenarios},
    Year = {2010}}
  • Z. Lai, N. Bessis, G. De Laroche, P. Kuonen, J. Zhang, and G. Clapworthy, "The Characterization and Human-Body Influence on Indoor 3.25 GHz Path Loss Measurement," International Workshop on Planning and Optimization of Wireless Communication Networks (IEEE WCNC2010 Workshop), 2010.
    [Bibtex]
    @article{Lai:868,
    Author = {Z. Lai and N. Bessis and G. De Laroche and Pierre Kuonen and J. Zhang and G. Clapworthy},
    Journal = {International Workshop on Planning and Optimization of Wireless Communication Networks (IEEE WCNC2010 Workshop)},
    Month = {apr},
    Title = {The Characterization and Human-Body Influence on Indoor 3.25 GHz Path Loss Measurement},
    Year = {2010}}
  • [PDF] [DOI] J. Ortega-Garcia, J. Fierrez, F. Alonso-Fernandez, J. Galbally, M. R. Freire, J. Gonzalez-Rodriguez, C. Garcia-Mateo, J. Alba-Castro, E. Gonzalez-Agulla, E. Otero-Muras, S. Garcia-Salicetti, L. Allano, B. Ly-Van, B. Dorizzi, J. Kittler, T. Bourlai, N. Poh, F. Deravi, M. W. R. Ng, M. Fairhurst, J. Hennebert, A. Humm, M. Tistarelli, L. Brodo, J. Richiardi, A. Drygajlo, H. Ganster, F. M. Sukno, S. Pavani, A. Frangi, L. Akarun, and A. Savran, "The Multiscenario Multienvironment BioSecure Multimodal Database (BMDB)," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, pp. 1097-1111, 2010.
    [Bibtex] [Abstract]
    @article{ortega10:tpami,
    author = "Javier Ortega-Garcia and Julian Fierrez and Fernando Alonso-Fernandez and Javier Galbally and Manuel R. Freire and Joaquin Gonzalez-Rodriguez and Carmen Garcia-Mateo and Jose-Luis Alba-Castro and Elisardo Gonzalez-Agulla and Enrique Otero-Muras and Sonia Garcia-Salicetti and Lorene Allano and Bao Ly-Van and Bernadette Dorizzi and Josef Kittler and Thirimachos Bourlai and Norman Poh and Farzin Deravi and Ming W. R. Ng and Michael Fairhurst and Jean Hennebert and Andreas Humm and Massimo Tistarelli and Linda Brodo and Jonas Richiardi and Andrzej Drygajlo and Harald Ganster and Federico M. Sukno and Sri-Kaushik Pavani and Alejandro Frangi and Lale Akarun and Arman Savran",
    abstract = "A new multimodal biometric database designed and acquired within the framework of the European BioSecure Network of Excellence (NoE) is presented. It comprises more than 600 individuals acquired simultaneously in three scenarios: i) over the Internet, ii) in an office environment with desktop PC, and iii) in indoor/outdoor environments with mobile portable hardware. Data has been acquired over two acquisition sessions and using different sensors in certain modalities. The three scenarios include a common part of audio and video data (face still images and talking face videos). Also, signature and fingerprint data has been acquired both with desktop PC and mobile portable hardware. Additionally, hand and iris data was acquired in the second scenario using desktop PC. Acquisition has been conducted by 11 European institutions taking part in the BioSecure NoE. Additional features of the BioSecure Multimodal Database (BMDB) are: balanced gender and age distributions, multimodal realistic scenarios with simple and quick tasks per modality, cross-European diversity (language, face, etc.), availability of demographic data (age, gender, handedness, visual aids, manual worker and English proficiency) and compatibility with other multimodal databases. The novel acquisition conditions of the BMDB database allow to perform new challenging research and evaluation of either monomodal or multimodal biometric systems, as in the recent BioSecure Multimodal Evaluation Campaign. A description of this campaign including baseline results of individual modalities from the new database is also given. The database is expected to be available for research purposes through the BioSecure Association during 2008.",
    crossref = " ",
    doi = "10.1109/TPAMI.2009.76",
    issn = "0162-8828",
    journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
    keywords = "Benchmarking, Biometrics, machine learning",
    month = " ",
    note = "http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4815263
    Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    pages = "1097-1111",
    title = "{T}he {M}ultiscenario {M}ultienvironment {B}io{S}ecure {M}ultimodal {D}atabase ({BMDB})",
    Pdf = "http://www.hennebert.org/download/publications/tpami-2010-multiscenario-multienvironment-biosecure-multimodal-database-bmdb.pdf",
    volume = "32",
    year = "2010",
    }

    A new multimodal biometric database designed and acquired within the framework of the European BioSecure Network of Excellence (NoE) is presented. It comprises more than 600 individuals acquired simultaneously in three scenarios: i) over the Internet, ii) in an office environment with desktop PC, and iii) in indoor/outdoor environments with mobile portable hardware. Data has been acquired over two acquisition sessions and using different sensors in certain modalities. The three scenarios include a common part of audio and video data (face still images and talking face videos). Also, signature and fingerprint data has been acquired both with desktop PC and mobile portable hardware. Additionally, hand and iris data was acquired in the second scenario using desktop PC. Acquisition has been conducted by 11 European institutions taking part in the BioSecure NoE. Additional features of the BioSecure Multimodal Database (BMDB) are: balanced gender and age distributions, multimodal realistic scenarios with simple and quick tasks per modality, cross-European diversity (language, face, etc.), availability of demographic data (age, gender, handedness, visual aids, manual worker and English proficiency) and compatibility with other multimodal databases. The novel acquisition conditions of the BMDB database allow to perform new challenging research and evaluation of either monomodal or multimodal biometric systems, as in the recent BioSecure Multimodal Evaluation Campaign. A description of this campaign including baseline results of individual modalities from the new database is also given. The database is expected to be available for research purposes through the BioSecure Association during 2008.

  • [PDF] G. Rudaz, J. Hennebert, and H. Müller, "A 3D Object Retrieval System Using Text and Simple Visual Information," HES-SO, Technical Report, 2010.
    [Bibtex] [Abstract]
    @techreport{rudaz2010:3dobject,
    author = "Gilles Rudaz and Jean Hennebert and Henning M\"uller",
    abstract = "3D objects are being increasingly produced and used by a broad public. Tools are thus required to manage collections of 3D objects in a similar way to the management of image collections including the possibility to search using text queries. The system we are presenting in this paper goes one step further allowing to search for 3D objects not only using text queries but also using very simple similarity metrics inferred from the 3D spatial description. The multimodal search is stepwise. First, the user searches for a set of relevant objects using a classical text–based query engine. Second, the user selects a subset of the returned objects to perform a new query in the database according to a relevance feedback method. The relevance is computed using different geometric criteria that can be activated or deactivated. External objects can also be submitted directly for a similarity query. The current visual features are very simple but are planned to be extended in the future.",
    address = " ",
    institution = "HES-SO",
    keywords = "3D information retrieval, visual information retrieval",
    month = "jul",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    title = "{A} 3{D} {O}bject {R}etrieval {S}ystem {U}sing {T}ext and {S}imple {V}isual {I}nformation",
    type = "Technical Report",
    Pdf = "http://www.hennebert.org/download/publications/10-01-3d_object_retrieval_system_using_text_and_simple_visual_information-rudaz-hennebert-muller.pdf",
    year = "2010",
    }

    3D objects are being increasingly produced and used by a broad public. Tools are thus required to manage collections of 3D objects in a similar way to the management of image collections including the possibility to search using text queries. The system we are presenting in this paper goes one step further, allowing search for 3D objects not only using text queries but also using very simple similarity metrics inferred from the 3D spatial description. The multimodal search is stepwise. First, the user searches for a set of relevant objects using a classical text-based query engine. Second, the user selects a subset of the returned objects to perform a new query in the database according to a relevance feedback method. The relevance is computed using different geometric criteria that can be activated or deactivated. External objects can also be submitted directly for a similarity query. The current visual features are very simple but are planned to be extended in the future.
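
    A minimal sketch of the stepwise relevance feedback: the geometric features and the on/off criteria mask below are assumptions in the spirit of the report, not its actual features.

    import numpy as np

    def geometric_features(vertices):
        """Very simple shape descriptors from a vertex cloud (N x 3):
        two bounding-box aspect ratios and the mean distance to the
        centroid -- placeholders for the report's simple visual features."""
        v = np.asarray(vertices, dtype=float)
        ext = np.sort(v.max(axis=0) - v.min(axis=0))   # sorted box extents
        spread = np.linalg.norm(v - v.mean(axis=0), axis=1).mean()
        return np.array([ext[0] / ext[2], ext[1] / ext[2], spread])

    def feedback_query(database, selected, active=(True, True, True)):
        """Relevance feedback: average the features of the objects the
        user marked as relevant, then rank the database by distance over
        the criteria left activated."""
        mask = np.array(active, dtype=bool)
        target = np.mean([database[k][mask] for k in selected], axis=0)
        dist = {k: float(np.linalg.norm(f[mask] - target))
                for k, f in database.items()}
        return sorted(dist, key=dist.get)

    rng = np.random.default_rng(0)
    db = {"cube": geometric_features(rng.random((60, 3))),
          "slab": geometric_features(rng.random((60, 3)) * [4, 4, 0.2]),
          "bar": geometric_features(rng.random((60, 3)) * [6, 0.5, 0.5])}
    print(feedback_query(db, selected=["slab"]))   # 'slab' ranks first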

  • [PDF] [DOI] F. Slimane, R. Ingold, S. Kanoun, A. Alimi, and J. Hennebert, "Impact of Character Models Choice on Arabic Text Recognition Performance," in 12th International Conference on Frontiers in Handwriting Recognition, ICFHR 2010, 2010, pp. 670-675.
    [Bibtex] [Abstract]
    @conference{fouad10:icfhr,
    author = "Fouad Slimane and Rolf Ingold and Slim Kanoun and Adel Alimi and Jean Hennebert",
    abstract = "We analyze in this paper the impact of sub-models choice for automatic Arabic printed text recognition based on Hidden Markov Models (HMM). In our approach, sub-models correspond to characters shapes assembled to compose words models. One of the peculiarities of Arabic writing is to present various character shapes according to their position in the word. With 28 basic characters, there are over 120 different shapes. Ideally, there should be one sub-model for each different shape. However, some shapes are less frequent than others and, as training databases are finite, the learning process leads to less reliable models for the infrequent shapes. We show in this paper that an optimal set of models has then to be found looking for the trade-off between having more models capturing the intricacies of shapes and grouping the models of similar shapes with other. We propose in this paper different sets of sub-models that have been evaluated using the Arabic Printed Text Image (APTI) Database freely available for the scientific community.",
    address = " ",
    booktitle = "12th International Conference on Frontiers in Handwriting Recognition, ICFHR 2010",
    crossref = " ",
    doi = "10.1109/ICFHR.2010.110",
    editor = " ",
    isbn = "9781424483532",
    keywords = "arabic, HMM, machine learning, OCR",
    month = "nov",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "670-675",
    publisher = " ",
    series = " ",
    title = "{I}mpact of {C}haracter {M}odels {C}hoice on {A}rabic {T}ext {R}ecognition {P}erformance",
    Pdf = "http://www.hennebert.org/download/publications/icfhr-2010-Impact-of-Character-Models-Choice-on-Arabic-Text-Recognition-Performance.pdf",
    volume = " ",
    year = "2010",
    }

    We analyze in this paper the impact of sub-model choice for automatic Arabic printed text recognition based on Hidden Markov Models (HMM). In our approach, sub-models correspond to character shapes assembled to compose word models. One of the peculiarities of Arabic writing is that characters take various shapes according to their position in the word. With 28 basic characters, there are over 120 different shapes. Ideally, there should be one sub-model for each different shape. However, some shapes are less frequent than others and, as training databases are finite, the learning process leads to less reliable models for the infrequent shapes. We show in this paper that an optimal set of models has to be found by looking for the trade-off between having more models capturing the intricacies of shapes and grouping the models of similar shapes with others. We propose different sets of sub-models that have been evaluated using the Arabic Printed Text Image (APTI) database, freely available to the scientific community.
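
    The trade-off discussed above can be made concrete with a simple frequency-threshold rule: shapes seen often enough in the training data receive a dedicated sub-model, while rare shapes fall back to a model shared with their base character. This is a sketch of one plausible grouping rule, not the exact model sets evaluated in the paper; the threshold value is arbitrary.

        from collections import Counter

        def build_model_set(training_shapes, min_count=500):
            """Map each character shape to a model id. A shape is assumed to be
            a (base_character, position) pair, e.g. ('alef', 'isolated').
            Frequent shapes get their own model; rare ones share a model."""
            counts = Counter(training_shapes)
            return {
                shape: shape if counts[shape] >= min_count else (shape[0], 'shared')
                for shape in counts
            }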

  • [PDF] [DOI] F. Slimane, S. Kanoun, A. Alimi, J. Hennebert, and R. Ingold, "Comparison of Global and Cascading Recognition Systems Applied to Multi-font Arabic Text," in 10th ACM Symposium on Document Engineering, DocEng'10, 2010, pp. 161-164.
    [Bibtex] [Abstract]
    @conference{fouad10:doceng,
    author = "Fouad Slimane and Slim Kanoun and Adel Alimi and Jean Hennebert and Rolf Ingold",
    abstract = "A known difficulty of Arabic text recognition is in the large variability of printed representation from one font to the other. In this paper, we present a comparative study be- tween two strategies for the recognition of multi-font Arabic text. The first strategy is to use a global recognition system working independently on all the fonts. The second strategy is to use a so-called cascade built from a font identification system followed by font-dependent systems. In order to reach a fair comparison, the feature extraction and the modeling algorithms based on HMMs are kept as similar as possible between both approaches. The evaluation is carried out on the large and publicly available APTI (Arabic Printed Text Image) database with 10 different fonts. The results are showing a clear advantage of performance for the cascading approach. However, the cascading system is more costly in terms of cpu and memory.",
    address = " ",
    booktitle = "10th ACM Symposium on Document Engineering, DocEng'10",
    crossref = " ",
    doi = "10.1145/1860559.1860591",
    editor = " ",
    isbn = "9781450302319",
    keywords = "arabic, HMM, image processing, machine learning, OCR",
    month = "sep",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "161-164",
    publisher = " ",
    series = "DocEng '10",
    title = "{C}omparison of {G}lobal and {C}ascading {R}ecognition {S}ystems {A}pplied to {M}ulti-font {A}rabic {T}ext",
    Pdf = "http://www.hennebert.org/download/publications/doceng-2010-Comparison-of-Global-and-Cascading-Recognition-Systems-Applied-to-Multi-font-Arabic-Text.pdf",
    volume = " ",
    year = "2010",
    }

    A known difficulty of Arabic text recognition is the large variability of the printed representation from one font to another. In this paper, we present a comparative study between two strategies for the recognition of multi-font Arabic text. The first strategy is to use a global recognition system working independently on all the fonts. The second strategy is to use a so-called cascade built from a font identification system followed by font-dependent systems. In order to reach a fair comparison, the feature extraction and the modeling algorithms based on HMMs are kept as similar as possible between both approaches. The evaluation is carried out on the large and publicly available APTI (Arabic Printed Text Image) database with 10 different fonts. The results show a clear performance advantage for the cascading approach. However, the cascading system is more costly in terms of CPU and memory.
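
    Schematically, the two strategies compared above reduce to a single-stage recognizer versus a two-stage dispatch. A minimal sketch, where identify_font and the recognizers dictionary stand in for the paper's HMM-based components:

        def global_recognize(image, recognizer):
            """Global strategy: one recognizer trained across all fonts."""
            return recognizer(image)

        def cascade_recognize(image, identify_font, recognizers):
            """Cascading strategy: font identification first, then the
            font-dependent recognizer selected by the detected font."""
            font = identify_font(image)        # stage 1: font identification
            return recognizers[font](image)    # stage 2: font-dependent system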

  • [PDF] [DOI] F. Slimane, S. Kanoun, A. Alimi, R. Ingold, and J. Hennebert, "Gaussian Mixture Models for Arabic Font Recognition," in 20th International Conference on Pattern Recognition, ICPR 2010, Istanbul (Turkey), 2010, pp. 2174-2177.
    [Bibtex] [Abstract]
    @conference{fouad10:icpr,
    author = "Fouad Slimane and Slim Kanoun and Adel Alimi and Rolf Ingold and Jean Hennebert",
    abstract = "We present in this paper a new approach for Arabic font recognition. Our proposal is to use a fixed- length sliding window for the feature extraction and to model feature distributions with Gaussian Mixture Models (GMMs). This approach presents a double advantage. First, we do not need to perform a priori segmentation into characters, which is a difficult task for arabic text. Second, we use versatile and powerful GMMs able to model finely distributions of features in large multi-dimensional input spaces. We report on the evaluation of our system on the APTI (Arabic Printed Text Image) database using 10 different fonts and 10 font sizes. Considering the variability of the different font shapes and the fact that our system is independent of the font size, the obtained results are convincing and compare well with competing systems.",
    address = "Istanbul (Turkey)",
    booktitle = "20th International Conference on Pattern Recognition, ICPR 2010",
    crossref = " ",
    doi = "10.1109/ICPR.2010.532",
    editor = " ",
    isbn = "9781424475421",
    keywords = "arabic, GMM, machine learning, OCR",
    month = "aug",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "2174-2177",
    publisher = " ",
    series = " ",
    title = "{G}aussian {M}ixture {M}odels for {A}rabic {F}ont {R}ecognition",
    Pdf = "http://www.hennebert.org/download/publications/icpr-2010-Gaussian-Mixture-Models-for-Arabic-Font-Recognition.pdf",
    volume = " ",
    year = "2010",
    }

    We present in this paper a new approach for Arabic font recognition. Our proposal is to use a fixed-length sliding window for the feature extraction and to model feature distributions with Gaussian Mixture Models (GMMs). This approach presents a double advantage. First, we do not need to perform a priori segmentation into characters, which is a difficult task for Arabic text. Second, we use versatile and powerful GMMs able to finely model distributions of features in large multi-dimensional input spaces. We report on the evaluation of our system on the APTI (Arabic Printed Text Image) database using 10 different fonts and 10 font sizes. Considering the variability of the different font shapes and the fact that our system is independent of the font size, the obtained results are convincing and compare well with competing systems.
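
    A minimal version of this GMM classification scheme can be assembled with scikit-learn: one mixture per font, trained on sliding-window feature vectors, with classification by total log-likelihood. Feature extraction is omitted, and the mixture size and covariance type below are arbitrary choices, not the paper's configuration.

        from sklearn.mixture import GaussianMixture

        def train_font_models(features_per_font, n_components=64):
            """Fit one GMM per font on its (n_windows, n_dims) feature matrix."""
            return {
                font: GaussianMixture(n_components, covariance_type='diag').fit(X)
                for font, X in features_per_font.items()
            }

        def classify_font(models, X):
            """Pick the font whose GMM gives the highest total log-likelihood
            over the word's sliding-window features X."""
            return max(models, key=lambda font: models[font].score_samples(X).sum())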

  • [PDF] F. Verdet, D. Matrouf, J. Bonastre, and J. Hennebert, "Channel Detectors for System Fusion in the Context of NIST LRE 2009," in 11th Annual Conference of the International Speech Communication Association, Interspeech 2010, 2010, pp. 733-736.
    [Bibtex] [Abstract]
    @conference{verdet10:interspeech,
    author = "Florian Verdet and Driss Matrouf and Jean-Fran{\c{c}}ois Bonastre and Jean Hennebert",
    abstract = "One of the difficulties in Language Recognition is the variability of the speech signal due to speakers and channels. If channel mismatch is too big and when different categories of channels can be identified, one possibility is to build a separate language recognition system for each category and then to fuse them together. This article uses a system selector that takes, for each utterance, the scores of one of the channel-category dependent systems. This selection is guided by a channel detector. We analyze different ways to design such channel detectors: based on cepstral features or on the Factor Analysis channel variability term. The systems are evaluated in the context of NIST’s LRE 2009 and run at 1.65\% minCavg for a subset of 8 languages and at 3.85\% minCavg for the 23 language setup.",
    address = " ",
    booktitle = "11th Annual Conference of the International Speech Communication Association, Interspeech 2010",
    crossref = " ",
    editor = " ",
    keywords = "channel, channel category, channel detector, factor analysis, fusion, Language Identification, machine learning",
    month = "sep",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = " 733-736",
    publisher = " ",
    series = " ",
    title = "{C}hannel {D}etectors for {S}ystem {F}usion in the {C}ontext of {NIST} {LRE} 2009",
    Pdf = "http://www.hennebert.org/download/publications/interspeech-2010_Channel-Detectors-for-System-Fusion-in-the-Context-of-NIST-LRE-2009.pdf",
    volume = " ",
    year = "2010",
    }

    One of the difficulties in Language Recognition is the variability of the speech signal due to speakers and channels. If the channel mismatch is too large and different categories of channels can be identified, one possibility is to build a separate language recognition system for each category and then to fuse them together. This article uses a system selector that takes, for each utterance, the scores of one of the channel-category dependent systems. This selection is guided by a channel detector. We analyze different ways to design such channel detectors: based on cepstral features or on the Factor Analysis channel variability term. The systems are evaluated in the context of NIST’s LRE 2009 and run at 1.65% minCavg for a subset of 8 languages and at 3.85% minCavg for the 23-language setup.
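
    In equation form, the selector described above routes each utterance to exactly one channel-dependent system: the channel detector picks the most likely channel category, and only that system's score is kept. With \hat{c} the detected channel and S_c the score function of the system trained on channel c, this reads (in LaTeX notation):

        \hat{c} \;=\; \arg\max_{c} \, P(c \mid x),
        \qquad
        s(x) \;=\; S_{\hat{c}}(x)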

  • [PDF] F. Verdet, D. Matrouf, J. Bonastre, and J. Hennebert, "Coping with Two Different Transmission Channels in Language Recognition," in Odyssey 2010, The Speaker and Language Recognition Workshop, 2010, pp. 230-237.
    [Bibtex] [Abstract]
    @conference{verdet10:odyssey,
    author = "Florian Verdet and Driss Matrouf and Jean-Fran{\c{c}}ois Bonastre and Jean Hennebert",
    abstract = "This paper confirms the huge benefits of Factor Analysis over Maximum A-Posteriori adaptation for language recognition (up to 87\% relative gain). We investigate ways to cope with the particularity of NIST’s LRE 2009, containing Conversational Telephone Speech (CTS) and phone bandwidth segments of radio broadcasts (Voice Of America, VOA). We analyze GMM systems using all data pooled together, eigensession matrices estimated on a per condition basis and systems using a concatenation of these matrices. Results are presented on all LRE 2009 test segments, as well as only on the CTS or only on the VOA test utterances. Since performances on all 23 languages are not trivial to compare, due to lacking language–channel combinations in the training and also in the testing data, all systems are also evaluated in the context of the subset of 8 common languages. Addressing the question if a fusion of two channel specific systems may be more beneficial than putting all data together, we study an oracle based system selector. On the 8 language subset, a pure CTS system performs at a minimal average cost of 2.7\% and pure VOA at 1.9\% minCavg on their respective test conditions. The fusion of these two systems runs at 2.0\% minCavg. As main observation, we see that the way we estimate the session compensation matrix has not a big influence, as long as the language–channel combinations cover those used for training the language models. Far more crucial is the kind of data used for model estimation.",
    address = " ",
    booktitle = "Odyssey 2010, The Speaker and Language Recognition Workshop",
    crossref = " ",
    editor = " ",
    keywords = "Benchmarking, Biometrics, Speaker Verification",
    month = "jun",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "230-237",
    publisher = " ",
    series = " ",
    title = "{C}oping with {T}wo {D}ifferent {T}ransmission {C}hannels in {L}anguage {R}ecognition",
    Pdf = "http://www.hennebert.org/download/publications/odyssey10_Coping-with-Two-Different-Transmission-Channels-in-Language-Recognition.pdf",
    volume = " ",
    year = "2010",
    }

    This paper confirms the huge benefits of Factor Analysis over Maximum A-Posteriori adaptation for language recognition (up to 87% relative gain). We investigate ways to cope with the particularity of NIST’s LRE 2009, containing Conversational Telephone Speech (CTS) and phone-bandwidth segments of radio broadcasts (Voice Of America, VOA). We analyze GMM systems using all data pooled together, eigensession matrices estimated on a per-condition basis, and systems using a concatenation of these matrices. Results are presented on all LRE 2009 test segments, as well as only on the CTS or only on the VOA test utterances. Since performances on all 23 languages are not trivial to compare, due to missing language-channel combinations in the training and also in the testing data, all systems are also evaluated on the subset of 8 common languages. Addressing the question whether a fusion of two channel-specific systems may be more beneficial than putting all data together, we study an oracle-based system selector. On the 8-language subset, a pure CTS system performs at a minimal average cost of 2.7% and pure VOA at 1.9% minCavg on their respective test conditions. The fusion of these two systems runs at 2.0% minCavg. As a main observation, we see that the way we estimate the session compensation matrix does not have a big influence, as long as the language-channel combinations cover those used for training the language models. Far more crucial is the kind of data used for model estimation.
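
    The factor-analysis model behind these systems decomposes a GMM mean supervector into a language part and a session part; the per-condition matrices discussed above correspond to separate estimates of the session subspace, and the concatenated variant stacks them side by side. In the usual notation (m: UBM mean supervector; D y_l: language offset; U x_{l,h}: session component of utterance h), with the caveat that the paper's exact symbols may differ:

        m_{l,h} \;=\; m \;+\; D\,y_{l} \;+\; U\,x_{l,h},
        \qquad
        U_{\mathrm{concat}} \;=\; \bigl[\, U_{\mathrm{CTS}} \;\; U_{\mathrm{VOA}} \,\bigr]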

  • [PDF] [DOI] M. E. Betjali, J. Bloeche, A. Humm, R. Ingold, and J. Hennebert, "Labeled Images Verification Using Gaussian Mixture Models," in 24th Annual ACM Symposium on Applied Computing (ACM SAC 09), Honolulu, USA, March 8 - 12, 2009, pp. 1331-1336.
    [Bibtex] [Abstract]
    @conference{betj09:acmsac,
    author = "Micheal El Betjali and Jean-Luc Bloeche and Andreas Humm and Rolf Ingold and Jean Hennebert",
    abstract = "We are proposing in this paper an automated system to verify that images are correctly associated to labels. The novelty of the system is in the use of Gaussian Mixture Models (GMMs) as statistical modeling scheme as well as in several improvements introduced specifically for the verification task. Our approach is evaluated using the Caltech 101 database. Starting from an initial baseline system providing an equal error rate of 27.4\%, we show that the rate of errors can be reduced down to 13\% by introducing several optimizations of the system. The advantage of the approach lies in the fact that basically any object can be generically and blindly modeled with limited supervision. A potential target application could be a post-filtering of images returned by search engines to prune out or reorder less relevant images.",
    address = " ",
    booktitle = "24th Annual ACM Symposium on Applied Computing (ACM SAC 09), Honolulu, USA, March 8 - 12",
    crossref = " ",
    doi = "10.1145/1529282.1529581",
    editor = " ",
    isbn = "9781605581668",
    keywords = "image recognition, gmm",
    month = " March",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "1331--1336",
    publisher = " ",
    series = " ",
    title = "{L}abeled {I}mages {V}erification {U}sing {G}aussian {M}ixture {M}odels",
    Pdf = "http://www.hennebert.org/download/publications/sac-acm-2009-labeled-images-verification-using-gaussian-mixture-models.pdf",
    volume = " ",
    year = "2009",
    }

    We propose in this paper an automated system to verify that images are correctly associated to labels. The novelty of the system is in the use of Gaussian Mixture Models (GMMs) as the statistical modeling scheme, as well as in several improvements introduced specifically for the verification task. Our approach is evaluated using the Caltech 101 database. Starting from an initial baseline system providing an equal error rate of 27.4%, we show that the rate of errors can be reduced down to 13% by introducing several optimizations of the system. The advantage of the approach lies in the fact that basically any object can be generically and blindly modeled with limited supervision. A potential target application could be a post-filtering of images returned by search engines to prune out or reorder less relevant images.
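
    Although the abstract does not spell out the scoring, GMM-based verification of this kind conventionally rests on an average log-likelihood ratio between the label model and a background model, thresholded to accept or reject. For T local feature vectors x_t and the two GMMs λ_label and λ_bg:

        \Lambda(X) \;=\; \frac{1}{T} \sum_{t=1}^{T}
        \Bigl( \log p(x_t \mid \lambda_{\mathrm{label}})
             - \log p(x_t \mid \lambda_{\mathrm{bg}}) \Bigr)
        \;\gtrless\; \theta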

  • [PDF] J. Hennebert, Vers La Business Intelligence Environnementale, 2009.
    [Bibtex] [Abstract]
    @misc{henn09:ibcom,
    author = "Jean Hennebert",
    abstract = "Les compagnies ne connaissent g{\'e}n{\'e}ralement pas leur impact direct et indirect sur l'environnement. De telles informations deviennent aujourd'hui strat{\'e}giques pour trois raisons. Premi{\`e}rement, les soci{\'e}t{\'e}s recherchent des solutions pour identifier les priorit{\'e}s d'action. Deuxi{\`e}mement, elles veulent pr{\'e}venir les risques, par exemple en anticipant les d{\'e}cisions des l{\'e}gislations qui sont en train de se mettre en place. Finalement, elles veulent pouvoir communiquer l'efficience de leurs actions et se comparer {\`a} la concurrence.",
    howpublished = "IBCOM market.ch, p. 7",
    keywords = "BI, Business-Intelligence, green-it",
    month = "sep",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    title = "{V}ers {L}a {B}usiness {I}ntelligence {E}nvironnementale",
    Pdf = "http://www.hennebert.org/download/publications/articleIBCOM-2009-Vers-La-Business-Intelligence-Environnementale_jean-hennebert.pdf",
    year = "2009",
    }

    Companies generally do not know their direct and indirect impact on the environment. Such information is now becoming strategic for three reasons. First, companies are looking for solutions to identify priorities for action. Second, they want to prevent risks, for example by anticipating the decisions of legislation currently being put in place. Finally, they want to be able to communicate the efficiency of their actions and compare themselves with the competition.

  • [PDF] J. Hennebert, "Encyclopedia of Biometrics, Speaker Recognition Overview," , S. Li, Ed., Springer, 2009, vol. 2, pp. 1262-1270.
    [Bibtex] [Abstract]
    @inbook{henn09:enc,
    author = "Jean Hennebert",
    abstract = "Speaker recognition is the task of recognizing people from their voices. Speaker recognition is based on the extraction and modeling of acoustic features of speech that can differentiate individuals. These features conveys two kinds of biometric information: physiological properties (anatomical configuration of the vocal apparatus) and behavioral traits (speaking style). Automatic speaker recognition technology declines into four major tasks, speaker identification, speaker verification, speaker
    segmentation and speaker tracking. While these tasks are quite different by their potential applications, the underlying technologies are yet closely related.",
    address = " ",
    chapter = " ",
    edition = " ",
    editor = "Li, Stan",
    isbn = "9780387730028",
    keywords = "Biometrics, Speaker Verification",
    month = " ",
    note = "http://www.springer.com/computer/computer+imaging/book/978-0-387-73002-8
    Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.
    http://www.hennebert.org/download/publications/encyclopedia-of-biometrics-2009-speaker-verification.pdf",
    number = " ",
    pages = "1262-1270",
    publisher = "Springer",
    series = "Springer Reference",
    title = "{E}ncyclopedia of {B}iometrics, {S}peaker {R}ecognition {O}verview",
    type = " ",
    Pdf = "http://www.hennebert.org/download/publications/encyclopedia-of-biometrics-2009-speaker-verification.pdf",
    volume = "2",
    year = "2009",
    }

    Speaker recognition is the task of recognizing people from their voices. It is based on the extraction and modeling of acoustic features of speech that can differentiate individuals. These features convey two kinds of biometric information: physiological properties (anatomical configuration of the vocal apparatus) and behavioral traits (speaking style). Automatic speaker recognition technology divides into four major tasks: speaker identification, speaker verification, speaker segmentation and speaker tracking. While these tasks are quite different in their potential applications, the underlying technologies are closely related.

  • Y. Huang, N. Bessis, A. Brocco, P. Kuonen, S. Sotiriadis, M. Courant, and B. Hirsbrunner, "Towards an integrated vision across inter-cooperative grid virtual organizations," Future Generation Information Technology (FGIT), pp. 120-128, 2009.
    [Bibtex]
    @article{Huang:738,
    Author = {Ye Huang and Nik Bessis and Amos Brocco and Pierre Kuonen and Stelios Sotiriadis and Michele Courant and Beat Hirsbrunner},
    Journal = {Future Generation Information Technology (FGIT)},
    Month = {December},
    Pages = {120-128},
    Title = {Towards an integrated vision across inter-cooperative grid virtual organizations},
    Year = {2009}}
  • [PDF] A. Humm, R. Ingold, and J. Hennebert, "Spoken Handwriting for User Authentication using Joint Modelling Systems," in Proceedings of the 6th International Symposium on Image and Signal Processing and Analysis (ISPA 09), Salzburg, Austria, September 16 - 18, 2009, pp. 505-510.
    [Bibtex] [Abstract]
    @conference{humm09:ispa,
    author = "Andreas Humm and Rolf Ingold and Jean Hennebert",
    abstract = "We report on results obtained with a new user authentication system based on a combined acquisition of online pen and speech signals. In our approach, the two modalities are recorded by simply asking the user to say what she or he is simultaneously writing. The main benefit of this methodology lies in the simultaneous acquisition of two sources of biometric information with a better accuracy at no extra cost in terms of time or inconvenience. Another benefit comes from an increased difficulty for forgers willing to perform imitation attacks as two signals need to be reproduced. Our first strategy was to model independently both streams of data and to perform a fusion at the score level using state-of-the-art modelling tools and training algorithms. We report here on a second strategy, complementing the first one and aiming at modelling both streams of data jointly. This approach uses a recognition system to compute the forced alignment of Hidden Markov Models (HMMs). The system then tries to determine synchronization patterns using these two alignments of handwriting and speech and computes a new score according to these patterns. In this paper, we present these authentication systems with the focus on the joint modelling. The evaluation is performed on MyIDea, a realistic multimodal biometric database. Results show that a combination of the different modelling strategies (independent and joint) can improve the system performance on spoken handwriting data.",
    address = " ",
    booktitle = "Proceedings of the 6th International Symposium on Image and Signal Processing and Analysis (ISPA 09), Salzburg, Austria, September 16 - 18",
    crossref = " ",
    editor = " ",
    isbn = "9789531841351",
    issn = "1845-5921",
    keywords = "biometrics, speech, writer, fusion",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = " 505-510",
    publisher = " ",
    series = " ",
    title = "{S}poken {H}andwriting for {U}ser {A}uthentication using {J}oint {M}odelling {S}ystems",
    Pdf = "http://www.hennebert.org/download/publications/ispa-2009-chasm_spoken-handwriting-for-user-authentication-using-joint-modelling-systems.pdf",
    volume = " ",
    year = "2009",
    }

    We report on results obtained with a new user authentication system based on a combined acquisition of online pen and speech signals. In our approach, the two modalities are recorded by simply asking the user to say what she or he is simultaneously writing. The main benefit of this methodology lies in the simultaneous acquisition of two sources of biometric information with a better accuracy at no extra cost in terms of time or inconvenience. Another benefit comes from an increased difficulty for forgers willing to perform imitation attacks as two signals need to be reproduced. Our first strategy was to model independently both streams of data and to perform a fusion at the score level using state-of-the-art modelling tools and training algorithms. We report here on a second strategy, complementing the first one and aiming at modelling both streams of data jointly. This approach uses a recognition system to compute the forced alignment of Hidden Markov Models (HMMs). The system then tries to determine synchronization patterns using these two alignments of handwriting and speech and computes a new score according to these patterns. In this paper, we present these authentication systems with the focus on the joint modelling. The evaluation is performed on MyIDea, a realistic multimodal biometric database. Results show that a combination of the different modelling strategies (independent and joint) can improve the system performance on spoken handwriting data.
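
    One way to read the "synchronization patterns" idea is as a temporal-overlap measure between the two forced alignments: each modality yields labeled (start, end) segments on a common time axis, and genuine spoken handwriting should show pen and voice progressing through the same symbols together. The sketch below computes such an overlap score; it is an interpretation of the abstract, not the paper's exact scoring function.

        def synchronization_score(pen_segments, speech_segments):
            """Fraction of overlapped time where the two forced alignments
            agree on the same symbol. Segments are (label, start, end)."""
            total = agree = 0.0
            for lab_p, s_p, e_p in pen_segments:
                for lab_s, s_s, e_s in speech_segments:
                    inter = min(e_p, e_s) - max(s_p, s_s)
                    if inter > 0:
                        total += inter
                        if lab_p == lab_s:
                            agree += inter
            return agree / total if total else 0.0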

  • [PDF] [DOI] A. Humm, J. Hennebert, and R. Ingold, "Combined Handwriting And Speech Modalities For User Authentication, TSMCA," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 39, iss. 1, pp. 25-35, 2009.
    [Bibtex] [Abstract]
    @article{humm09:TSMCA,
    author = "Andreas Humm and Jean Hennebert and Rolf Ingold",
    abstract = "In this paper we report on the development of an efficient user authentication system based on a combined acquisition of online pen and speech signals. The novelty of our approach is in the simultaneous recording of these two modalities, simply asking the user to utter what she/he is writing. The main benefit of this multimodal approach is a better accuracy at no extra costs in terms of access time or inconvenience. Another benefit comes from an increased difficulty for forgers willing to perform imitation attacks as two signals need to be reproduced. We are comparing here two potential scenarios of use. The first one is called spoken signatures where the user signs and says the content of the signature. The second scenario is based on spoken handwriting where the user is prompted to write and read the content of sentences randomly extracted from a text. Data according to these two scenarios have been recorded from a set of 70 users. In the first part of the paper, we describe the acquisition procedure and we comment on the viability and usability of such simultaneous recordings. Our conclusions are supported by a short survey performed with the users. In the second part, we present the authentication systems that we have developed for both scenarios. More specifically, our strategy was to model independently both streams of data and to perform a fusion at the score level. Starting from a state-of-the-art modelling algorithm based on Gaussian Mixture Models (GMMs) trained with an Expectation Maximization (EM) procedure, we report on several significant improvements that are brought. As a general observation, the use of both modalities outperforms significantly the modalities used alone.",
    crossref = " ",
    doi = "10.1109/TSMCA.2008.2007978",
    issn = "1083-4427",
    journal = "IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans",
    keywords = "biometrics, handwriting, speech",
    month = "January",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers. ",
    number = "1",
    pages = "25--35",
    title = "{C}ombined {H}andwriting {A}nd {S}peech {M}odalities {F}or {U}ser {A}uthentication, {TSMCA}",
    Pdf = "http://www.hennebert.org/download/publications/smc-a-ieee-2009-combined_handwriting_and_speech_modalities_for_user_authentication.pdf",
    volume = "39",
    year = "2009",
    }

    In this paper we report on the development of an efficient user authentication system based on a combined acquisition of online pen and speech signals. The novelty of our approach is in the simultaneous recording of these two modalities, simply asking the user to utter what she/he is writing. The main benefit of this multimodal approach is better accuracy at no extra cost in terms of access time or inconvenience. Another benefit comes from an increased difficulty for forgers willing to perform imitation attacks, as two signals need to be reproduced. We compare two potential scenarios of use. The first one is called spoken signatures, where the user signs and says the content of the signature. The second scenario is based on spoken handwriting, where the user is prompted to write and read the content of sentences randomly extracted from a text. Data according to these two scenarios have been recorded from a set of 70 users. In the first part of the paper, we describe the acquisition procedure and comment on the viability and usability of such simultaneous recordings. Our conclusions are supported by a short survey performed with the users. In the second part, we present the authentication systems that we have developed for both scenarios. More specifically, our strategy was to model both streams of data independently and to perform a fusion at the score level. Starting from a state-of-the-art modelling algorithm based on Gaussian Mixture Models (GMMs) trained with an Expectation Maximization (EM) procedure, we report on several significant improvements. As a general observation, the use of both modalities significantly outperforms the modalities used alone.
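
    Score-level fusion of the two modalities can be as simple as a weighted sum of normalized per-modality scores. A minimal sketch; the weight and the z-normalization statistics are placeholders that would be tuned on development data:

        def fuse_scores(score_pen, score_speech, w=0.5,
                        norm_pen=(0.0, 1.0), norm_speech=(0.0, 1.0)):
            """Weighted-sum fusion of z-normalized modality scores.
            norm_* hold (mean, std) estimated on a development set."""
            z_pen = (score_pen - norm_pen[0]) / norm_pen[1]
            z_speech = (score_speech - norm_speech[0]) / norm_speech[1]
            return w * z_pen + (1.0 - w) * z_speech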

  • [DOI] S. Kanoun, F. Slimane, H. Guesmi, R. Ingold, A. Alimi, and J. Hennebert, "Affixal Approach versus Analytical Approach for Off-Line Arabic Decomposable Vocabulary Recognition," in International Conference on Document Analysis and Recognition (ICDAR 09), July 26 - 29, Barcelona, Spain, 2009, pp. 661-665.
    [Bibtex] [Abstract]
    @conference{kano09:icdar,
    author = "Slim Kanoun and Fouad Slimane and Han{\^e}ne Guesmi and Rolf Ingold and Adel Alimi and Jean Hennebert",
    abstract = "In this paper, we propose a comparative study between the affixal approach and the analytical approach for off-line Arabic decomposable word recognition. The analytical approach is based on the modeling of alphabetical letters. The affixal approach is based on the modeling of the linguistic entity namely prefix, infix, suffix and root. The experimental results obtained by these two last approaches are presented on the basis of the printed decomposable word data set in mono-font nature by varying the character sizes. We achieve then our paper by the current improvements of our works concerning the Arabic multi-font, multi-style and multi-size word recognition.",
    address = " ",
    booktitle = "International Conference on Document Analysis and Recognition (ICDAR 09), July 26 - 29, Barcelona, Spain",
    crossref = " ",
    doi = "10.1109/ICDAR.2009.264",
    editor = " ",
    isbn = "9781424445004",
    issn = "1520-5363",
    keywords = "Character recognition , Image analysis , Informatics , Information analysis , Information systems , Machine intelligence , Neural networks , Shape , Text analysis , Vocabulary",
    month = "July",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "661--665",
    publisher = " ",
    series = " ",
    title = "{A}ffixal {A}pproach versus {A}nalytical {A}pproach for {O}ff-{L}ine {A}rabic {D}ecomposable {V}ocabulary {R}ecognition",
    url = "http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5277473",
    volume = " ",
    year = "2009",
    }

    In this paper, we propose a comparative study between the affixal approach and the analytical approach for off-line Arabic decomposable word recognition. The analytical approach is based on the modeling of alphabetical letters. The affixal approach is based on the modeling of linguistic entities, namely prefix, infix, suffix and root. The experimental results obtained by these two approaches are presented on a printed decomposable-word data set in a single font with varying character sizes. We conclude the paper with ongoing improvements of our work concerning Arabic multi-font, multi-style and multi-size word recognition.

  • P. Kuonen, Y. Huang, A. Brocco, M. Courant, and B. Hirsbrunner, "MaGate Simulator: a simulation environment for a decentralized grid scheduler," Proceedings of the 8th International Symposium on Advanced Parallel Processing Technologies, pp. 273-287, 2009.
    [Bibtex]
    @article{Kuonen:741,
    Author = {Pierre Kuonen and Ye Huang and Amos Brocco and Michele Courant and Beat Hirsbrunner},
    Journal = {Proceedings of the 8th International Symposium on Advanced Parallel Processing Technologies},
    Pages = {273-287},
    Title = {MaGate Simulator: a simulation environment for a decentralized grid scheduler},
    Year = {2009}}
  • P. Kuonen, L. Zhihua, N. Bessis, G. De la Roche, Z. Jie, and G. Clapworthy, "A Performance Evaluation of a Grid-Enabled Object-Oriented Parallel Outdoor Ray Launching for Wireless Network Coverage Prediction," IEEE Fifth International Conference on Wireless and Mobile Communications, pp. 38-43, 2009.
    [Bibtex]
    @article{Kuonen:740,
    Author = {Pierre Kuonen and Lai Zhihua and Nik Bessis and Guillaume De la Roche and Zhang Jie and Gordon Clapworthy},
    Journal = {IEEE Fifth International Conference on Wireless and Mobile Communications},
    Month = {August},
    Pages = {38-43},
    Title = {A Performance Evaluation of a Grid-Enabled Object-Oriented Parallel Outdoor Ray Launching for Wireless Network Coverage Prediction},
    Year = {2009}}
  • P. Kuonen, Y. Huang, N. Bessis, G. De la Roche, and G. Clapworthy, "Using Metadata Snapshots for Extending Ant-based Resource Discovery Functionality in Inter-cooperative Grid Communities," 1st International Conference on Evolving Internet, pp. 89-94 (Best Paper Award), 2009.
    [Bibtex]
    @article{Kuonen:739,
    Author = {Pierre Kuonen and Ye Huang and Nik Bessis and Guillaume De la Roche and Gordon Clapworthy},
    Journal = {1st International Conference on Evolving Internet},
    Month = {August},
    Pages = {89-94 (Best Paper Award)},
    Title = {Using Metadata Snapshots for Extending Ant-based Resource Discovery Functionality in Inter-cooperative Grid Communities},
    Year = {2009}}
  • P. Kuonen, Z. Lai, N. Bessis, G. De la Roche, J. Zhang, and G. Clapworthy, "A new approach to solve angular dispersion of discrete ray launching for urban scenarios," IEEE Antennas & Propagation Conference (LAPC), pp. 133-136, 2009.
    [Bibtex]
    @article{Kuonen:737,
    Author = {Pierre Kuonen and Zhihua Lai and Nik Bessis and Guillaume De la Roche and Jie Zhang and Gordon Clapworthy},
    Journal = {IEEE Antennas & Propagation Conference (LAPC)},
    Month = {nov},
    Pages = {133-136},
    Title = {A new approach to solve angular dispersion of discrete ray launching for urban scenarios},
    Year = {2009}}
  • [DOI] A. Mayoue, B. Dorizzi, L. Allano, G. Chollet, J. Hennebert, D. Petrovska, and F. Verdet, "BioSecure Multimodal Evaluation Campaign 2007 (BMEC 2007)," in Guide to Biometric Reference Systems and Performance Evaluation, D. Petrovska, G. Chollet, and B. Dorizzi, Eds., Springer, 2009, pp. 327-371.
    [Bibtex] [Abstract]
    @inbook{mayo09:bmec,
    author = "Aur{\'e}lien Mayoue and Bernadette Dorizzi and Lorene Allano and G{\'e}rard Chollet and Jean Hennebert and Dijana Petrovska and Florian Verdet",
    abstract = "Chapter about a large-scale Multimodal Evaluation Campaign held in 2007 in the framework of the European BioSecure project. The book title is "Guide to Biometric Reference Systems and Performance Evaluation"",
    address = " ",
    booktitle = "Guide to Biometric Reference Systems and Performance Evaluation",
    chapter = "11",
    crossref = " ",
    doi = "10.1007/978-1-84800-292-0_11",
    editor = "Petrovska, Dijana and Chollet, G{\'e}rard and Dorizzi, Bernadette",
    isbn = "9781848002913",
    key = " ",
    keywords = "Benchmarking, Biometrics",
    month = " ",
    note = "http://www.springerlink.com/content/jpr1h7
    Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    organization = " ",
    pages = "327-371",
    publisher = "Springer",
    series = "Guide to Biometric Reference Systems and Performance Evaluation",
    title = "{B}io{S}ecure {M}ultimodal {E}valuation {C}ampaign 2007 ({BMEC} 2007)",
    url = "http://rd.springer.com/chapter/10.1007/978-1-84800-292-0_11",
    year = "2009",
    }

    Chapter about a large-scale Multimodal Evaluation Campaign held in 2007 in the framework of the European BioSecure project. The book title is "Guide to Biometric Reference Systems and Performance Evaluation"

  • [PDF] [DOI] F. Slimane, R. Ingold, S. Kanoun, A. Alimi, and J. Hennebert, "A New Arabic Printed Text Image Database and Evaluation Protocols," in International Conference on Document Analysis and Recognition (ICDAR 09), July 26 - 29, Barcelona, Spain, 2009, pp. 946-950.
    [Bibtex] [Abstract]
    @conference{slim09:icdar,
    author = "Fouad Slimane and Rolf Ingold and Slim Kanoun and Adel Alimi and Jean Hennebert",
    abstract = "We report on the creation of a database composed of images of Arabic Printed words. The purpose of this database is the large-scale benchmarking of open-vocabulary, multi-font, multi-size and multi-style text recognition systems in Arabic. The challenges that are addressed by the database are in the variability of the sizes, fonts and style used to generate the images. A focus is also given on low-resolution images where anti-aliasing is generating noise on the characters to recognize. The database is synthetically generated using a lexicon of 113’284 words, 10 Arabic fonts, 10 font sizes and 4 font styles. The database contains 45’313’600 single word images totaling to more than 250 million characters. Ground truth annotation is provided for each image. The database is called APTI for Arabic Printed Text Images.",
    address = " ",
    booktitle = "International Conference on Document Analysis and Recognition (ICDAR 09), July 26 - 29, Barcelona, Spain",
    crossref = " ",
    doi = "10.1109/ICDAR.2009.155",
    editor = " ",
    isbn = "9781424445004",
    issn = "1520-5363",
    keywords = "arabic, machine learning, OCR",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "946--950",
    publisher = " ",
    series = " ",
    title = "{A} {N}ew {A}rabic {P}rinted {T}ext {I}mage {D}atabase and {E}valuation {P}rotocols",
    Pdf = "http://www.hennebert.org/download/publications/icdar-2009_A-New-Arabic-Printed-Text-Image-Database-and-Evaluation-Protocols.pdf",
    volume = " ",
    year = "2009",
    }

    We report on the creation of a database composed of images of Arabic printed words. The purpose of this database is the large-scale benchmarking of open-vocabulary, multi-font, multi-size and multi-style text recognition systems in Arabic. The challenges addressed by the database lie in the variability of the sizes, fonts and styles used to generate the images. A focus is also given to low-resolution images where anti-aliasing generates noise on the characters to recognize. The database is synthetically generated using a lexicon of 113’284 words, 10 Arabic fonts, 10 font sizes and 4 font styles. The database contains 45’313’600 single-word images totaling more than 250 million characters. Ground truth annotation is provided for each image. The database is called APTI for Arabic Printed Text Images.
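
    The database size quoted above follows directly from the generation grid, since every lexicon word is rendered once per font, size and style combination:

        >>> 113_284 * 10 * 10 * 4   # words x fonts x sizes x styles
        45313600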

  • [PDF] F. Slimane, S. Kanoun, J. Hennebert, A. M. Alimi, and R. Ingold, "Modèles de Markov Cachés et Modèle de Longueur pour la Reconnaissance de l'Ecriture Arabe à Basse Résolution," in Proceedings of MAnifestation des JEunes Chercheurs en Sciences et Technologies de l'Information et de la Communication (MajecSTIC 2009), Avignon (France), 2009.
    [Bibtex] [Abstract]
    @conference{slim09:majestic,
    author = "Fouad Slimane and Slim Kanoun and Jean Hennebert and Adel M. Alimi and Rolf Ingold",
    abstract = "Nous pr{\'e}sentons dans ce papier un syst{\`e}me de reconnaissance automatique de l’{\'e}criture arabe {\`a} vocabulaire ouvert, basse r{\'e}solution, bas{\'e} sur les Mod{\`e}les de Markov Cach{\'e}s. De tels mod{\`e}les sont tr{\`e}s performants lorsqu’il s’agit de r{\'e}soudre le double probl{\`e}me de segmentation et de reconnaissance pour des signaux correspondant {\`a} des s{\'e}quences d’{\'e}tats diff{\'e}rents, par exemple en reconnaissance de la parole ou de l’{\'e}criture cursive. La sp{\'e}cificit{\'e} de notre ap- proche est dans l’introduction des mod{\`e}les de longueurs pour la reconnaissance de l’Arabe imprim{\'e}. Ces derniers sont inf{\'e}r{\'e}s automatiquement pendant la phase d’entra{\^i}nement et leur impl{\'e}mentation est r{\'e}alis{\'e}e par une simple alt{\'e}ration des mod{\`e}les de chaque caract{\`e}re composant les mots. Dans notre approche, chaque mot est repr{\'e}sent{\'e} par une s{\'e}quence des sous mod{\`e}les, ces derniers {\'e}tant repr{\'e}sent{\'e}s par des {\'e}tats dont le nombre est proportionnel {\`a} la longueur de chaque caract{\`e}re. Cette am{\'e}lioration, nous a permis d’augmenter de fa{\c{c}}on significative les performances de reconnaissance et de d{\'e}velopper un syst{\`e}me de reconnaissance {\`a} vocabulaire ouvert. L’{\'e}valuation du syst{\`e}me a {\'e}t{\'e} effectu{\'e}e en utilisant la boite {\`a} outils HTK sur une base de donn{\'e}es d’images synth{\'e}tique {\`a} basse r{\'e}solution.",
    address = "Avignon (France)",
    booktitle = "Proceedings of MAnifestation des JEunes Chercheurs en Sciences et Technologies de l'Information et de la Communication (MajecSTIC 2009)",
    isbn = "9782953423310",
    keywords = "HMM, arabic recognition, image recognition, HMM, duration model",
    month = "nov",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    title = "{M}od{\`e}les de {M}arkov {C}ach{\'e}s et {M}od{\`e}le de {L}ongueur pour la {R}econnaissance de l'{E}criture {A}rabe {\`a} {B}asse {R}{\'e}solution",
    Pdf = "http://www.hennebert.org/download/publications/majestic-2009-modeles-de-markov-caches-et-modele-de-longueur-pour-la-reconnaissance-ecriture-arabe-basse-resolution.pdf",
    year = "2009",
    }

    We present in this paper an open-vocabulary, low-resolution automatic Arabic text recognition system based on Hidden Markov Models. Such models are very effective at solving the joint segmentation and recognition problem for signals corresponding to sequences of different states, for example in speech recognition or cursive handwriting recognition. The specificity of our approach is the introduction of length models for printed Arabic recognition. These are inferred automatically during the training phase, and their implementation is realized by a simple alteration of the models of each character composing the words. In our approach, each word is represented by a sequence of sub-models, the latter being represented by states whose number is proportional to the length of each character. This improvement allowed us to significantly increase recognition performance and to develop an open-vocabulary recognition system. The evaluation of the system was carried out using the HTK toolkit on a database of synthetic low-resolution images.
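
    The length-model idea amounts to giving each character HMM a number of emitting states proportional to the character's typical written width, instead of a fixed topology. A toy version of that rule, where the width-per-state ratio is a free parameter and not a value from the paper:

        def n_states_for(mean_char_width_px, px_per_state=3, min_states=1):
            """Number of emitting states for a character model, proportional
            to the character's average width measured on training images."""
            return max(min_states, round(mean_char_width_px / px_per_state))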

  • [PDF] F. Slimane, R. Ingold, S. Kanoun, A. Alimi, and J. Hennebert, "Database and Evaluation Protocols for Arabic Printed Text Recognition," University of Fribourg, Department of Informatics, 296-09-01, 2009.
    [Bibtex] [Abstract]
    @techreport{slim09:tr296,
    author = "Fouad Slimane and Rolf Ingold and Slim Kanoun and Adel Alimi and Jean Hennebert",
    abstract = "We report on the creation of a database composed of images of Arabic Printed Text. The purpose of this database is the large-scale benchmarking of open-vocabulary, multi-font, multi-size and multi-style text recognition systems in Arabic. Such systems take as input a text image and compute as output a character string corresponding to the text included in the image. The database is called APTI for Arabic Printed Text Image. The challenges that are addressed by the database are in the variability of the sizes, fonts and style used to generate the images. A focus is also given on low-resolution images where anti-aliasing is generating noise on the characters to recognize. The database is synthetically generated using a lexicon of 113’284 words, 10 Arabic fonts, 10 font sizes and 4 font styles. The database contains 45’313’600 single word images totaling to more than 250 million characters. Ground truth annotation is provided for each image thanks to a XML file. The annotation includes the number of characters, the number of PAWs (Pieces of Arabic Word), the sequence of characters, the size, the style, the font used to generate each image, etc.",
    address = " ",
    institution = "University of Fribourg, Department of Informatics",
    keywords = "database, arabic, image, text recognition",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = "296-09-01",
    title = "{D}atabase and {E}valuation {P}rotocols for {A}rabic {P}rinted {T}ext {R}ecognition",
    type = " ",
    Pdf = "http://www.hennebert.org/download/publications/unifr-tech-report-296-09-01_database_and_evaluation_protocols_for_arabic_printed_text_recognition_apti.pdf",
    year = "2009",
    }

    We report on the creation of a database composed of images of Arabic printed text. The purpose of this database is the large-scale benchmarking of open-vocabulary, multi-font, multi-size and multi-style text recognition systems in Arabic. Such systems take as input a text image and compute as output a character string corresponding to the text included in the image. The database is called APTI for Arabic Printed Text Image. The challenges addressed by the database lie in the variability of the sizes, fonts and styles used to generate the images. A focus is also given to low-resolution images where anti-aliasing generates noise on the characters to recognize. The database is synthetically generated using a lexicon of 113’284 words, 10 Arabic fonts, 10 font sizes and 4 font styles. The database contains 45’313’600 single-word images totaling more than 250 million characters. Ground truth annotation is provided for each image thanks to an XML file. The annotation includes the number of characters, the number of PAWs (Pieces of Arabic Word), the sequence of characters, and the size, style and font used to generate each image.
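
    Ground truth of this kind is straightforward to consume with Python's standard ElementTree; the sketch below reads a hypothetical APTI-style annotation file. The tag and attribute names are invented for illustration, as the report's exact XML schema is not reproduced in this abstract.

        import xml.etree.ElementTree as ET

        def read_annotation(path):
            """Parse a hypothetical APTI-style ground-truth file describing
            one word image: font metadata plus the character sequence."""
            word = ET.parse(path).getroot()
            return {
                'font': word.get('font'),
                'size': int(word.get('size')),
                'style': word.get('style'),
                'characters': [c.text for c in word.iter('character')],
            }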

  • [PDF] F. Verdet, D. Matrouf, J. Bonastre, and J. Hennebert, "Factor Analysis and SVM for Language Recognition," in 10th Annual Conference of the International Speech Communication Association, InterSpeech, 2009, pp. 164-167.
    [Bibtex] [Abstract]
    @conference{verd09:interspeech,
    author = "Florian Verdet and Driss Matrouf and Jean-Fran{\c{c}}ois Bonastre and Jean Hennebert",
    abstract = "Statistic classifiers operate on features that generally include both, useful and useless information. These two types of information are difficult to separate in feature domain. Recently, a new paradigm based on Factor Analysis (FA) proposed a model decomposition into useful and useless components. This method has successfully been applied to speaker recognition tasks. In this paper, we study the use of FA for language recognition. We propose a classification method based on SDC features and Gaussian Mixture Models (GMM). We present well performing systems using Factor Analysis and FA-based Support Vector Machine (SVM) classifiers. Experiments are conducted using NIST LRE 2005’s primary condition. The relative equal error rate reduction obtained by the best factor analysis configuration with respect to baseline GMM-UBM system is over 60 \%, corresponding to an EER of 6.59 \%.",
    address = " ",
    booktitle = "10th Annual Conference of the International Speech Communication Association, InterSpeech",
    crossref = " ",
    editor = " ",
    issn = "1990-9772",
    keywords = "Language Identification, Speech Processing",
    month = "sep",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "164--167",
    publisher = " ",
    series = " ",
    title = "{F}actor {A}nalysis and {SVM} for {L}anguage {R}ecognition",
    Pdf = "http://www.hennebert.org/download/publications/interspeech-2009-Factor_Analysis_and_SVM_for_Language_Recognition.PDF",
    volume = " ",
    year = "2009",
    }

    Statistical classifiers operate on features that generally include both useful and useless information. These two types of information are difficult to separate in the feature domain. Recently, a new paradigm based on Factor Analysis (FA) proposed a model decomposition into useful and useless components. This method has successfully been applied to speaker recognition tasks. In this paper, we study the use of FA for language recognition. We propose a classification method based on SDC features and Gaussian Mixture Models (GMM). We present well-performing systems using Factor Analysis and FA-based Support Vector Machine (SVM) classifiers. Experiments are conducted using NIST LRE 2005’s primary condition. The relative equal error rate reduction obtained by the best factor analysis configuration with respect to the baseline GMM-UBM system is over 60%, corresponding to an EER of 6.59%.
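
    The FA-based SVM variant mentioned above trains a discriminative classifier on session-compensated GMM supervectors. A schematic scikit-learn version, assuming supervector extraction and factor-analysis compensation happen upstream:

        from sklearn.svm import LinearSVC

        def train_language_svm(supervectors, languages, C=1.0):
            """One-vs-rest linear SVM on compensated supervectors.
            supervectors: (n_utterances, n_dims); languages: label array."""
            return LinearSVC(C=C).fit(supervectors, languages)

        # usage: model = train_language_svm(train_X, train_y)
        #        predictions = model.predict(test_X)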

  • [PDF] [DOI] F. Einsele, R. Ingold, and J. Hennebert, "A Language-Independent, Open-Vocabulary System Based on HMMs for Recognition of Ultra Low Resolution Words," Journal of Universal Computer Science, vol. 14, iss. 18, pp. 2982-2997, 2008.
    [Bibtex] [Abstract]
    @article{einse08:jucs,
    author = "Farshideh Einsele and Rolf Ingold and Jean Hennebert",
    abstract = "In this paper, we introduce and evaluate a system capable of recognizing words extracted from ultra low resolution images such as those frequently embedded on web pages. The design of the system has been driven by the following constraints. First, the system has to recognize small font sizes between 6-12 points where anti-aliasing and resampling filters are applied. Such procedures add noise between adjacent characters in the words and complicate any a priori segmentation of the characters. Second, the system has to be able to recognize any words in an open vocabulary setting, potentially mixing different languages in Latin alphabet. Finally, the training procedure must be automatic, i.e. without requesting to extract, segment and label manually a large set of data. These constraints led us to an architecture based on ergodic HMMs where states are associated to the characters. We also introduce several improvements of the performance increasing the order of the emission probability estimators, including minimum and maximum width constraints on the character models and a training set consisting all possible adjacency cases of Latin characters. The proposed system is evaluated on different font sizes and families, showing good robustness for sizes down to 6 points.",
    crossref = " ",
    doi = "10.3217/jucs-014-18-2982",
    issn = "0948-6968",
    journal = "Journal of Universal Computer Science",
    keywords = "Text recognition, low-resolution images",
    month = " October",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = "18",
    pages = "2982--2997",
    title = "{A} {L}anguage-{I}ndependent, {O}pen-{V}ocabulary {S}ystem {B}ased on {HMM}s for {R}ecognition of {U}ltra {L}ow {R}esolution {W}ords",
    Pdf = "http://www.hennebert.org/download/publications/jucs-2008-14-language_independent_open_vocabulary_system_based_on_HMMS_for_recognition_of_ULRW.pdf",
    volume = "14",
    year = "2008",
    }

    In this paper, we introduce and evaluate a system capable of recognizing words extracted from ultra low resolution images such as those frequently embedded on web pages. The design of the system has been driven by the following constraints. First, the system has to recognize small font sizes between 6 and 12 points, where anti-aliasing and resampling filters are applied. Such procedures add noise between adjacent characters in the words and complicate any a priori segmentation of the characters. Second, the system has to be able to recognize any word in an open-vocabulary setting, potentially mixing different languages in the Latin alphabet. Finally, the training procedure must be automatic, i.e. without requiring a large set of data to be manually extracted, segmented and labeled. These constraints led us to an architecture based on ergodic HMMs where states are associated with the characters. We also introduce several improvements of the performance: increasing the order of the emission probability estimators, including minimum and maximum width constraints on the character models, and using a training set covering all possible adjacency cases of Latin characters. The proposed system is evaluated on different font sizes and families, showing good robustness for sizes down to 6 points.
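
    Minimum-width constraints of the kind mentioned above are classically enforced through topology rather than extra parameters: a character model is expanded into a chain of d_min substates without skip transitions, so at least d_min frames must be consumed before leaving the character, and only the last substate self-loops to cover longer realizations. A sketch of such a transition matrix; the loop probability is an arbitrary placeholder:

        import numpy as np

        def min_width_transitions(d_min, p_stay=0.5):
            """Left-to-right chain of d_min substates enforcing a minimum
            duration. Rows are substates; the extra last column is the exit
            transition to the next character model."""
            A = np.zeros((d_min, d_min + 1))
            for i in range(d_min - 1):
                A[i, i + 1] = 1.0                  # forced advance: no self-loop
            A[d_min - 1, d_min - 1] = p_stay       # last substate may loop
            A[d_min - 1, d_min] = 1.0 - p_stay     # ... or exit the character
            return A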

  • [PDF] [DOI] F. Einsele, R. Ingold, and J. Hennebert, "A Language-Independent, Open-Vocabulary System Based on HMMs for Recognition of Ultra Low Resolution Words," in 23rd Annual ACM Symposium on Applied Computing (ACM SAC 2008), Fortaleza, Ceara, Brasil, 2008, p. 429–433.
    [Bibtex] [Abstract]
    @conference{eins08:sac,
    author = "Farshideh Einsele and Rolf Ingold and Jean Hennebert",
    abstract = "In this paper, we introduce and evaluate a system capable of recognizing ultra low resolution words extracted from images such as those frequently embedded on web pages. The design of the system has been driven by the following constraints. First, the system has to recognize small font sizes where anti-aliasing and resampling procedures have been applied. Such procedures add noise on the patterns and complicate any a priori segmentation of the characters. Second, the system has to be able to recognize any words in an open vocabulary setting, potentially mixing different languages. Finally, the training procedure must be automatic, i.e. without requesting to extract, segment and label manually a large set of data. These constraints led us to an architecture based on ergodic HMMs where states are associated to the characters. We also introduce several improvements of the performance increasing the order of the emission probability estimators and including minimum and maximum duration constraints on the character models. The proposed system is evaluated on different font sizes and families, showing good robustness for sizes down to 6 points.",
    address = " ",
    booktitle = "23rd Annual ACM Symposium on Applied Computing (ACM SAC 2008), Fortaleza, Ceara, Brasil",
    crossref = " ",
    doi = "10.1145/1363686.1363791",
    editor = " ",
    isbn = "9781595937537",
    keywords = "HMM, OCR",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "429--433",
    publisher = " ",
    series = " ",
    title = "{A} {L}anguage-{I}ndependent, {O}pen-{V}ocabulary {S}ystem {B}ased on {HMM}s for {R}ecognition of {U}ltra {L}ow {R}esolution {W}ords",
    Pdf = "http://www.hennebert.org/download/publications/sac-acm-2008-A_Language_Independent_Open_Vocabulary_System_Based_on_HMMs_for_Recognition_of_Ultra_Low_Resolution_Words.pdf",
    volume = " ",
    year = "2008",
    }

    In this paper, we introduce and evaluate a system capable of recognizing ultra low resolution words extracted from images such as those frequently embedded on web pages. The design of the system has been driven by the following constraints. First, the system has to recognize small font sizes where anti-aliasing and resampling procedures have been applied. Such procedures add noise on the patterns and complicate any a priori segmentation of the characters. Second, the system has to be able to recognize any word in an open vocabulary setting, potentially mixing different languages. Finally, the training procedure must be automatic, i.e., without requiring the manual extraction, segmentation and labelling of a large set of data. These constraints led us to an architecture based on ergodic HMMs where states are associated to the characters. We also introduce several performance improvements: increasing the order of the emission probability estimators and including minimum and maximum duration constraints on the character models. The proposed system is evaluated on different font sizes and families, showing good robustness for sizes down to 6 points.

  • [PDF] [DOI] B. Fauve, H. Bredin, W. Karam, F. Verdet, A. Mayoue, G. Chollet, J. Hennebert, R. Lewis, J. Mason, C. Mokbel, and D. Petrovska, "Some Results from the BioSecure Talking-Face Evaluation Campaign," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Las Vegas, Nevada, USA, 30/03/08-04/04/08, 2008, p. 4137–4140.
    [Bibtex] [Abstract]
    @conference{fauv08:icassp,
    author = "Benoit Fauve and Herv{\'e} Bredin and Walid Karam and Florian Verdet and Aur{\'e}lien Mayoue and G{\'e}rard Chollet and Jean Hennebert and Richard Lewis and John Mason and Chafik Mokbel and Dijana Petrovska",
    abstract = "The BioSecure Network of Excellence1 has collected a large multi-biometric publicly available database and organized the BioSecure Multimodal Evaluation Campaigns (BMEC) in 2007. This paper reports on the Talking Faces campaign. Open source reference systems were made available to participants and four laboratories submitted executable code to the organizer who performed tests on sequestered data. Several deliberate impostures were tested. It is demonstrated that forgeries are a real threat for such systems. A technological race is ongoing between deliberate impostors and system developers.",
    address = "http://www.ieee.org/",
    booktitle = "IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Las Vegas, Nevada, USA, 30/03/08-04/04/08",
    crossref = " ",
    doi = "10.1109/ICASSP.2008.4518565",
    editor = " ",
    isbn = "9781424414833",
    keywords = "biometrics, talking face, evaluation, benchmarking",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "4137--4140",
    publisher = "IEEE",
    series = " ",
    title = "{S}ome {R}esults from the {B}io{S}ecure {T}alking-{F}ace {E}valuation {C}ampaign",
    Pdf = "http://www.hennebert.org/download/publications/icassp-2008-some_results_from_the_biosecure_talking_face_evaluation_campaign.pdf",
    volume = " ",
    year = "2008",
    }

    The BioSecure Network of Excellence has collected a large, publicly available multi-biometric database and organized the BioSecure Multimodal Evaluation Campaigns (BMEC) in 2007. This paper reports on the Talking Faces campaign. Open source reference systems were made available to participants, and four laboratories submitted executable code to the organizer, who performed tests on sequestered data. Several deliberate impostures were tested. It is demonstrated that forgeries are a real threat for such systems. A technological race is ongoing between deliberate impostors and system developers.

  • [PDF] A. El Hannani and J. Hennebert, "A Review of the Benefits and Issues of Speaker Verification Evaluation Campaigns," in Proceedings of the ELRA Workshop on Evaluation at LREC 08, Marrakech, Morocco, 2008, p. 29–34.
    [Bibtex] [Abstract]
    @conference{elha08:elra,
    author = "Asmaa El Hannani and Jean Hennebert",
    abstract = "Evaluating speaker verification algorithms on relevant speech corpora is a key issue for measuring the progress and discovering the remaining difficulties of speaker verification systems. A common evaluation framework is also a key point when comparing systems produced by different labs. The speech group of the National Institute of Standards and Technology (NIST) has been organizing evaluations of text-independent telephony speaker verification technologies since 1997, with an increasing success and number of participants over the years. These NIST evaluations have been recognized by the speaker verification scientific community as a key factor for the improvement of the algorithms over the last decade. However, these evaluations measure exclusively the effectiveness in term of performance of the systems, assuming some conditions of use that are sometimes far away from any real-life commercial context for telephony applications. Other important aspects of speaker verification systems are also ignored by such evaluations, such as the efficiency, the usability and the robustness of the systems against impostor attacks. In this paper we present a review of the current NIST speaker verification evaluation methods, trying to put objectively into evidence their current benefits and limitations. We also propose some concrete solutions for going beyond these limitations.",
    address = " ",
    booktitle = "Proceedings of the ELRA Workshop on Evaluation at LREC 08, Marrakech, Morocco",
    crossref = " ",
    editor = " ",
    keywords = "speaker verification, benchmarks",
    month = " ",
    note = "http://www.lrec-conf.org/proceedings/lrec2008/
    Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "29--34",
    publisher = " ",
    series = " ",
    title = "{A} {R}eview of the {B}enefits and {I}ssues of {S}peaker {V}erification {E}valuation {C}ampaigns",
    Pdf = "http://www.hennebert.org/download/publications/elra-elrec-2008-A_Review_of_the_Benefits_and_Issues_of_Speaker_Verification_Evaluation_Campaigns.pdf",
    volume = " ",
    year = "2008",
    }

    Evaluating speaker verification algorithms on relevant speech corpora is a key issue for measuring the progress and discovering the remaining difficulties of speaker verification systems. A common evaluation framework is also a key point when comparing systems produced by different labs. The speech group of the National Institute of Standards and Technology (NIST) has been organizing evaluations of text-independent telephony speaker verification technologies since 1997, with increasing success and a growing number of participants over the years. These NIST evaluations have been recognized by the speaker verification scientific community as a key factor for the improvement of the algorithms over the last decade. However, these evaluations measure exclusively the effectiveness, in terms of performance, of the systems, assuming conditions of use that are sometimes far away from any real-life commercial context for telephony applications. Other important aspects of speaker verification systems are also ignored by such evaluations, such as the efficiency, the usability and the robustness of the systems against impostor attacks. In this paper we present a review of the current NIST speaker verification evaluation methods, attempting to objectively highlight their current benefits and limitations. We also propose some concrete solutions for going beyond these limitations.
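
    To make the notion of effectiveness measurement above concrete: such campaigns typically summarize performance with error rates like the equal error rate (EER), the operating point where the false rejection rate equals the false acceptance rate. A minimal, self-contained illustration in Python (synthetic scores, not the NIST tooling):

        import numpy as np

        def equal_error_rate(genuine, impostor):
            """Sweep a decision threshold over all observed scores and return
            the error rate where false rejection and false acceptance meet."""
            genuine, impostor = np.asarray(genuine), np.asarray(impostor)
            best_gap, eer = 2.0, None
            for t in np.sort(np.concatenate([genuine, impostor])):
                frr = np.mean(genuine < t)     # genuine trials rejected
                far = np.mean(impostor >= t)   # impostor trials accepted
                if abs(frr - far) < best_gap:
                    best_gap, eer = abs(frr - far), (frr + far) / 2
            return eer

        rng = np.random.default_rng(0)
        eer = equal_error_rate(rng.normal(2, 1, 1000), rng.normal(0, 1, 1000))
        print(f"EER ~ {eer:.1%}")              # about 16% for these toy distributions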

  • J. Hennebert, "Speaker Verification," in Biometrics And Human Identity, V. M. Roman Rak and Z. Riha, Eds., Grada, 2008.
    [Bibtex] [Abstract]
    @incollection{henn08:speak,
    author = "Jean Hennebert",
    abstract = "Speaking is the most natural mean of communication between humans. Driven by a great deal of potential applications in human-machine interaction, systems have been developed to automatically extract the different pieces of information conveyed in the speech signal. There are three major tasks. In speech recognition tasks, the automatic system aims at discovering the sequence of words forming the spoken message. In language recognition tasks, the system attempts to identify the language used in a given piece of speech signal. Finally, speaker recognition systems aim to discover information about the identity of the speaker. Speaker recognition finds applications in many different areas such as access control, transaction authentication, law enforcement, speech data management and personalization. As for other biometric technologies the prime motivation of speaker recognition is to achieve a more usable and reliable personal identification than by using artifacts such as keys, badges, magnetic cards or memorized passwords. Interestingly, speaker recognition is one of the few biometric approach which is not based on image processing. Speaker recognition systems are often said to be performance-based since the user has to produce a sequence of sound. This is also a major difference with other passive biometrics for which the cooperation of the authenticated person is not requested, such as for fingerprints, iris or face recognition systems. Speaker recognition technologies are often ranked as less accurate than other biometric technologies such as finger print or iris scan. However, there are two main factors that make voice a compelling biometric. First, there is a proliferation of automated telephony services for which speaker recognition can be directly applied. Telephone handsets are indeed available basically everywhere and provide the required sensors for the speech signal. Second, talking is a very natural gesture, often considered as lowly intrusive by users as no physical contact is requested. These two factors, added to the recent scientific progresses, made voice biometric converge into a mature technology. Commercial products offering voice biometric are now available from different vendors. However, many technical and non-technical issues, discussed later in this chapter, still remain open and need to be tackled.",
    address = " ",
    booktitle = "Biometrics And Human Identity",
    crossref = " ",
    editor = "Roman Rak, V{\'a}clav Maty{\'a}s and Riha, Zdenek",
    isbn = "9788024723655",
    key = " ",
    keywords = "Biometrics, machine learning, Speaker Verification",
    month = " ",
    note = "Book title: Biometrics And Human Identity
    ISBN-13: 978-80-247-2365-5",
    organization = " ",
    pages = " ",
    publisher = "Grada",
    title = "{S}peaker {V}erification",
    url = "http://spolecenskeknihy.cz/?id=978-80-247-2365-5{{{\&}}}p=4",
    year = "2008",
    }

    Speaking is the most natural means of communication between humans. Driven by a wide range of potential applications in human-machine interaction, systems have been developed to automatically extract the different pieces of information conveyed in the speech signal. There are three major tasks. In speech recognition tasks, the automatic system aims at discovering the sequence of words forming the spoken message. In language recognition tasks, the system attempts to identify the language used in a given piece of speech signal. Finally, speaker recognition systems aim to discover information about the identity of the speaker. Speaker recognition finds applications in many different areas such as access control, transaction authentication, law enforcement, speech data management and personalization. As for other biometric technologies, the prime motivation of speaker recognition is to achieve a more usable and reliable personal identification than by using artifacts such as keys, badges, magnetic cards or memorized passwords. Interestingly, speaker recognition is one of the few biometric approaches not based on image processing. Speaker recognition systems are often said to be performance-based since the user has to produce a sequence of sounds. This is also a major difference with passive biometrics such as fingerprint, iris or face recognition systems, for which the cooperation of the authenticated person is not required. Speaker recognition technologies are often ranked as less accurate than other biometric technologies such as fingerprint or iris scans. However, there are two main factors that make voice a compelling biometric. First, there is a proliferation of automated telephony services to which speaker recognition can be directly applied. Telephone handsets are available basically everywhere and provide the required sensors for the speech signal. Second, talking is a very natural gesture, often considered minimally intrusive by users as no physical contact is required. These two factors, added to recent scientific progress, have made voice biometrics converge into a mature technology. Commercial products offering voice biometrics are now available from different vendors. However, many technical and non-technical issues, discussed later in this chapter, still remain open and need to be tackled.

  • [PDF] [DOI] A. Humm, J. Hennebert, and R. Ingold, "Spoken Signature For User Authentication," SPIE Journal of Electronic Imaging, Special Section on Biometrics: ASUI January-March 2008, vol. 17, iss. 1, p. 011013-1–011013-11, 2008.
    [Bibtex] [Abstract]
    @article{humm08:spie,
    author = "Andreas Humm and Jean Hennebert and Rolf Ingold",
    abstract = "We are proposing a new user authentication system based on spoken signatures where online signature and speech signals are acquired simultaneously. The main benefit of this multimodal approach is a better accuracy at no extra costs for the user in terms of access time or inconvenience. Another benefit lies in a better robustness against intentional forgeries due to the extra difficulty for the forger to produce both signals. We have set up an experimental framework to measure these benefits on MyIDea, a realistic multimodal biometric database publicly available. More specifically, we evaluate the performance of state-of-the-art modelling systems based on GMM and HMM applied independently to the pen and voice signal where a simple rule-based score fusion procedure is used. We conclude that the best performance is achieved by the HMMs, provided that their topology is optimized on a per user basis. Furthermore, we show that more precise models can be obtained through the use of Maximum a posteriori probability (MAP) training instead of the classically used Expectation Maximization (EM). We also measure the impact of multi-session scenarios versus mono-session scenarios and the impact of skilled versus unskilled signature forgeries attacks.",
    crossref = " ",
    doi = "10.1117/1.2898526",
    issn = "1017-9909",
    journal = "SPIE Journal of Electronic Imaging, Special Section on Biometrics: ASUI January-March 2008",
    keywords = "biometrics, speech, signature",
    month = "April",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = "1",
    pages = "011013-1--011013-11",
    title = "{S}poken {S}ignature {F}or {U}ser {A}uthentication",
    Pdf = "http://www.hennebert.org/download/publications/spie-jei-2008_spoken_signature_for_user_authentication.pdf",
    volume = "17",
    year = "2008",
    }

    We propose a new user authentication system based on spoken signatures, where online signature and speech signals are acquired simultaneously. The main benefit of this multimodal approach is better accuracy at no extra cost for the user in terms of access time or inconvenience. Another benefit lies in better robustness against intentional forgeries, due to the extra difficulty for the forger of producing both signals. We have set up an experimental framework to measure these benefits on MyIDea, a realistic, publicly available multimodal biometric database. More specifically, we evaluate the performance of state-of-the-art modelling systems based on GMMs and HMMs applied independently to the pen and voice signals, where a simple rule-based score fusion procedure is used. We conclude that the best performance is achieved by the HMMs, provided that their topology is optimized on a per-user basis. Furthermore, we show that more precise models can be obtained through the use of maximum a posteriori (MAP) training instead of the classically used Expectation Maximization (EM). We also measure the impact of multi-session versus mono-session scenarios and of skilled versus unskilled signature forgery attacks.
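
    The MAP training mentioned above adapts a prior (background) model towards a user's enrollment data rather than re-estimating it from scratch; for GMM means, the classical relevance-MAP update interpolates between each mixture's data mean and its prior mean. A minimal sketch of that update, assuming a diagonal-covariance background GMM and toy data (not the paper's system):

        import numpy as np

        def map_adapt_means(X, weights, means, covs, r=16.0):
            """Relevance-MAP adaptation of GMM means (diagonal covariances).
            X: (n, d) enrollment frames; weights/means/covs: background GMM."""
            log_p = (-0.5 * (((X[:, None, :] - means) ** 2) / covs).sum(-1)
                     - 0.5 * np.log(2 * np.pi * covs).sum(-1)
                     + np.log(weights))
            log_p -= log_p.max(axis=1, keepdims=True)
            gamma = np.exp(log_p)
            gamma /= gamma.sum(axis=1, keepdims=True)   # responsibilities (n, k)
            n_k = gamma.sum(axis=0)                     # soft counts per mixture
            x_bar = (gamma.T @ X) / np.maximum(n_k, 1e-10)[:, None]
            alpha = (n_k / (n_k + r))[:, None]          # data-dependent weight
            return alpha * x_bar + (1 - alpha) * means  # adapted means

        # toy usage: 2-mixture background model in 2-D, 5 enrollment frames
        rng = np.random.default_rng(1)
        adapted = map_adapt_means(rng.normal(0.0, 1.0, (5, 2)),
                                  np.array([0.5, 0.5]),
                                  np.array([[0.0, 0.0], [3.0, 3.0]]),
                                  np.ones((2, 2)))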

  • [PDF] [DOI] F. Slimane, R. Ingold, A. M. Alimi, and J. Hennebert, "Duration Models for Arabic Text Recognition using Hidden Markov Models," in International Conference on Computational Intelligence for Modelling, Control and Automation (CIMCA 08), Vienna, Austria, 2008, p. 838–843.
    [Bibtex] [Abstract]
    @conference{slim08:cimca,
    author = "Fouad Slimane and Rolf Ingold and Adel Mohamed Alimi and Jean Hennebert",
    abstract = "We present in this paper a system for recognition of printed Arabic text based on Hidden Markov Models (HMM). While HMMs have been successfully used in the past for such a task, we report here on significant improvements of the recognition performance with the introduction of minimum and maximum duration models. The improvements allow us to build a system working in open vocabulary mode, i.e., without any limitations on the size of the vocabulary. The evaluation of our system is performed using HTK (Hidden Markov Model Toolkit) on a database of word images that are synthetically generated",
    address = " ",
    booktitle = "International Conference on Computational Intelligence for Modelling, Control and Automation (CIMCA 08), Vienna, Austria",
    crossref = " ",
    doi = "10.1109/CIMCA.2008.229",
    editor = " ",
    isbn = "9780769535142",
    keywords = "hidden Markov models , image recognition , text analysis , visual databases",
    month = "December",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "838--843",
    publisher = " ",
    series = " ",
    title = "{D}uration {M}odels for {A}rabic {T}ext {R}ecognition using {H}idden {M}arkov {M}odels",
    Pdf = "http://www.hennebert.org/download/publications/cimca-2008-Duration_Models_for_Arabic_Text_Recognition_using_Hidden_Markov_Models.pdf",
    volume = " ",
    year = "2008",
    }

    We present in this paper a system for the recognition of printed Arabic text based on Hidden Markov Models (HMM). While HMMs have been successfully used in the past for such a task, we report here on significant improvements of the recognition performance with the introduction of minimum and maximum duration models. The improvements allow us to build a system working in open vocabulary mode, i.e., without any limitation on the size of the vocabulary. The evaluation of our system is performed using HTK (Hidden Markov Model Toolkit) on a database of synthetically generated word images.

  • [PDF] [DOI] F. Verdet and J. Hennebert, "Impostures of Talking Face Systems Using Automatic Face Animation," in IEEE Conference on Biometrics: Theory, Applications and Systems (BTAS 08), Arlington, Virginia, USA, 2008, pp. 1-4.
    [Bibtex] [Abstract]
    @conference{verd08:btas,
    author = "Florian Verdet and Jean Hennebert",
    abstract = "We present in this paper a new forgery scenario for the evaluation of face verification systems. The scenario is a replay-attack where we assume that the forger has got access to a still picture of the genuine user. The forger is then using a dedicated software to realistically animate the face image, reproducing head and lip movements according to a given speech waveform. The resulting forged video sequence is finally replayed to the sensor. Such attacks are nowadays quite easy to realize for potential forgers and can be opportunities to attempt to forge text-prompted challenge-response configurations of the verification system. We report the evaluation of such forgeries on the BioSecure BMEC talking face database where a set of 430 users are forged according to this face animation procedure. As expected, results show that these forgeries generate much more false acceptation in comparison to the classically used random forgeries. These results clearly show that such kind of forgery attack potentially represents a critical security breach for talking-face verification systems.",
    address = " ",
    booktitle = "IEEE Conference on Biometrics: Theory, Applications and Systems (BTAS 08), Arlington, Virginia, USA",
    crossref = " ",
    doi = "10.1109/BTAS.2008.4699367",
    editor = " ",
    isbn = "9781424427291",
    keywords = "biometrics, talking face, benchmarking, forgeries",
    month = "September",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "1-4",
    publisher = " ",
    series = " ",
    title = "{I}mpostures of {T}alking {F}ace {S}ystems {U}sing {A}utomatic {F}ace {A}nimation",
    Pdf = "http://www.hennebert.org/download/publications/btas-2008-impostures_of_talking_face_systems_using_automatic_face_animation.pdf",
    volume = " ",
    year = "2008",
    }

    We present in this paper a new forgery scenario for the evaluation of face verification systems. The scenario is a replay attack where we assume that the forger has access to a still picture of the genuine user. The forger then uses dedicated software to realistically animate the face image, reproducing head and lip movements according to a given speech waveform. The resulting forged video sequence is finally replayed to the sensor. Such attacks are nowadays quite easy for potential forgers to realize and can be used to attack text-prompted challenge-response configurations of the verification system. We report the evaluation of such forgeries on the BioSecure BMEC talking face database, where a set of 430 users are forged according to this face animation procedure. As expected, results show that these forgeries generate many more false acceptances than the classically used random forgeries. These results clearly show that this kind of forgery attack potentially represents a critical security breach for talking-face verification systems.

  • T. A. Nguyen and P. Kuonen, "Programming the Grid with POP-C++," Future Generation Computer Systems (FGCS), vol. 23, pp. 23-30, 2007.
    [Bibtex]
    @article{Nguyen:862,
Author = {Tuan Anh Nguyen and Pierre Kuonen},
    Journal = {Future Generation Computer Systems (FGCS)},
    Month = {jan},
    Pages = {23-30},
    Title = {Programming the Grid with POP-C++},
    Volume = {23},
    Year = {2007}}
  • [PDF] [DOI] F. Einsele, J. Hennebert, and R. Ingold, "Towards Identification Of Very Low Resolution, Anti-Aliased Characters," in IEEE International Symposium on Signal Processing and its Applications (ISSPA'07), Sharjah, United Arab Emirates, 2007, pp. 1-4.
    [Bibtex] [Abstract]
    @conference{eins07:isspa,
    author = "Farshideh Einsele and Jean Hennebert and Rolf Ingold",
    abstract = "Current Web indexing technologies suffer from a severe drawback due to the fact that web documents often present textual information that is encapsulated in digital images and therefore not available as actual coded text. Moreover such images are not suited to be processed by existing OCR software, since they are generally designed for recognizing binary document images produced by scanners with resolutions between 200-600 dpi, whereas text embedded in web images is often anti-aliased and has generally a resolution between 72 and 90 dpi. The presented paper describes two preliminary studies about character identification at very low resolution (72 dpi) and small font sizes (3-12 pts). The proposed character identification system delivers identification rates up to 99.93 percents for 12'600 isolated character samples and up to 99.89 percents for 300'000 character samples in context.",
    address = " ",
    booktitle = "IEEE International Symposium on Signal Processing and its Applications (ISSPA'07), Sharjah, United Arab Emirates",
    crossref = " ",
    doi = "10.1109/ISSPA.2007.4555324",
    editor = " ",
    isbn = "9781424407781",
    keywords = "OCR;Web indexing technology;antialaised character identification;binary document image recognition;low resolution character;textual information encapsulation;Internet;antialiasing;data encapsulation;document image processing;image resolution;indexing;",
    month = "February",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = " 1-4",
    publisher = " ",
    series = " ",
    title = "{T}owards {I}dentification {O}f {V}ery {L}ow {R}esolution, {A}nti-{A}liased {C}haracters",
    Pdf = "http://www.hennebert.org/download/publications/isspa-2007-identification-of-very-low-resolution-anti-aliased-characters.pdf",
    volume = " ",
    year = "2007",
    }

    Current Web indexing technologies suffer from a severe drawback due to the fact that web documents often present textual information that is encapsulated in digital images and therefore not available as actual coded text. Moreover, such images are not suited to be processed by existing OCR software, which is generally designed for recognizing binary document images produced by scanners at resolutions between 200 and 600 dpi, whereas text embedded in web images is often anti-aliased and generally has a resolution between 72 and 90 dpi. The present paper describes two preliminary studies on character identification at very low resolution (72 dpi) and small font sizes (3-12 pts). The proposed character identification system delivers identification rates of up to 99.93 percent for 12'600 isolated character samples and up to 99.89 percent for 300'000 character samples in context.

  • [PDF] [DOI] F. Einsele, R. Ingold, and J. Hennebert, "A HMM-Based Approach to Recognize Ultra Low Resolution Anti-Aliased Words," in Pattern Recognition and Machine Intelligence, A. Ghosh, R. De, and S. Pal, Eds., Springer Verlag, 2007, vol. 4815, pp. 511-518.
    [Bibtex] [Abstract]
    @inbook{eins07:premi,
    author = "Farshideh Einsele and Rolf Ingold and Jean Hennebert",
    abstract = "In this paper, we present a HMM based system that is used to recognize ultra low resolution text such as those frequently embedded in images available on the web. We propose a system that takes specifically the challenges of recognizing text in ultra low resolution images into account. In addition to this, we show in this paper that word models can be advantageously built connecting together sub-HMM-character models and inter-character state. Finally we report on the promising performance of the system using HMM topologies which have been improved to take into account the presupposed minimum length of each character.",
    booktitle = "Pattern Recognition and Machine Intelligence",
    doi = "10.1007/978-3-540-77046-6_63",
    editor = "Ghosh, Ashish and De, Rajat and Pal, Sankar",
    isbn = "9783540770459",
    keywords = "HMM; OCR; Ultra-low resolution",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "511-518",
    publisher = "Springer Verlag",
    series = "Lecture Notes in Computer Science, Pattern Recognition and Machine Intelligence",
    title = "{A} {HMM}-{B}ased {A}pproach to {R}ecognize {U}ltra {L}ow {R}esolution {A}nti-{A}liased {W}ords",
    Pdf = "http://www.hennebert.org/download/publications/premi-2007-hmm-based-approach-to-recognize-ultra-low-resolution-anti-aliased-words.pdf",
    volume = "4815",
    year = "2007",
    }

    In this paper, we present an HMM-based system used to recognize ultra low resolution text such as that frequently embedded in images available on the web. We propose a system that specifically takes into account the challenges of recognizing text in ultra low resolution images. In addition, we show in this paper that word models can be advantageously built by connecting sub-HMM character models with inter-character states. Finally, we report on the promising performance of the system using HMM topologies that have been improved to take into account the presupposed minimum length of each character.
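
    To make the word-model construction above concrete (a schematic reading, not the authors' exact topology): a word HMM can be assembled by chaining per-character sub-models and inserting a shared inter-character state between consecutive characters, which absorbs the anti-aliasing noise that blurs character boundaries. The sketch below assumes 3-state left-to-right character models and toy transition probabilities:

        import numpy as np

        def build_word_hmm(word, n_states=3, p_stay=0.6):
            """Chain left-to-right character sub-models, with one inter-character
            state inserted between consecutive characters of the word."""
            sizes = []
            for i, _ in enumerate(word):
                sizes.append(n_states)                 # character sub-model
                if i < len(word) - 1:
                    sizes.append(1)                    # inter-character state
            total = sum(sizes)
            A = np.zeros((total, total))
            pos = 0
            for size in sizes:
                for k in range(size):
                    s = pos + k
                    if s + 1 < total:
                        A[s, s] = p_stay               # self-loop
                        A[s, s + 1] = 1.0 - p_stay     # advance
                    else:
                        A[s, s] = 1.0                  # final absorbing state
                pos += size
            return A

        A = build_word_hmm("web")
        # 3 characters * 3 states + 2 inter-character states = 11 states
        assert A.shape == (11, 11) and np.allclose(A.sum(axis=1), 1.0)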

  • [PDF] J. Hennebert, "Please repeat: my voice is my password. From the basics to real-life implementations of speaker verification technologies," in Invited lecture at the Information Security Summit (IS2 2007), Prague, 2007.
    [Bibtex] [Abstract]
    @conference{henn07:iss,
    author = "Jean Hennebert",
    abstract = "Speaker verification finds applications in many different areas such as access control, transaction authentication, law enforcement, speech data management and personalization. As for other biometric technologies the prime motivation of speaker recognition is to achieve a more usable and reliable personal identification than by using artifacts such as keys, badges, magnetic cards or memorized passwords. Speaker verification technologies are often ranked as less accurate than other biometric technologies such as iris scan or fingerprints. However, there are two main factors that make voice a compelling biometric. First, there is a proliferation of automated telephony services for which speaker recognition can be directly applied. Second, talking is a very natural gesture, often considered as lowly intrusive by users as no physical contact is requested. These two factors, added to the recent scientific progresses, made voice biometric converge into a mature technology.",
    address = " ",
    booktitle = "Invited lecture at the Information Security Summit (IS2 2007), Prague",
    crossref = " ",
    editor = " ",
    keywords = "Biometrics; Speaker Verification",
    month = "May",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = " ",
    publisher = " ",
    series = " ",
    title = "{P}lease repeat: my voice is my password. {F}rom the basics to real-life implementations of speaker verification technologies",
    Pdf = "http://www.hennebert.org/download/publications/iss-2007-please-repeat-my-voice-is-my-password-speaker-verification-technologies.pdf",
    volume = " ",
    year = "2007",
    }

    Speaker verification finds applications in many different areas such as access control, transaction authentication, law enforcement, speech data management and personalization. As for other biometric technologies, the prime motivation of speaker recognition is to achieve a more usable and reliable personal identification than by using artifacts such as keys, badges, magnetic cards or memorized passwords. Speaker verification technologies are often ranked as less accurate than other biometric technologies such as iris scans or fingerprints. However, there are two main factors that make voice a compelling biometric. First, there is a proliferation of automated telephony services to which speaker recognition can be directly applied. Second, talking is a very natural gesture, often considered minimally intrusive by users as no physical contact is required. These two factors, added to recent scientific progress, have made voice biometrics converge into a mature technology.

  • [PDF] [DOI] J. Hennebert, A. Humm, and R. Ingold, "Modelling Spoken Signatures With Gaussian Mixture Model Adaptation," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 07), 2007, pp. 229-232.
    [Bibtex] [Abstract]
    @conference{henn07:icassp,
    author = "Jean Hennebert and Andreas Humm and Rolf Ingold",
    abstract = "We report on our developments towards building a novel user authentication system using combined acquisition of online handwritten signature and speech modalities. In our approach, signatures are recorded by asking the user to say what she/he is writing, leading to the so-called spoken signatures. We have built a verification system composed of two Gaussian Mixture Models (GMMs) sub-systems that model independently the pen and voice signal. We report on results obtained with two algorithms used for training the GMMs, respectively Expectation Maximization and Maximum A Posteriori Adaptation. Different algorithms are also compared for fusing the scores of each modality. The evaluations are conducted on spoken signatures taken from the MyIDea multimodal database, accordingly to the protocols provided with the database. Results are in favor of using MAP adaptation with a simple weighted sum fusion. Results show also clearly the impact of time variability and of skilled versus unskilled forgeries attacks.",
    address = " ",
    booktitle = "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 07)",
    crossref = " ",
    doi = "10.1109/ICASSP.2007.366214",
    editor = " ",
    isbn = "1424407273",
    issn = "1520-6149",
    keywords = "Biometrics; Signature; Speech; Handwriting; Multimodal; GMM",
    month = " April",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = " 229-232",
    publisher = " ",
    series = " ",
    title = "{M}odelling {S}poken {S}ignatures {W}ith {G}aussian {M}ixture {M}odel {A}daptation",
    Pdf = "http://www.hennebert.org/download/publications/icassp-2007-modelling-spoken-signatures-with-gaussian-mixture-model-adaptation.pdf",
    volume = "2 ",
    year = "2007",
    }

    We report on our developments towards building a novel user authentication system using combined acquisition of online handwritten signature and speech modalities. In our approach, signatures are recorded by asking the user to say what she/he is writing, leading to the so-called spoken signatures. We have built a verification system composed of two Gaussian Mixture Model (GMM) sub-systems that independently model the pen and voice signals. We report on results obtained with two algorithms used for training the GMMs, respectively Expectation Maximization and Maximum A Posteriori (MAP) adaptation. Different algorithms are also compared for fusing the scores of each modality. The evaluations are conducted on spoken signatures taken from the MyIDea multimodal database, according to the protocols provided with the database. Results are in favor of using MAP adaptation with a simple weighted sum fusion. Results also clearly show the impact of time variability and of skilled versus unskilled forgery attacks.
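
    The simple weighted sum fusion above is the baseline combiner in several entries on this page: each modality's scores are first normalized to a comparable range, then mixed linearly before thresholding. A generic sketch (the equal weight and the normalization statistics are illustrative choices, not values from the paper):

        import numpy as np

        def fuse_scores(pen_scores, voice_scores, w=0.5):
            """Weighted-sum fusion of two modality scores after z-normalization.
            Normalization statistics would normally come from a development set;
            here they are taken from the scores themselves for brevity."""
            z = lambda s: (s - s.mean()) / s.std()
            return w * z(np.asarray(pen_scores)) + (1 - w) * z(np.asarray(voice_scores))

        fused = fuse_scores(np.array([1.2, -0.3, 0.8]), np.array([0.4, 0.1, 0.9]))
        accept = fused > 0.0   # accept claims whose fused score clears the threshold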

  • [PDF] [DOI] J. Hennebert, R. Loeffel, A. Humm, and R. Ingold, "A New Forgery Scenario Based On Regaining Dynamics Of Signature," in Advances in Biometrics, S.-W. Lee and S. Z. Li, Eds., Lecture Notes in Computer Science, vol. 4642, Springer, 2007, pp. 366-375.
    [Bibtex] [Abstract]
    @inbook{henn07:icb,
    author = "Jean Hennebert and Renato Loeffel and Andreas Humm and Rolf Ingold",
    abstract = "We present in this paper a new forgery scenario for dynamic signature verification systems. In this scenario, we assume that the forger has got access to a static version of the genuine signature, is using a dedicated software to automatically recover dynamics of the signature and is using these regained signatures to break the verification system. We also show that automated procedures can be built to regain signature dynamics, making some simple assumptions on how signatures are performed. We finally report on the evaluation of these procedures on the MCYT-100 signature database on which regained versions of the signatures are generated. This set of regained signatures is used to evaluate the rejection performance of a baseline dynamic signature verification system. Results show that the regained forgeries generate much more false acceptation in comparison to the random and low-force forgeries available in the MCYT-100 database. These results clearly show that such kind of forgery attacks can potentially represent a critical security breach for signature verification systems.",
    address = " ",
    booktitle = "Advances in Biometrics",
    chapter = "ICB 2007",
    doi = "10.1007/978-3-540-74549-5",
    editor = "Seong-Whan Lee; Stan Li; Springer Verlag",
    isbn = "9783540745488",
    keywords = "Biometrics; Signature; Forgeries",
    month = "August",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "366-375",
    publisher = "Lecture Notes in Computer Science, Advances in Biometrics",
    title = "{A} {N}ew {F}orgery {S}cenario {B}ased {O}n {R}egaining {D}ynamics {O}f {S}ignature",
    Pdf = "http://www.hennebert.org/download/publications/icb-2007-new-forgery-scenario-based-on-regaining-dynamics-signature.pdf",
    volume = "4642",
    year = "2007",
    }

    We present in this paper a new forgery scenario for dynamic signature verification systems. In this scenario, we assume that the forger has access to a static version of the genuine signature, uses dedicated software to automatically recover the dynamics of the signature, and uses these regained signatures to break the verification system. We also show that automated procedures can be built to regain signature dynamics, making some simple assumptions on how signatures are performed. We finally report on the evaluation of these procedures on the MCYT-100 signature database, on which regained versions of the signatures are generated. This set of regained signatures is used to evaluate the rejection performance of a baseline dynamic signature verification system. Results show that the regained forgeries generate many more false acceptances than the random and low-force forgeries available in the MCYT-100 database. These results clearly show that this kind of forgery attack can potentially represent a critical security breach for signature verification systems.

  • [PDF] [DOI] A. Humm, J. Hennebert, and R. Ingold, "Spoken Handwriting Verification using Statistical Models," in Proceedings of the Ninth International Conference on Document Analysis and Recognition - Volume 02, ICDAR'07, Washington, DC, USA, 2007, pp. 999-1003.
    [Bibtex] [Abstract]
    @conference{humm07:icdar,
    author = "Andreas Humm and Jean Hennebert and Rolf Ingold",
    abstract = "We are proposing a novel and efficient user authentication system using combined acquisition of online handwriting and speech signals. In our approach, signals are recorded by asking the user to say what she or he is simultaneously writing. This methodology has the clear advantage of acquiring two sources of biometric information at no extra cost in terms of time or inconvenience. We have built a straightforward verification system to model these signals using statistical models. It is composed of two Gaussian Mixture Models (GMMs) sub-systems that takes as input features extracted from the pen and voice signals. The system is evaluated on MyIdea, a realistic multimodal biometric database. Results show that the use of both speech and handwriting modalities outperforms significantly these modalities used alone. We also report on the evaluations of different training algorithms and fusion strategies.",
    address = " Washington, DC, USA",
    booktitle = "Proceedings of the Ninth International Conference on Document Analysis and Recognition - Volume 02, ICDAR'07",
    crossref = " ",
    doi = "10.1109/ICDAR.2007.4377065",
    editor = " ",
    isbn = "9780769528229",
    issn = "1520-5363",
    keywords = "Biometrics; Signature; Speech; Handwriting; Multimodal",
    month = " September",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "999-1003",
    publisher = "IEEE Computer Society",
    title = "{S}poken {H}andwriting {V}erification using {S}tatistical {M}odels",
    Pdf = "http://www.hennebert.org/download/publications/icdar-2007-spoken-handwriting-verification-using-statistical-models.pdf",
    volume = "2",
    year = "2007",
    }

    We propose a novel and efficient user authentication system using combined acquisition of online handwriting and speech signals. In our approach, signals are recorded by asking the user to say what she or he is simultaneously writing. This methodology has the clear advantage of acquiring two sources of biometric information at no extra cost in terms of time or inconvenience. We have built a straightforward verification system to model these signals using statistical models. It is composed of two Gaussian Mixture Model (GMM) sub-systems that take as input features extracted from the pen and voice signals. The system is evaluated on MyIdea, a realistic multimodal biometric database. Results show that the use of both speech and handwriting modalities significantly outperforms these modalities used alone. We also report on the evaluation of different training algorithms and fusion strategies.

  • [PDF] [DOI] A. Humm, J. Hennebert, and R. Ingold, "Hidden Markov Models for Spoken Signature Verification," in Biometrics: Theory, Applications, and Systems, 2007. BTAS 2007. First IEEE International Conference on, 2007, pp. 1-6.
    [Bibtex] [Abstract]
    @conference{humm07:btas,
    author = "Andreas Humm and Jean Hennebert and Rolf Ingold",
    abstract = "In this paper we report on the developments of an efficient user authentication system using combined acquisition of online signature and speech modalities. In our project, these two modalities are simultaneously recorded by asking the user to utter what she/he is writing. The main benefit of this multimodal approach is a better accuracy at no extra costs in terms of access time or inconvenience. More specifically, we report in this paper on significant improvements of our initial system that was based on Gaussian Mixture Models (GMMs) applied independently to the pen and voice signal. We show that the GMMs can be advantageously replaced by Hidden Markov Models (HMMs) provided that the number of state used for the topology is optimized and provided that the model parameters are trained with a Maximum a Posteriori (MAP) adaptation procedure instead of the classically used Expectation Maximization (EM). The evaluations are conducted on spoken signatures taken from the MyIDea multimodal database. Consistently with our previous evaluation of the GMM system, we observe for the HMM system that the use of both speech and handwriting modalities outperforms significantly these modalities used alone. We also report on the evaluations of different score fusion strategies.",
    booktitle = "Biometrics: Theory, Applications, and Systems, 2007. BTAS 2007. First IEEE International Conference on",
    doi = "10.1109/BTAS.2007.4401960",
    isbn = "9781424415977",
    keywords = "MAP adaptation procedure;hidden Markov models;maximum a posteriori adaptation procedure;multimodal approach;online signature;speech modalities;spoken signature verification;user authentication system;biometrics",
    month = "September",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    pages = "1 -6",
    title = "{H}idden {M}arkov {M}odels for {S}poken {S}ignature {V}erification",
    Pdf = "http://www.hennebert.org/download/publications/btas-2007-hidden-markov-models-spoken-signature-verification.pdf",
    year = "2007",
    }

    In this paper we report on the development of an efficient user authentication system using combined acquisition of online signature and speech modalities. In our project, these two modalities are recorded simultaneously by asking the user to utter what she/he is writing. The main benefit of this multimodal approach is better accuracy at no extra cost in terms of access time or inconvenience. More specifically, we report in this paper on significant improvements over our initial system, which was based on Gaussian Mixture Models (GMMs) applied independently to the pen and voice signals. We show that the GMMs can be advantageously replaced by Hidden Markov Models (HMMs), provided that the number of states used for the topology is optimized and that the model parameters are trained with a Maximum a Posteriori (MAP) adaptation procedure instead of the classically used Expectation Maximization (EM). The evaluations are conducted on spoken signatures taken from the MyIDea multimodal database. Consistently with our previous evaluation of the GMM system, we observe for the HMM system that the use of both speech and handwriting modalities significantly outperforms these modalities used alone. We also report on the evaluation of different score fusion strategies.

  • [PDF] [DOI] A. Humm, J. Hennebert, and R. Ingold, "Modelling Combined Handwriting And Speech Modalities," in International Conference on Biometrics (ICB 2007), Seoul Korea, S. Verlag, Ed., Lecture Notes in Computer Science, Advances in Biometrics, 2007, vol. 4642, pp. 1025-1034.
    [Bibtex] [Abstract]
    @inbook{humm07:icb,
    author = "Andreas Humm and Jean Hennebert and Rolf Ingold",
    abstract = "We are reporting on consolidated results obtained with a new user authentication system based on combined acquisition of online handwriting and speech signals. In our approach, signals are recorded by asking the user to say what she or he is simultaneously writing. This methodology has the clear advantage of acquiring two sources of biometric information at no extra cost in terms of time or inconvenience. We are proposing here two scenarios of use: spoken signature where the user signs and speaks at the same time and spoken handwriting where the user writes and says what is written. These two scenarios are implemented and fully evaluated using a verification system based on Gaussian Mixture Models (GMMs). The evaluation is performed on MyIdea, a realistic multimodal biometric database. Results show that the use of both speech and handwriting modalities outperforms significantly these modalities used alone, for both scenarios. Comparisons between the spoken signature and spoken handwriting scenarios are also drawn.",
    address = " ",
    booktitle = "International Conference on Biometrics (ICB 2007), Seoul Korea",
    chapter = "ICB 2007",
    crossref = " ",
    doi = "10.1007/978-3-540-74549-5",
    editor = " Springer Verlag",
    isbn = "9783540745488",
    keywords = "Biometrics; Signature; Speech; Handwriting; Multimodal",
    month = "August",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "1025-1034",
    publisher = "Lecture Notes in Computer Science, Advances in Biometrics",
    title = "{M}odelling {C}ombined {H}andwriting {A}nd {S}peech {M}odalities",
    Pdf = "http://www.hennebert.org/download/publications/icb-2007-modelling-combine-handwriting-speech-modalities.pdf",
    volume = "4642",
    year = "2007",
    }

    We report on consolidated results obtained with a new user authentication system based on combined acquisition of online handwriting and speech signals. In our approach, signals are recorded by asking the user to say what she or he is simultaneously writing. This methodology has the clear advantage of acquiring two sources of biometric information at no extra cost in terms of time or inconvenience. We propose two scenarios of use: spoken signature, where the user signs and speaks at the same time, and spoken handwriting, where the user writes and says what is written. These two scenarios are implemented and fully evaluated using a verification system based on Gaussian Mixture Models (GMMs). The evaluation is performed on MyIdea, a realistic multimodal biometric database. Results show that the use of both speech and handwriting modalities significantly outperforms these modalities used alone, for both scenarios. Comparisons between the spoken signature and spoken handwriting scenarios are also drawn.

  • C. Dongmo Jiogo, K. Cristiano, P. Kuonen, P. Manneback, and T. A. Nguyen, "Parallel sparse matrix-vector multiplication in heterogeneous platform using parallel object model POP-C++," Parallel/High-performance Object-Oriented Scientific Computing Workshop (POOSC'2006), 2006.
    [Bibtex]
    @article{Dongmo:612,
Author = {Clovis Dongmo Jiogo and Kevin Cristiano and Pierre Kuonen and Pierre Manneback and Tuan Anh Nguyen},
Journal = {Parallel/High-performance Object-Oriented Scientific Computing Workshop (POOSC'2006)},
Month = {jul},
Title = {Parallel sparse matrix-vector multiplication in heterogeneous platform using parallel object model POP-C++},
    Year = {2006}}
  • C. Dongmo Jiogo, P. Kuonen, and P. Manneback, "Well balanced sparse matrix-vector multiplication on a parallel heterogeneous system," 2006 IEEE International Conference on Cluster Computing, 2006.
    [Bibtex]
    @article{Dongmo:611,
Author = {Clovis Dongmo Jiogo and Pierre Kuonen and Pierre Manneback},
Journal = {2006 IEEE International Conference on Cluster Computing},
Month = {sep},
Title = {Well balanced sparse matrix-vector multiplication on a parallel heterogeneous system},
    Year = {2006}}
  • J. C. Dongmo, K. Cristiano, P. Kuonen, P. Manneback, and T. A. Nguyen, "Parallel Object Programming in POP-C++: A Case Study for Sparse Matrix-vector Multiplication," 20th European Conference on Object-Oriented Programming (ECOOP06), 2006.
    [Bibtex]
    @article{Dongmo:613,
    Author = {Jiogo Clovis Dongmo and Kevin Cristiano and Pierre Kuonen and Pierre Manneback and Tuan Anh Nguyen},
    Journal = {20th European Conference on Object-Oriented Programming (ECOOP06)},
    Month = {jul},
    Title = {Parallel Object Programming in POP-C++: A Case Study for Sparse Matrix-vector Multiplication},
    Year = {2006}}
  • [PDF] [DOI] A. El Hannani, D. Toledano, D. Petrovska, A. Montero-Asenjo, and J. Hennebert, "Using Data-driven and Phonetic Units for Speaker Verification," in IEEE Speaker and Language Recognition Workshop (Odyssey 2006), Puerto Rico, 2006, pp. 1-6.
    [Bibtex] [Abstract]
    @conference{elha06:odis,
    author = "Asmaa El Hannani and Doroteo Toledano and Dijana Petrovska and Alberto Montero-Asenjo and Jean Hennebert",
    abstract = "Recognition of speaker identity based on modeling the streams produced by phonetic decoders (phonetic speaker recognition) has gained popularity during the past few years. Two of the major problems that arise when phone based systems are being developed are the possible mismatches between the development and evaluation data and the lack of transcribed databases. Data-driven segmentation techniques provide a potential solution to these problems because they do not use transcribed data and can easily be applied on development data minimizing the mismatches. In this paper we compare speaker recognition results using phonetic and data-driven decoders. To this end, we have compared the results obtained with a speaker recognition system based on data-driven acoustic units and phonetic speaker recognition systems trained on Spanish and English data. Results obtained on the NIST 2005 Speaker Recognition Evaluation data show that the data-driven approach outperforms the phonetic one and that further improvements can be achieved by combining both approaches.",
    address = " ",
    booktitle = "IEEE Speaker and Language Recognition Workshop (Odyssey 2006), Puerto Rico",
    crossref = " ",
    doi = "10.1109/ODYSSEY.2006.248134",
    editor = " ",
    isbn = "142440472X",
    keywords = "Biometrics; Speaker Verification",
    month = " June",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = " 1-6",
    publisher = " ",
    series = " ",
    title = "{U}sing {D}ata-driven and {P}honetic {U}nits for {S}peaker {V}erification",
    Pdf = "http://www.hennebert.org/download/publications/odyssey-2006-using-data-driven-phonetic-units-speaker-verification.pdf",
    volume = " ",
    year = "2006",
    }

    Recognition of speaker identity based on modeling the streams produced by phonetic decoders (phonetic speaker recognition) has gained popularity during the past few years. Two of the major problems that arise when phone-based systems are being developed are the possible mismatches between the development and evaluation data and the lack of transcribed databases. Data-driven segmentation techniques provide a potential solution to these problems because they do not use transcribed data and can easily be applied on development data, minimizing the mismatches. In this paper we compare speaker recognition results using phonetic and data-driven decoders. To this end, we have compared the results obtained with a speaker recognition system based on data-driven acoustic units and phonetic speaker recognition systems trained on Spanish and English data. Results obtained on the NIST 2005 Speaker Recognition Evaluation data show that the data-driven approach outperforms the phonetic one and that further improvements can be achieved by combining both approaches.

  • [PDF] J. Hennebert, A. Humm, and R. Ingold, "Vérification d'Identité par Ecriture et Parole Combinées," in Colloque International Francophone sur l'Ecrit et le Document, Fribourg, Suisse (CIFED 2006), 2006.
    [Bibtex] [Abstract]
    @conference{henn06:cifed,
    author = "Jean Hennebert and Andreas Humm and Rolf Ingold",
    abstract = "Nous rapportons les premiers d{\'e}veloppements d'un syst{\`e}me de v{\'e}rification d'identit{\'e} par utilisation combin{\'e}e de l'{\'e}criture et de la parole. La nouveaut{\'e} de notre approche r{\'e}side dans l'enregistrement simultan{\'e} de ces deux modalit{\'e}s en demandant {\`a} l'utilisateur d'{\'e}noncer ce qu'il est en train d'{\'e}crire. Nous pr{\'e}sentons et analysons deux sc{\'e}narii: la signature lue o{\`u} l'utilisateur {\'e}nonce le contenu de sa signature et l'{\'e}criture lue. Nous d{\'e}crivons le syst{\`e}me d'acquisition, l'enregistrement d'une base de donn{\'e}es d'{\'e}valuation, les r{\'e}sultats d'une enqu{\^e}te d'acceptabilit{\'e}, le syst{\`e}me de v{\'e}rification {\`a} base de multi-gaussiennes et les r{\'e}sultats de ce dernier obtenus pour le sc{\'e}nario signature.",
    booktitle = "Colloque International Francophone sur l'Ecrit et le Document, Fribourg, Suisse (CIFED 2006)",
    keywords = "Biometrics; Signature; Speech; Handwriting",
    month = "September",
    title = "{V}{\'e}rification d'{I}dentit{\'e} par {E}criture et {P}arole {C}ombin{\'e}es",
    Pdf = "http://www.hennebert.org/download/publications/cifed-2006-verification-identite-ecriture-parole-combinee.pdf",
    year = "2006",
    }

    We report on the first developments of an identity verification system based on the combined use of handwriting and speech. The novelty of our approach lies in recording these two modalities simultaneously by asking the user to say what he is writing. We present and analyze two scenarios: spoken signature, where the user utters the content of his signature, and spoken handwriting. We describe the acquisition system, the recording of an evaluation database, the results of an acceptability survey, the Gaussian-mixture-based verification system, and the results obtained by the latter for the signature scenario.

  • [PDF] J. Hennebert, A. Wahl, and A. Humm, Video of Sign4J, a Novel Tool to Generate Brute-Force Signature Forgeries, 2006.
    [Bibtex] [Abstract]
    @misc{henn06:sign,
    author = "Jean Hennebert and Alain Wahl and Andreas Humm",
    abstract = "In this video, we present a procedure to create brute-force signature forgeries using Sign4J, a dynamic signature imitation training software that was specifically built to help people learn to imitate the dynamics of signatures. The main novelty of the procedure lies in a feedback mechanism that is provided to let the user know how good the imitation is and on what part of the signature the user has still to improve. A scientific publication has been done to describe the procedure implemented in the Sign4J software: A. Wahl, J. Hennebert, A. Humm and R. Ingold. "Generation and Evaluation of Brute-Force Signature Forgeries". International Workshop on Multimedia Content Representation, Classification and Security (MRCS'06), Istanbul, Turkey. 2006. pp. 2-9. In this publication, we report about a large scale test done on the MCYT-100 database. The procedure and the software are used to generate a set of brute-force signatures on the MCYT-100 database. This set of forged signatures is used to evaluate the rejection performance of a baseline dynamic signature verification system. As expected, the brute-force forgeries generate more false acceptation in comparison to the random and low-force forgeries available in the MCYT-100 database.",
    howpublished = "http://www.hennebert.org/download/movies/2006-signature-imitation-training-program.avi",
    keywords = "biometrics; Signature; Forgeries",
    month = "September",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    title = "{V}ideo of {S}ign4{J}, a {N}ovel {T}ool to {G}enerate {B}rute-{F}orce {S}ignature {F}orgeries",
    Pdf = "http://www.hennebert.org/download/movies/2006-signature-imitation-training-program.avi",
    year = "2006",
    }

    In this video, we present a procedure to create brute-force signature forgeries using Sign4J, a dynamic signature imitation training software that was specifically built to help people learn to imitate the dynamics of signatures. The main novelty of the procedure lies in a feedback mechanism that is provided to let the user know how good the imitation is and on what part of the signature the user has still to improve. A scientific publication has been done to describe the procedure implemented in the Sign4J software: A. Wahl, J. Hennebert, A. Humm and R. Ingold. "Generation and Evaluation of Brute-Force Signature Forgeries". International Workshop on Multimedia Content Representation, Classification and Security (MRCS'06), Istanbul, Turkey. 2006. pp. 2-9. In this publication, we report about a large scale test done on the MCYT-100 database. The procedure and the software are used to generate a set of brute-force signatures on the MCYT-100 database. This set of forged signatures is used to evaluate the rejection performance of a baseline dynamic signature verification system. As expected, the brute-force forgeries generate more false acceptation in comparison to the random and low-force forgeries available in the MCYT-100 database.

  • [PDF] A. Humm, J. Hennebert, and R. Ingold, "Scenario and Survey of Combined Handwriting and Speech Modalities for User Authentication," in 6th International Conference on Recent Advances in Soft Computing (RASC 2006), Canterburry, Kent, UK, 2006, pp. 496-501.
    [Bibtex] [Abstract]
    @conference{humm06:rasc,
    author = "Andreas Humm and Jean Hennebert and Rolf Ingold",
    abstract = "We report on our developments towards building a novel user authentication system using combined handwriting and speech modalities. In our project, these modalities are simul- taneously recorded by asking the user to utter what he is writing. We introduce two potential scenarios that we have identified as candidates for applications and we describe the database recorded according to these scenarios. We then report on a usability survey that we have con- ducted while recording the database. Finally, we present preliminary performance results obtained on the database using one of the scenario.",
    address = " ",
    booktitle = "6th International Conference on Recent Advances in Soft Computing (RASC 2006), Canterburry, Kent, UK",
    crossref = " ",
    editor = " ",
    keywords = "Biometrics; Signature; Speech; Handwriting",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "496-501",
    publisher = " ",
    series = " ",
    title = "{S}cenario and {S}urvey of {C}ombined {H}andwriting and {S}peech {M}odalities for {U}ser {A}uthentication",
    Pdf = "http://www.hennebert.org/download/publications/rasc-2006-scenario-survey-combined-hendwriting-speech-modalities-user-authentication.pdf",
    volume = " ",
    year = "2006",
    }

    We report on our developments towards building a novel user authentication system using combined handwriting and speech modalities. In our project, these modalities are simultaneously recorded by asking the user to utter what he is writing. We introduce two potential scenarios that we have identified as candidates for applications and we describe the database recorded according to these scenarios. We then report on a usability survey that we have conducted while recording the database. Finally, we present preliminary performance results obtained on the database using one of the scenarios.

  • [PDF] A. Humm, J. Hennebert, and R. Ingold, "Combined Handwriting and Speech Modalities for User Authentication," University of Fribourg, Department of Informatics, 270-06-05, 2006.
    [Bibtex] [Abstract]
    @techreport{humm06:tr270,
    author = "Andreas Humm and Jean Hennebert and Rolf Ingold",
    abstract = "We report on our first developments towards building a novel user authentication system using combined handwriting and speech modalities. In our project, these modalities are simultaneously recorded by asking the user to utter what he is writing. We first report on a database that we have recorded according to this scenario. Then, we report on the results of a usability survey that we have conducted while recording the database. Finally, we present the assessment protocols for authentication systems defined on the database.",
    institution = "University of Fribourg, Department of Informatics",
    keywords = "Biometrics; Signature; Speech; Handwriting; Multimodal",
    number = "270-06-05",
    title = "{C}ombined {H}andwriting and {S}peech {M}odalities for {U}ser {A}uthentication",
    Pdf = "http://www.hennebert.org/download/publications/tr-2006-chasm-combined-handwriting-speech-modalities-for-user-authentication.pdf",
    year = "2006",
    }

    We report on our first developments towards building a novel user authentication system using combined handwriting and speech modalities. In our project, these modalities are simultaneously recorded by asking the user to utter what he is writing. We first report on a database that we have recorded according to this scenario. Then, we report on the results of a usability survey that we have conducted while recording the database. Finally, we present the assessment protocols for authentication systems defined on the database.

  • [PDF] A. Humm, J. Hennebert, and R. Ingold, "Gaussian Mixture Models for CHASM Signature Verification," in 3rd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms (MLMI 06), Washington, USA, 2006, pp. 102-113.
    [Bibtex] [Abstract]
    @conference{humm06:mlmi,
    author = "Andreas Humm and Jean Hennebert and Rolf Ingold",
    abstract = "In this paper we report on first experimental results of a novel multimodal user authentication system based on a combined acquisition of online handwritten signature and speech modalities. In our project, the so-called CHASM signatures are recorded by asking the user to utter what he is writing. CHASM actually stands for Combined Handwriting and Speech Modalities where the pen and voice signals are simultaneously recorded. We have built a baseline CHASM signature verification system for which we have conducted a complete experimental evaluation. This baseline system is composed of two Gaussian Mixture Models sub-systems that model independently the pen and voice signal. A simple fusion of both sub-systems is performed at the score level. The evaluation of the verification system is conducted on CHASM signatures taken from the MyIDea multimodal database, accordingly to the protocols provided with the database. This allows us to draw our first conclusions in regards to time variability impact, to skilled versus unskilled forgeries attacks and to some training parameters. Results are also reported for the two sub-systems evaluated separately and for the global system.",
    booktitle = "3rd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms (MLMI 06), Washington, USA",
    editor = "Steve Renals; Samy Bengio; Jonathan Fiscus",
    isbn = "9783540692676",
    keywords = "Biometrics; Signature; Speech; Handwriting",
    month = "May",
    pages = "102-113",
    publisher = "Springer Verlag",
    series = "Lecture Notes in Computer Science",
    title = "{G}aussian {M}ixture {M}odels for {CHASM} {S}ignature {V}erification",
    Pdf = "http://www.hennebert.org/download/publications/mlmi-2006-gaussian-mixture-models-chasm-signature-verification.pdf",
    volume = "4299",
    year = "2006",
    }

    In this paper we report on first experimental results of a novel multimodal user authentication system based on a combined acquisition of online handwritten signature and speech modalities. In our project, the so-called CHASM signatures are recorded by asking the user to utter what he is writing. CHASM actually stands for Combined Handwriting and Speech Modalities, where the pen and voice signals are simultaneously recorded. We have built a baseline CHASM signature verification system for which we have conducted a complete experimental evaluation. This baseline system is composed of two Gaussian Mixture Models sub-systems that model the pen and voice signals independently. A simple fusion of both sub-systems is performed at the score level. The evaluation of the verification system is conducted on CHASM signatures taken from the MyIDea multimodal database, according to the protocols provided with the database. This allows us to draw our first conclusions with regard to the impact of time variability, to skilled versus unskilled forgery attacks and to some training parameters. Results are also reported for the two sub-systems evaluated separately and for the global system.

  • K. Cristiano, P. Kuonen, O. Beaumont, and V. Boudet, "Smart Grid Node: un noeud intelligent pour la grille," 17es Rencontres francophones du Parallélisme, pp. 164-171, 2006.
    [Bibtex]
    @article{Cristiano:610,
    Author = {Kevin Cristiano and Pierre Kuonen and Olivier Beaumont and Vincent Boudet},
    Journal = {17es Rencontres francophones du Parall{\'e}lisme},
    Month = {oct},
    Pages = {164-171},
    Title = {Smart Grid Node: un noeud intelligent pour la grille},
    Year = {2006}}
  • [PDF] A. Wahl, J. Hennebert, A. Humm, and R. Ingold, "Generation and Evaluation of Brute-Force Signature Forgeries," in International Workshop on Multimedia Content Representation, Classification and Security (MRCS'06), Istanbul, Turkey, 2006, pp. 2-9.
    [Bibtex] [Abstract]
    @conference{wahl06:mrcs,
    author = "Alain Wahl and Jean Hennebert and Andreas Humm and Rolf Ingold",
    abstract = "We present a procedure to create brute-force signature forgeries. The procedure is supported by Sign4J, a dynamic signature imitation training software that was specifically built to help people learn to imitate the dynamics of signatures. The main novelty of the procedure lies in a feedback mechanism that is provided to let the user know how good the imitation is and on what part of the signature the user has still to improve. The procedure and the software are used to generate a set of brute-force signatures on the MCYT-100 database. This set of forged signatures is used to evaluate the rejection performance of a baseline dynamic signature verification system. As expected, the brute-force forgeries generate more false acceptation in comparison to the random and low-force forgeries available in the MCYT-100 database.",
    booktitle = "International Workshop on Multimedia Content Representation, Classification and Security (MRCS'06), Istanbul, Turkey",
    isbn = "9783540393924",
    keywords = "Biometrics; Signature; Forgeries",
    month = "September",
    pages = "2-9",
    series = "Lecture Notes in Computer Science",
    title = "{G}eneration and {E}valuation of {B}rute-{F}orce {S}ignature {F}orgeries",
    Pdf = "http://www.hennebert.org/download/publications/iwmrcs-2006-generation-evaluation-brute-force-signature-forgeries.pdf",
    volume = "4105",
    year = "2006",
    }

    We present a procedure to create brute-force signature forgeries. The procedure is supported by Sign4J, a dynamic signature imitation training software that was specifically built to help people learn to imitate the dynamics of signatures. The main novelty of the procedure lies in a feedback mechanism that lets the user know how good the imitation is and on what part of the signature the user still has to improve. The procedure and the software are used to generate a set of brute-force signatures on the MCYT-100 database. This set of forged signatures is used to evaluate the rejection performance of a baseline dynamic signature verification system. As expected, the brute-force forgeries generate more false acceptances in comparison to the random and low-force forgeries available in the MCYT-100 database.

  • [PDF] A. Wahl, J. Hennebert, A. Humm, and R. Ingold, "A novel method to generate Brute-Force Signature Forgeries," University of Fribourg, Department of Informatics, 274-06-09, 2006.
    [Bibtex] [Abstract]
    @techreport{wahl06:tr274,
    author = "Alain Wahl and Jean Hennebert and Andreas Humm and Rolf Ingold",
    abstract = "We present a procedure to create brute-force signature forgeries. The procedure is supported by Sign4J, a dynamic signature imitation training software that was specifically built to help people learn to imitate the dynamics of signatures. The main novelty of the procedure lies in a feedback mechanism that is provided to let the user know how good the imitation is and on what part of the signature the user has still to improve. The procedure and the software are used to generate a set of brute-force signatures on the MCYT-100 database. This set of forged signatures is used to evaluate the rejection performance of a baseline dynamic signature verification system. As expected, the brute-force forgeries generate more false acceptation in comparison to the random and low-force forgeries available in the MCYT-100 database.",
    institution = "University of Fribourg, Department of Informatics",
    keywords = "Biometrics; Signature; Forgeries",
    month = "September",
    number = "274-06-09",
    title = "{A} novel method to generate {B}rute-{F}orce {S}ignature {F}orgeries",
    Pdf = "http://www.hennebert.org/download/publications/tr-2006-sign4j-novel-method-generate-brute-force-signature-forgeries.pdf",
    year = "2006",
    }

    We present a procedure to create brute-force signature forgeries. The procedure is supported by Sign4J, a dynamic signature imitation training software that was specifically built to help people learn to imitate the dynamics of signatures. The main novelty of the procedure lies in a feedback mechanism that lets the user know how good the imitation is and on what part of the signature the user still has to improve. The procedure and the software are used to generate a set of brute-force signatures on the MCYT-100 database. This set of forged signatures is used to evaluate the rejection performance of a baseline dynamic signature verification system. As expected, the brute-force forgeries generate more false acceptances in comparison to the random and low-force forgeries available in the MCYT-100 database.

  • [PDF] B. Dumas, C. Pugin, J. Hennebert, D. Petrovska, A. Humm, F. Evequoz, R. Ingold, and D. von Rotz, "MyIDea - Multimodal Biometrics Database, Description of Acquisition Protocols," in Biometrics on the Internet, 3rd COST 275 Workshop, Hatfield, UK, 2005, pp. 59-62.
    [Bibtex] [Abstract]
    @conference{duma05:cost,
    author = "Bruno Dumas and Catherine Pugin and Jean Hennebert and Dijana Petrovska and Andreas Humm and Florian Evequoz and Rolf Ingold and von Rotz, Didier",
    abstract = "This document describes the acquisition protocols of MyIDea, a new large and realistic multimodal biometric database designed to conduct research experiments in Identity Verification (IV). The key points of MyIDea are threefold: (1) it is strongly multimodal; (2) it implements realistic scenarios in an open-set framework; (3) it uses sensors of different quality to record most of the modalities. The combination of these three points makes MyIDea novel and pretty unique in comparison to existing databases. Furthermore, special care is put in the design of the acquisition procedures to allow MyIDea to complement existing databases such as BANCA, MCYT or BIOMET. MyIDea includes talking face, audio, fingerprints, signature, handwriting and hand geometry. MyIDea will be available early 2006 with an initial set of 104 subjects recorded over three sessions. Other recording sets will be potentially planned in 2006.",
    booktitle = "Biometrics on the Internet, 3rd COST 275 Workshop, Hatfield, UK",
    keywords = "Biometrics; Database; Speech; Image; Fingerprint; Hand; Handwriting; Signature",
    pages = "59-62",
    title = "{M}y{ID}ea - {M}ultimodal {B}iometrics {D}atabase, {D}escription of {A}cquisition {P}rotocols",
    Pdf = "http://www.hennebert.org/download/publications/cost275-2005-MyIdea-multimodal-biometrics-database-description-acquisition-protocols.pdf",
    volume = " ",
    year = "2005",
    }

    This document describes the acquisition protocols of MyIDea, a new large and realistic multimodal biometric database designed to conduct research experiments in Identity Verification (IV). The key points of MyIDea are threefold: (1) it is strongly multimodal; (2) it implements realistic scenarios in an open-set framework; (3) it uses sensors of different quality to record most of the modalities. The combination of these three points makes MyIDea novel and pretty unique in comparison to existing databases. Furthermore, special care is put in the design of the acquisition procedures to allow MyIDea to complement existing databases such as BANCA, MCYT or BIOMET. MyIDea includes talking face, audio, fingerprints, signature, handwriting and hand geometry. MyIDea will be available early 2006 with an initial set of 104 subjects recorded over three sessions. Other recording sets will be potentially planned in 2006.

  • [PDF] B. Dumas, J. Hennebert, A. Humm, R. Ingold, D. Petrovska, C. Pugin, and D. von Rotz, "MyIdea - Sensors Specifications and Acquisition Protocol," University of Fribourg, Department of Informatics, 256-05-12, 2005.
    [Bibtex] [Abstract]
    @techreport{henn05:tr256,
    author = "Bruno Dumas and Jean Hennebert and Andreas Humm and Rolf Ingold and Dijana Petrovska and Catherine Pugin and von Rotz, Didier",
    abstract = "In this document we describe the sensor specifications and acquisition protocol of MyIdea, a new large and realistic multi-modal biometric database designed to conduct research experiments in Identity Verification.",
    institution = "University of Fribourg, Department of Informatics",
    keywords = "Biometrics; Database; Speech; Image; Fingerprint; Hand; Handwriting",
    month = "June",
    number = "256-05-12",
    title = "{M}y{I}dea - {S}ensors {S}pecifications and {A}cquisition {P}rotocol",
    Pdf = "http://www.hennebert.org/download/publications/tr-2006-myidea-sensors-specificationsacquisition-protocol.pdf",
    year = "2005",
    }

    In this document we describe the sensor specifications and acquisition protocol of MyIdea, a new large and realistic multi-modal biometric database designed to conduct research experiments in Identity Verification.

  • J. Hennebert, A. Humm, B. Dumas, C. Pugin, and F. Evequoz, Web Site of MyIDea Multimodal Database, 2005.
    [Bibtex] [Abstract]
    @misc{henn05:myid,
    author = "Jean Hennebert and Andreas Humm and Bruno Dumas and Catherine Pugin and Florian Evequoz",
    abstract = "In the framework of the Swiss National Center of Competence in Research (NCCR) on Interactive Multimodal Information Management IM2 and of the european IST BioSecure project, the DIVA group of the informatics department of the university of Fribourg - DIUF - has recorded a multimodal biometric database called MyIDea. The recorded data that will be made available to institutes for research purposes.
    The acquisition campaign started end of 2004 and finished in December 2005. The database is now in its validation phase. Some data sets are already available for distribution (please contact us to check planned dates for availabilities or fill-in this inline formular to express your interest in the data).",
    howpublished = "http://diuf.unifr.ch/go/myidea",
    keywords = "Biometrics; Database; Speech; Image; Fingerprint; Hand; Handwriting",
    title = "{W}eb {S}ite of {M}y{ID}ea {M}ultimodal {D}atabase",
    url = "http://diuf.unifr.ch/go/myidea",
    year = "2005",
    }

    In the framework of the Swiss National Center of Competence in Research (NCCR) on Interactive Multimodal Information Management IM2 and of the European IST BioSecure project, the DIVA group of the informatics department of the University of Fribourg - DIUF - has recorded a multimodal biometric database called MyIDea. The recorded data will be made available to institutes for research purposes. The acquisition campaign started at the end of 2004 and finished in December 2005. The database is now in its validation phase. Some data sets are already available for distribution (please contact us to check the planned availability dates or fill in the online form to express your interest in the data).

  • V. Keller, K. Cristiano, R. Gruber, P. Kuonen, S. Maffioletti, N. Nellari, M. Sawley, T. Tran, P. Wieder, and W. Ziegler, "Integration of ISS into the VIOLA Meta-scheduling Environment," CoreGRID Integration Workshop, pp. 357-366, 2005.
    [Bibtex]
    @article{Vincent:614,
    Author = {Vincent Keller and Kevin Cristiano and Ralf Gruber and Pierre Kuonen and Sergio Maffioletti and Nello Nellari and Marie-Christine Sawley and Trach-Minh Tran and Philipp Wieder and Wolfgang Ziegler},
    Journal = {CoreGRID Integration Workshop},
    Month = {nov},
    Pages = {357-366},
    Title = {Integration of ISS into the VIOLA Meta-scheduling Environment},
    Year = {2005}}
  • P. Kuonen and K. Cristiano, "WOS : la Grille avant la grille," Flash Informatique, 2005.
    [Bibtex]
    @article{Kuonen:617,
    Author = {Pierre Kuonen and Kevin Cristiano},
    Journal = {Flash Informatique},
    Month = {August},
    Title = {WOS : la Grille avant la grille},
    Year = {2005}}
  • P. Manneback, G. Bergere, N. Emad, R. Gruber, V. Keller, P. Kuonen, T. A. Nguyen, and S. Noël, "Towards a scheduling policy for hybrid methods on computational Grids," CoreGRID Integration Workshop, pp. 308-316, 2005.
    [Bibtex]
    @article{Manneback:616,
    Author = {Pierre Manneback and Guy Bergere and Nahid Emad and Ralf Gruber and Vincent Keller and Pierre Kuonen and Tuan Anh Nguyen and S{\'e}bastien No{\"e}l},
    Journal = {CoreGRID Integration Workshop},
    Month = {nov},
    Pages = {308-316},
    Title = {Towards a scheduling policy for hybrid methods on computational Grids},
    Year = {2005}}
  • E. Mugellini, O. A. Khaled, M. C. Pettanati, and P. Kuonen, "eGovSM Metadata Model: Towards a Flexible, Interoperable and Scalable eGovernment Service Marketplace," IEEE International Conference on e-Technology e-Commerce and e-Service (EEE'05), pp. 618-621, 2005.
    [Bibtex]
    @article{Mugellini:734,
    Author = {Elena Mugellini and Omar Abou Khaled and Maria Chiara Pettanati and Pierre Kuonen},
    Journal = {IEEE International Conference on e-Technology e-Commerce and e-Service (EEE'05)},
    Month = {mar},
    Pages = {618-621},
    Title = {eGovSM Metadata Model: Towards a Flexible, Interoperable and Scalable eGovernment Service Marketplace},
    Year = {2005}}
  • T. Nguyen and P. Kuonen, "ParoC++: Extending C++ to the Grid: International Conference on Grid Computing and Applications. GCA 2005," International Conference on Grid Computing and Applications, pp. 177-183, 2005.
    [Bibtex]
    @article{Tuan-Anh:1133,
    Author = {Tuan-Anh Nguyen and Pierre Kuonen},
    Journal = {International Conference on Grid Computing and Applications (GCA 2005)},
    Month = {jun},
    Pages = {177-183},
    Title = {ParoC++: Extending C++ to the Grid},
    Year = {2005}}
  • O. Abou Khaled, E. Mugellini, M. C. Pettanati, and P. Kuonen, "eGovSM Metadata Model: Towards a Flexible, Interoperable and Scalable eGovernment Service Marketplace," IEEE International Conference on e-Technology, e-Commerce and e-Service (EEE'05), 2005.
    [Bibtex]
    @article{Khaled:538,
    Author = {Omar Abou Khaled and Elena Mugellini and Maria Chiara Pettanati and Pierre Kuonen},
    Journal = {IEEE International Conference on e-Technology, e-Commerce and e-Service (EEE'05)},
    Month = {aug},
    Title = {eGovSM Metadata Model: Towards a Flexible, Interoperable and Scalable eGovernment Service Marketplace},
    Year = {2005}}
  • M. Pasin, P. Kuonen, M. Danelutto, and M. Aldinucci, "Skeleton Parallel Programming and Parallel Objects," CoreGRID Integration Workshop, pp. 115-124, 2005.
    [Bibtex]
    @article{Marcelo:615,
    Author = {Marcelo Pasin and Pierre Kuonen and Marco Danelutto and Marco Aldinucci},
    Journal = {CoreGRID Integration Workshop},
    Month = {nov},
    Pages = {115-124},
    Title = {Skeleton Parallel Programming and Parallel Objects},
    Year = {2005}}
  • P. Kuonen, R. Gruber, and M. Sawley, "ISS: The collaborative Development of an Intelligent Scheduler," SWITCHjournal, pp. 18-19, 2004.
    [Bibtex]
    @article{Kuonen:489,
    Author = {Pierre Kuonen and Ralf Gruber and Marie-Christine Sawley},
    Issn = {1422-5662},
    Journal = {SWITCHjournal},
    Pages = {18-19},
    Title = {ISS: The collaborative Development of an Intelligent Scheduler},
    Year = {2004}}
  • [PDF] J. Hennebert, E. Mosanya, G. Zanellato, F. Hambye, and U. Mosanya, EPO Patent pending: Speech Recognition Device, 2003.
    [Bibtex] [Abstract]
    @misc{henn03:epo,
    author = "Jean Hennebert and Emeka Mosanya and Georges Zanellato and Fr{\'e}d{\'e}ric Hambye and Ugo Mosanya",
    abstract = "A speech recognition device having a hidden operator communication unit and being connectable to a voice communication system having a user communication unit, said speech recognition device comprising a processing unit and a memory provided for storing speech recognition data comprising command models and at least one threshold value (T) said processing unit being provided for processing speech data, received from said voice communication system, by scoring said command models against said speech data in order to determine at least one recognition hypothesis (O), said processing unit being further provided for determining a confidence score (S) on the basis of said recognition hypothesis and for weighing said confidence score against said threshold values in order to accept or reject said received speech data, said device further comprises forwarding means provided for forwarding said speech data to said hidden operator communication unit in response to said rejection of received speech data, said hidden operator communication unit being provided for generating upon receipt of said rejection a recognition string based on said received speech data, said hidden operator communication unit being further provided for generating a target hypothesis (Ot) on the basis of said recognition string generated by said hidden operator communication unit, said device further comprising evaluation means provided for evaluating said target hypothesis with respect to said determined recognition hypothesis and for adapting said stored command models and/or threshold values on the basis of results obtained by said evaluation.",
    howpublished = " EPO EP1378886 (A1) ― 2004-01-07",
    keywords = "Speech Processing, Speech Recognition",
    month = " ",
    title = "{EPO} {P}atent pending: {S}peech {R}ecognition {D}evice",
    Pdf = "http://www.hennebert.org/download/publications/epo-2003-EP1378886A1-speech-recognition-device.pdf",
    year = "2003",
    }

    A speech recognition device having a hidden operator communication unit and being connectable to a voice communication system having a user communication unit, said speech recognition device comprising a processing unit and a memory provided for storing speech recognition data comprising command models and at least one threshold value (T) said processing unit being provided for processing speech data, received from said voice communication system, by scoring said command models against said speech data in order to determine at least one recognition hypothesis (O), said processing unit being further provided for determining a confidence score (S) on the basis of said recognition hypothesis and for weighing said confidence score against said threshold values in order to accept or reject said received speech data, said device further comprises forwarding means provided for forwarding said speech data to said hidden operator communication unit in response to said rejection of received speech data, said hidden operator communication unit being provided for generating upon receipt of said rejection a recognition string based on said received speech data, said hidden operator communication unit being further provided for generating a target hypothesis (Ot) on the basis of said recognition string generated by said hidden operator communication unit, said device further comprising evaluation means provided for evaluating said target hypothesis with respect to said determined recognition hypothesis and for adapting said stored command models and/or threshold values on the basis of results obtained by said evaluation.

  • P. Kuonen, G. Vivier, K. El-Khazen, and P. Demestichas, "Composite Simulations for B3G Service and Network Management Platform," Fifth IEEE International Conference on Mobile and Wireless Communications Networks (MWCN 2003), 2003.
    [Bibtex]
    @article{Kuonen:517,
    Author = {Pierre Kuonen and Guillaume Vivier and Karim El-Khazen and Panagiotis Demestichas},
    Journal = {Fifth IEEE International Conference on Mobile and Wireless Communications Networks (MWCN 2003)},
    Month = {sep},
    Title = {Composite Simulations for B3G Service and Network Management Platform},
    Year = {2003}}
  • P. Kuonen and J. Tschopp, "Sommeil et ronflements," pp. 16-18, 2003.
    [Bibtex]
    @article{Kuonen:490,
    Author = {Pierre Kuonen and Jean-Marie Tschopp},
    Month = {feb},
    Pages = {16-18},
    Title = {Sommeil et ronflements},
    Year = {2003}}
  • T. Nguyen and P. Kuonen, "Parallelization Scheme for an Approximate Solution to Time Constraint Problems," International Conference on Computational Science 2003 (ICCS2003), 2003.
    [Bibtex]
    @article{Tuan-Anh:419,
    Author = {Tuan-Anh Nguyen and Pierre Kuonen},
    Journal = {International Conference on Computational Science 2003 (ICCS2003)},
    Month = {jun},
    Title = {Parallelization Scheme for an Approximate Solution to Time Constraint Problems},
    Year = {2003}}
  • T. Nguyen and P. Kuonen, "ParoC++: A requirement-driven parallel object-oriented programming language," The 8th International Workshop on High-Level Parallel Programming Models and Supportive Environments (HIPS), 2003.
    [Bibtex]
    @article{Tuan-Anh:418,
    Author = {Tuan-Anh Nguyen and Pierre Kuonen},
    Journal = {The 8th International Workshop on High-Level Parallel Programming Models and Supportive Environments (HIPS)},
    Month = {apr},
    Title = {ParoC++: A requirement-driven parallel object-oriented programming language},
    Year = {2003}}
  • T. Nguyen and P. Kuonen, "An Object-Oriented Framework for Efficient Data Access in Data Intensive Computing," The 5th workshop on Advances in Parallel and Distributed Computational Models, 2003.
    [Bibtex]
    @article{Tuan-Anh:416,
    Author = {Tuan-Anh Nguyen and Pierre Kuonen},
    Journal = {The 5th workshop on Advances in Parallel and Distributed Computational Models},
    Month = {apr},
    Title = {An Object-Oriented Framework for Efficient Data Access in Data Intensive Computing},
    Year = {2003}}
  • M. Sawley, R. Gruber, and P. Kuonen, "Le Grid à l'EPFL: Déploiement et évolution," pp. 12-13, 2003.
    [Bibtex]
    @article{Marie-Christine:491,
    Author = {Marie-Christine Sawley and Ralf Gruber and Pierre Kuonen},
    Month = {apr},
    Pages = {12-13},
    Title = {Le Grid {\`a} l'EPFL: D{\'e}ploiement et {\'e}volution},
    Year = {2003}}
  • G. Vivier, K. El-Khazen, P. Demestichas, and P. Kuonen, "Composite Simulations for B3G Service and Network Management Platform," Proceedings of the Fifth IEEE International Conference on Mobile and Wireless Communications Networks (MWCN 2003), 2003.
    [Bibtex]
    @article{Guillaume:485,
    Author = {Guillaume Vivier and Karim El-Khazen and Panagiotis Demestichas and Pierre Kuonen},
    Journal = {Proceedings of the Fifth IEEE International Conference on Mobile and Wireless Communications Networks (MWCN 2003)},
    Month = {sep},
    Title = {Composite Simulations for B3G Service and Network Management Platform},
    Year = {2003}}
  • O. A. Khaled, H. C. Drissi, P. Kuonen, and J. Wagen, "Informatique mobile: L'EIA-FR joue le mobile," Flash Informatique, pp. 23-25, 2002.
    [Bibtex]
    @article{Khaled:476,
    Author = {Omar Abou Khaled and Houda Chabbi Drissi and Pierre Kuonen and Jean-Fr{\'e}d{\'e}ric Wagen},
    Journal = {Flash Informatique},
    Month = {aug},
    Pages = {23-25},
    Title = {Informatique mobile: L'EIA-FR joue le mobile},
    Year = {2002}}
  • T. Nguyen and P. Kuonen, "A Model of Dynamic Parallel Objects for Metacomputing," Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'02), vol. 1, pp. 192-199, 2002.
    [Bibtex]
    @article{Tuan-Anh:417,
    Author = {Tuan-Anh Nguyen and Pierre Kuonen},
    Journal = {Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'02)},
    Month = {jun},
    Pages = {192-199},
    Title = {A Model of Dynamic Parallel Objects for Metacomputing},
    Volume = {1},
    Year = {2002}}
  • P. Demestichas, L. Papadopoulou, M. Theologou, G. Vivier, G. Martinez, F. Galliano, R. Menolascino, A. Sobrino, E. S. de la Fuente Gallego, D. Zeglache, P. Kuonen, S. Lorenzon, and C. Caragiuli, "Management of Networks and Services in a Diversified Radio Environment," Proceedings of IST Mobile Communications Summit 2001, pp. 406-411, 2001.
    [Bibtex]
    @article{Demestichas:291,
    Author = {P. Demestichas and L. Papadopoulou and M. Theologou and G. Vivier and G. Martinez and F. Galliano and R. Menolascino and A. Sobrino and E.S. de la Fuente Gallego and D. Zeglache and Pierre Kuonen and S. Lorenzon and C. Caragiuli},
    Journal = {Proceedings of IST Mobile Communications Summit 2001},
    Month = {sep},
    Pages = {406-411},
    Title = {Management of Networks and Services in a Diversified Radio Environment},
    Year = {2001}}
  • N. Abdennadher, G. Babin, P. Kropf, and P. Kuonen, "A Dynamically Configurable Environment for High Performance Computing," Proceedings of the Advanced Simulation Technologies Conference (ASTC'2000), 2000.
    [Bibtex]
    @article{Abdennadher:298,
    Author = {N. Abdennadher and G. Babin and P. Kropf and Pierre Kuonen},
    Journal = {Proceedings of the Advanced Simulation Technologies Conference (ASTC'2000)},
    Month = {apr},
    Title = {A Dynamically Configurable Environment for High Performance Computing},
    Year = {2000}}
  • N. Abdennadher, G. Babin, and P. Kuonen, "Combining Metacomputing and High Performance Computing," Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'2000), 2000.
    [Bibtex]
    @article{Abdennadher:296,
    Author = {N. Abdennadher and G. Babin and Pierre Kuonen},
    Journal = {Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'2000)},
    Month = {jun},
    Title = {Combining Metacomputing and High Performance Computing},
    Year = {2000}}
  • P. Calegari, F. Guidec, P. Kuonen, and F. Nielsen, "Combinatorial optimization algorithms for radio network planning," Theoretical Computer Science, vol. 265, 2000.
    [Bibtex]
    @article{Calegari:252,
    Author = {P. Calegari and F. Guidec and Pierre Kuonen and F. Nielsen},
    Journal = {Theoretical Computer Science},
    Title = {Combinatorial optimization algorithms for radio network planning},
    Volume = {265},
    Year = {2000}}
  • [PDF] [DOI] C. Fredouille, J. Mariethoz, C. Jaboulet, J. Hennebert, C. Mokbel, and F. Bimbot, "Behavior of a Bayesian adaptation method for incremental enrollment in speaker verification," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2000), Istanbul, Turkey, 2000, pp. 1197-1200.
    [Bibtex] [Abstract]
    @conference{fred00:icassp,
    author = "Corinne Fredouille and Johnny Mariethoz and C{\'e}dric Jaboulet and Jean Hennebert and Chafik Mokbel and Fr{\'e}d{\'e}ric Bimbot",
    abstract = "Classical adaptation approaches are generally used for speaker or environment adaptation of speech recognition systems. In this paper, we use such techniques for the incremental training of client models in a speaker verification system. The initial model is trained on a very limited amount of data and then progressively updated with access data, using a segmental-EM procedure. In supervised mode (i.e. when access utterances are certified), the incremental approach yields equivalent performance to the batch one. We also investigate on the impact of various scenarios of impostor attacks during the incremental enrollment phase. All results are obtained with the Picassoft platform-the state-of-the-art speaker verification system developed in the PICASSO project",
    booktitle = "IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2000), Istanbul, Turkey",
    doi = "10.1109/ICASSP.2000.859180",
    isbn = "0780362934",
    keywords = "Speaker Verification; Speech Processing; Bayesian Adaptation",
    pages = "1197-1200",
    title = "{B}ehavior of a {B}ayesian adaptation method for incremental enrollment in speaker verification",
    Pdf = "http://www.hennebert.org/download/publications/icassp-2000-bayesian-adaptation-incremental-enrollment-speaker-verification.pdf",
    volume = "2",
    year = "2000",
    }

    Classical adaptation approaches are generally used for speaker or environment adaptation of speech recognition systems. In this paper, we use such techniques for the incremental training of client models in a speaker verification system. The initial model is trained on a very limited amount of data and then progressively updated with access data, using a segmental-EM procedure. In supervised mode (i.e. when access utterances are certified), the incremental approach yields performance equivalent to the batch one. We also investigate the impact of various scenarios of impostor attacks during the incremental enrollment phase. All results are obtained with the Picassoft platform, the state-of-the-art speaker verification system developed in the PICASSO project.

  • [PDF] C. Fredouille, J. Mariethoz, C. Jaboulet, J. Hennebert, C. Mokbel, and F. Bimbot, "Behavior of a Bayesian adaptation method for incremental enrollment in speaker verification - Technical Report," IDIAP, 02, 2000.
    [Bibtex] [Abstract]
    @techreport{fred00:idiap,
    author = "Corinne Fredouille and Johnny Mariethoz and C{\'e}dric Jaboulet and Jean Hennebert and Chafik Mokbel and Fr{\'e}d{\'e}ric Bimbot",
    abstract = "Classical adaptation approaches are generally used for speaker or environment adaptation of speech recognition systems. In this paper, we use such techniques for the incremental training of client models in a speaker verification system. The initial model is trained on a very limited amount of data and then progressively updated with access data, using a segmental-EM procedure. In supervised mode (i.e. when access utterances are certified), the incremental approach yields equivalent performance to the batch one. We also investigate on the impact of various scenarios of impostor attacks during the incremental enrollment phase. All results are obtained with the Picassoft platform - the state-of-the-art speaker verification system developed in the PICASSO project.",
    institution = "IDIAP",
    keywords = "Speaker Verification; Speech Processing; Bayesian Adaptation",
    month = "January",
    number = "02",
    title = "{B}ehavior of a {B}ayesian adaptation method for incremental enrollment in speaker verification - {T}echnical {R}eport",
    Pdf = "http://www.hennebert.org/download/publications/tr-2000-bayesian-adaptation-incremental-enrollment-speaker-verification.pdf",
    year = "2000",
    }

    Classical adaptation approaches are generally used for speaker or environment adaptation of speech recognition systems. In this paper, we use such techniques for the incremental training of client models in a speaker verification system. The initial model is trained on a very limited amount of data and then progressively updated with access data, using a segmental-EM procedure. In supervised mode (i.e. when access utterances are certified), the incremental approach yields performance equivalent to the batch one. We also investigate the impact of various scenarios of impostor attacks during the incremental enrollment phase. All results are obtained with the Picassoft platform, the state-of-the-art speaker verification system developed in the PICASSO project.

  • [PDF] [DOI] J. Hennebert, H. Melin, D. Petrovska, and D. Genoud, "POLYCOST: A telephone-speech database for speaker recognition," Speech Communication, vol. 31, iss. 2-3, pp. 265-270, 2000.
    [Bibtex] [Abstract]
    @article{henn00:spec,
    author = "Jean Hennebert and Hakan Melin and Dijana Petrovska and Dominique Genoud",
    abstract = "This article presents an overview of the POLYCOST database dedicated to speaker recognition applications over the telephone network. The main characteristics of this database are: medium mixed speech corpus size (>100 speakers), English spoken by foreigners, mainly digits with some free speech, collected through international telephone lines, and minimum of nine sessions for 85\% of the speakers. Cet article pr{\'e}sente une description de la base de donn{\'e}es POLYCOST qui est d{\'e}di{\'e}e aux applications de reconnaissance du locuteur {\`a} travers les lignes t{\'e}l{\'e}phoniques. Les caract{\'e}ristiques de la base de donn{\'e}es sont: corpus moyen {\`a} contenu vari{\'e} (>100 locuteurs), anglais parl{\'e} par des {\'e}trangers, chiffres lus et parole libre, enregistrement {\`a} travers des lignes de t{\'e}l{\'e}phone internationales, minimum de neuf sessions d'enregistrement pour 85% des locuteurs.",
    doi = "10.1016/S0167-6393(99)00082-5",
    issn = "0167-6393",
    journal = "Speech Communication",
    keywords = "Speaker Verification; Database; Speech Processing",
    month = "June",
    number = "2-3",
    pages = "265-270",
    title = "{POLYCOST}: {A} telephone-speech database for speaker recognition",
    Pdf = "http://www.hennebert.org/download/publications/specom-2000-polycost-telephone-speech-database-speaker-recognition.pdf",
    volume = "31",
    year = "2000",
    }

    This article presents an overview of the POLYCOST database dedicated to speaker recognition applications over the telephone network. The main characteristics of this database are: medium mixed speech corpus size (>100 speakers), English spoken by foreigners, mainly digits with some free speech, collected through international telephone lines, and a minimum of nine sessions for 85% of the speakers.

  • P. Kuonen, G. Babin, N. Abdennadher, and P. J. Cagnard, "Intensional High Performance Computing," Proceedings of the Distributed Communities on the Web (DCW'2000) workshop, 2000.
    [Bibtex]
    @article{Kuonen:297,
    Author = {Pierre Kuonen and G. Babin and N. Abdennadher and P.J. Cagnard},
    Journal = {Proceedings of the Distributed Communities on the Web (DCW'2000) workshop},
    Month = {jun},
    Title = {Intensional High Performance Computing},
    Year = {2000}}
  • P. Kuonen, N. Abdennadher, and G. Babin, "Le MetaComputing au Service du Calcul de Haute Performance," Technique et Sciences Informatiques (TSI), vol. 19, pp. 743-765, 2000.
    [Bibtex]
    @article{Kuonen:295,
    Author = {Pierre Kuonen and N. Abdennadher and G. Babin},
    Issn = {2-7462-0164-X},
    Journal = {Technique et Sciences Informatiques (TSI)},
    Pages = {743-765},
    Title = {Le MetaComputing au Service du Calcul de Haute Performance},
    Volume = {19},
    Year = {2000}}
  • [PDF] [DOI] B. Nedic, F. Bimbot, R. Blouet, J. Bonastre, G. Caloz, J. Cernocky, G. Chollet, G. Durou, C. Fredouille, D. Genoud, G. Gravier, J. Hennebert, J. Kharroubi, I. Magrin-Chagnolleau, T. Merlin, C. Mokbel, D. Petrovska, S. Pigeon, M. Seck, P. Verlinde, and M. Zouhal, "The ELISA Systems for the NIST'99 Evaluation in Speaker Detection and Tracking," Digital Signal Processing Journal, vol. 10, iss. 1-3, pp. 143-153, 2000.
    [Bibtex] [Abstract]
    @article{nedic00:dsp,
    author = "Bojan Nedic and Fr{\'e}d{\'e}ric Bimbot and Rapha{\"e}l Blouet and Jean-Fran{\c{c}}ois Bonastre and Gilles Caloz and Jan Cernocky and G{\'e}rard Chollet and Geoffrey Durou and Corinne Fredouille and Dominique Genoud and Guillaume Gravier and Jean Hennebert and Jamal Kharroubi and Ivan Magrin-Chagnolleau and Teva Merlin and Chafik Mokbel and Dijana Petrovska and St{\'e}phane Pigeon and Mouhamadou Seck and Patrick Verlinde and Meriem Zouhal",
    abstract = "This article presents the text-independent speaker detection and tracking systems developed by the members of the ELISA Consortium for the NIST'99 speaker recognition evaluation campaign. ELISA is a consortium grouping researchers of several laboratories sharing software modules, resources and experimental protocols. Each system is briefly described, and comparative results on the NIST'99 evaluation tasks are discussed.",
    doi = "10.1006/dspr.1999.0365",
    issn = "1051-2004",
    journal = "Digital Signal Processing Journal",
    keywords = "text-independent; speaker verification; speaker detection; speaker tracking; NIST evaluation campaign",
    month = "January",
    number = "1-3",
    pages = "143-153",
    title = "{T}he {ELISA} {S}ystems for the {NIST}'99 {E}valuation in {S}peaker {D}etection and {T}racking",
    Pdf = "http://www.hennebert.org/download/publications/dsp-2000-elisa-systems-nist99-evaluation-speaker-detection-tracking.pdf",
    volume = "10",
    year = "2000",
    }

    This article presents the text-independent speaker detection and tracking systems developed by the members of the ELISA Consortium for the NIST'99 speaker recognition evaluation campaign. ELISA is a consortium grouping researchers of several laboratories sharing software modules, resources and experimental protocols. Each system is briefly described, and comparative results on the NIST'99 evaluation tasks are discussed.

  • [PDF] [DOI] D. Petrovska, J. Cernocky, J. Hennebert, and G. Chollet, "Segmental Approaches for Automatic Speaker Verification," Digital Signal Processing Journal, vol. 10, iss. 1-3, pp. 198-212, 2000.
    [Bibtex] [Abstract]
    @article{petr00:dsp,
    author = "Dijana Petrovska and Jan Cernocky and Jean Hennebert and G{\'e}rard Chollet",
    abstract = "Speech is composed of different sounds (acoustic segments). Speakers differ in their pronunciation of these sounds. The segmental approaches described in this paper are meant to exploit these differences for speaker verification purposes. For such approaches, the speech is divided into different classes, and the speaker modeling is done for each class. The speech segmentation applied is based on automatic language independent speech processing tools that provide a segmentation of the speech requiring neither phonetic nor orthographic transcriptions of the speech data. Two different speaker modeling approaches, based on multilayer perceptrons (MLPs) and on Gaussian mixture models (GMMs), are studied. The MLP-based segmental systems have performance comparable to that of the global MLP-based systems, and in the mismatched train-test conditions slightly better results are obtained with the segmental MLP system. The segmental GMM systems gave poorer results than the equivalent global GMM systems.",
    doi = "10.1006/dspr.2000.0370",
    issn = "1051-2004",
    journal = "Digital Signal Processing Journal",
    keywords = "Speaker Verification; Speech Processing",
    month = "January",
    number = "1-3",
    pages = "198-212",
    title = "{S}egmental {A}pproaches for {A}utomatic {S}peaker {V}erification",
    Pdf = "http://www.hennebert.org/download/publications/dsp-2000-segmental-approaches-automatic-speaker-recognition.pdf",
    volume = "10",
    year = "2000",
    }

    Speech is composed of different sounds (acoustic segments). Speakers differ in their pronunciation of these sounds. The segmental approaches described in this paper are meant to exploit these differences for speaker verification purposes. For such approaches, the speech is divided into different classes, and the speaker modeling is done for each class. The speech segmentation applied is based on automatic language independent speech processing tools that provide a segmentation of the speech requiring neither phonetic nor orthographic transcriptions of the speech data. Two different speaker modeling approaches, based on multilayer perceptrons (MLPs) and on Gaussian mixture models (GMMs), are studied. The MLP-based segmental systems have performance comparable to that of the global MLP-based systems, and in the mismatched train-test conditions slightly better results are obtained with the segmental MLP system. The segmental GMM systems gave poorer results than the equivalent global GMM systems.
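    To make the segmental idea concrete, here is a minimal sketch (not the paper's actual system): one client GMM and one world GMM per acoustic class, with the utterance score taken as the mean per-class log-likelihood ratio. All data, class labels and model sizes below are placeholders.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    n_classes = 4                                   # hypothetical number of acoustic classes
    X = rng.normal(size=(2000, 12))                 # stand-in acoustic vectors
    labels = rng.integers(0, n_classes, size=2000)  # stand-in segmentation into classes
    W = rng.normal(0.3, 1.0, size=(2000, 12))       # stand-in world (background) data

    # One client model and one world model per acoustic class.
    client = [GaussianMixture(n_components=2, random_state=0).fit(X[labels == c])
              for c in range(n_classes)]
    world = [GaussianMixture(n_components=2, random_state=0).fit(W)
             for _ in range(n_classes)]

    def segmental_score(X_test, test_labels):
        """Mean per-class log-likelihood ratio over a test utterance."""
        ratios = [client[c].score(X_test[test_labels == c]) - world[c].score(X_test[test_labels == c])
                  for c in range(n_classes) if np.any(test_labels == c)]
        return float(np.mean(ratios))

    print(segmental_score(X[:200], labels[:200]))   # higher = more client-like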

  • P. Calégari, G. Coray, D. Kobler, P. Kuonen, and A. Hertz, "A Taxonomy of Evolutionary Algorithms in Combinatorial Optimization," Journal of Heuristics, vol. 5, pp. 145-158, 1999.
    [Bibtex]
    @article{Calegari:1131,
    Author = {Patrice Cal{\'e}gari and Giovanni Coray and Daniel Kobler and Pierre Kuonen and Alain Hertz},
    Journal = {Journal of Heuristics},
    Pages = {145-158},
    Title = {A Taxonomy of Evolutionary Algorithms in Combinatorial Optimization},
    Volume = {5},
    Year = {1999}}
  • P. Calégari, F. Guidec, P. Kuonen, and F. Nielsen, "Combinatorial optimization algorithms for radio network planning," Theoretical Computer Science (TCS), 1999.
    [Bibtex]
    @article{Calegari:321,
    Author = {P. Cal{\'e}gari and F. Guidec and Pierre Kuonen and F. Nielsen},
    Journal = {Theoretical Computer Science (TCS)},
    Title = {Combinatorial optimization algorithms for radio network planning},
    Year = {1999}}
  • [PDF] G. Chollet, J. Cernocky, G. Gravier, J. Hennebert, D. Petrovska, and F. Yvon, "Towards fully automatic speech processing techniques for interactive voice servers," in Speech Processing, Recognition and Artificial Neural Networks: Proceedings of the 3rd International School on Neural Nets, Eduardo Caianiello, G. Chollet, G. M. Di Benedetto, A. Esposito, and M. Marinaro, Eds., Springer Verlag, 1999, p. 346.
    [Bibtex] [Abstract]
    @incollection{chol98:towards,
    author = "G{\'e}rard Chollet and Jan Cernocky and Guillaume Gravier and Jean Hennebert and Dijana Petrovska and Fran{\c{c}}ois Yvon",
    abstract = "Speech Processing, Recognition and Artificial Neural Networks contains papers from leading researchers and selected students, discussing the experiments, theories and perspectives of acoustic phonetics as well as the latest techniques in the field of spe ech science and technology. Topics covered in this book include; Fundamentals of Speech Analysis and Perceptron; Speech Processing; Stochastic Models for Speech; Auditory and Neural Network Models for Speech; Task-Oriented Applications of Automatic Speech Recognition and Synthesis.",
    address = " ",
    booktitle = "Speech Processing, Recognition and Artificial Neural Networks: Proceedings of the 3rd International School on Neural Nets, Eduardo Caianiello",
    chapter = " ",
    edition = " ",
    editor = "Gerard Chollet, Gabriella M. Di Benedetto, Anna Esposito, Maria Marinaro",
    isbn = "1852330945",
    keywords = "Speech Processing, Speech Recognition",
    month = "April ",
    note = "PDF may not be the final published version. Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    pages = "346",
    publisher = "Springer Verlag",
    title = "{T}owards fully automatic speech processing techniques for interactive voice servers",
    type = " ",
    Pdf = "http://www.hennebert.org/download/publications/sprann-1999-towards-fully-automatic-speech-processing-techniques-interactive-voice-servers.pdf",
    volume = " ",
    year = "1999",
    }

    Speech Processing, Recognition and Artificial Neural Networks contains papers from leading researchers and selected students, discussing the experiments, theories and perspectives of acoustic phonetics as well as the latest techniques in the field of speech science and technology. Topics covered in this book include: Fundamentals of Speech Analysis and Perception; Speech Processing; Stochastic Models for Speech; Auditory and Neural Network Models for Speech; Task-Oriented Applications of Automatic Speech Recognition and Synthesis.

  • F. Guidec, P. Calégari, P. Kuonen, and M. Pahud, "Object-Oriented Parallel Software for Parallel Radio Wave Propagation Simulation in Urban Environment," Computers and Artificial Intelligence, 1999.
    [Bibtex]
    @article{Guidec:322,
    Author = {F. Guidec and P. Cal{\'e}gari and Pierre Kuonen and M. Pahud},
    Journal = {Computers and Artificial Intelligence},
    Title = {Object-Oriented Parallel Software for Parallel Radio Wave Propagation Simulation in Urban Environment},
    Year = {1999}}
  • P. Kuonen, "The K-Ring: a versatile model for the design of MIMD computer topology," Proceedings of the High-Performance Computing Conference (HPC'99), pp. 381-385, 1999.
    [Bibtex]
    @article{Kuonen:300,
    Author = {Pierre Kuonen},
    Journal = {Proceedings of the High-Performance Computing Conference (HPC'99)},
    Month = {apr},
    Pages = {381-385},
    Title = {The K-Ring: a versatile model for the design of MIMD computer topology},
    Year = {1999}}
  • P. Kuonen, "Parallel computer architectures for commodity computing and the Swiss-T1 machine," EPFL-Supercomputing Review, pp. 3-11, 1999.
    [Bibtex]
    @article{Kuonen:299,
    Author = {Pierre Kuonen},
    Journal = {EPFL-Supercomputing Review},
    Month = {nov},
    Pages = {3-11},
    Title = {Parallel computer architectures for commodity computing and the Swiss-T1 machine},
    Year = {1999}}
  • [PDF] J. Cernocky, G. Baudoin, D. Petrovska, J. Hennebert, and G. Chollet, "Automatically derived speech units: applications to very low rate coding and speaker verification," in First Workshop on Text Speech and Dialog (TSD'98), Brno, Czech Republic, 1998, pp. 183-188.
    [Bibtex] [Abstract]
    @conference{cern98:tsd,
    author = "Jan Cernocky and Genevi{\`e}ve Baudoin and Dijana Petrovska and Jean Hennebert and G{\'e}rard Chollet",
    abstract = "Current systems for recognition, synthesis, very low bit-rate (VLBR) coding and text-independent speaker verification rely on sub-word units determined using phonetic knowledge. This paper presents an alternative to this approach determination of speech units using ALISP (Automatic Language Independent Speech Processing) tools. Experimental results for speaker-dependent VLBR coding are reported on two databases: average rate of 120 bps for unit encoding was achieved. In verification, this approach was tested during 1998's NIST-NSA evaluation campaign with a MLP-based scoring system.",
    address = " ",
    booktitle = "First Workshop on Text Speech and Dialog (TSD'98), Brno, Czech Republic",
    crossref = " ",
    editor = " ",
    isbn = "8021018992",
    keywords = "Speaker Verification; Speech Processing",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "183-188",
    publisher = " ",
    series = " ",
    title = "{A}utomatically derived speech units: applications to very low rate coding and speaker verification",
    Pdf = "http://www.hennebert.org/download/publications/tsd-1998-automatically-derived-speech-units-application-very-low-rate-coding-speaker-verification.pdf",
    volume = " ",
    year = "1998",
    }

    Current systems for recognition, synthesis, very low bit-rate (VLBR) coding and text-independent speaker verification rely on sub-word units determined using phonetic knowledge. This paper presents an alternative to this approach: the determination of speech units using ALISP (Automatic Language Independent Speech Processing) tools. Experimental results for speaker-dependent VLBR coding are reported on two databases: an average rate of 120 bps for unit encoding was achieved. In verification, this approach was tested during 1998's NIST-NSA evaluation campaign with an MLP-based scoring system.

  • F. Guidec, P. Calegari, and P. Kuonen, "Parallel irregular software for wave propagation simulation," Future Generation Computer Systems (FGCS), vol. 13, pp. 279-289, 1998.
    [Bibtex]
    @article{Guidec:253,
    Author = {F. Guidec and P. Calegari and Pierre Kuonen},
    Journal = {Future Generation Computer Systems (FGCS)},
    Month = {mar},
    Pages = {279-289},
    Title = {Parallel irregular software for wave propagation simulation},
    Volume = {13},
    Year = {1998}}
  • F. Guidec, P. Calégari, and P. Kuonen, "Parallel Irregular Software for Wave Propagation Simulation," Future Generation Computer Systems (FGCS), vol. 13, pp. 279-289, 1998.
    [Bibtex]
    @article{Guidec:311,
    Author = {F. Guidec and P. Cal{\'e}gari and Pierre Kuonen},
    Issn = {0167-739X},
    Journal = {Future Generation Computer Systems (FGCS)},
    Month = {mar},
    Pages = {279-289},
    Title = {Parallel Irregular Software for Wave Propagation Simulation},
    Volume = {13},
    Year = {1998}}
  • [PDF] [DOI] J. Hennebert, "Hidden Markov models and artificial neural networks for speech and speaker recognition," PhD Thesis, EPFL, Lausanne, 1998.
    [Bibtex] [Abstract]
    @phdthesis{henn98:phd,
    author = "Jean Hennebert",
    abstract = "In this thesis, we are concerned with the two fields of automatic speech recognition (ASR) and automatic speaker recognition (ASkR) in telephony. More precisely, we are interested in systems based on hidden Markov models (HMMs) in which artificial neural networks (ANNs) are used in place of more classical tools. This work is dedicated to the analysis of three approaches. The first one, mainly original, concerns the use of Self-Organizing Maps in discrete HMMs for isolated word speech recognition. The second approach concerns continuous hybrid HMM/ANN systems, extensively studied in previous research work. The system is not original in its form but its analysis permitted to bring a new theoretical framework and to introduce some extensions regarding the way the system is trained. The last part concerns the implementation of a new ANN segmental approach for text-independent speaker verification.",
    address = "Lausanne",
    doi = "10.5075/epfl-thesis-1860",
    keywords = "ANN, HMM, Artificial Neural Networks, Hidden Markov Models, Speech Recognition",
    month = "October",
    note = "http://library.epfl.ch/theses/?nr=1860
    Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    publisher = "EPFL",
    school = "EPFL",
    title = "{H}idden {M}arkov models and artificial neural networks for speech and speaker recognition",
    type = "PhD Thesis",
    Pdf = "http://www.hennebert.org/download/publications/thesis-1998-hidden-markov-models-artificial-neural-networks-speech-speaker-recognition.pdf",
    year = "1998",
    }

    In this thesis, we are concerned with the two fields of automatic speech recognition (ASR) and automatic speaker recognition (ASkR) in telephony. More precisely, we are interested in systems based on hidden Markov models (HMMs) in which artificial neural networks (ANNs) are used in place of more classical tools. This work is dedicated to the analysis of three approaches. The first one, mainly original, concerns the use of Self-Organizing Maps in discrete HMMs for isolated word speech recognition. The second approach concerns continuous hybrid HMM/ANN systems, extensively studied in previous research work. The system is not original in its form but its analysis permitted to bring a new theoretical framework and to introduce some extensions regarding the way the system is trained. The last part concerns the implementation of a new ANN segmental approach for text-independent speaker verification.
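    The central mechanism of hybrid HMM/ANN systems can be shown in a few lines: the ANN outputs local posteriors P(q|x), which are divided by the state priors to obtain scaled likelihoods used as HMM emission scores. A minimal numeric sketch with made-up numbers (not taken from the thesis):

    import numpy as np

    posteriors = np.array([[0.7, 0.2, 0.1],    # ANN outputs P(q | x_t), one row per frame
                           [0.1, 0.8, 0.1]])
    priors = np.array([0.5, 0.3, 0.2])         # P(q) estimated from training counts

    scaled = posteriors / priors               # proportional to p(x_t | q)
    log_emissions = np.log(scaled)             # plug into Viterbi or forward-backward
    print(log_emissions)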

  • [PDF] J. Hennebert and D. Petrovska, "Phoneme Based Text-Prompted Speaker Verification with Multi-Layer Perceptrons," in Speaker Recognition and its Commercial and Forensic Applications (RLA2C), Avignon, France, 1998, pp. 55-58.
    [Bibtex] [Abstract]
    @conference{henn98:rla2c,
    author = "Jean Hennebert and Dijana Petrovska",
    abstract = "Results presented in this paper are obtained in the framework of a text-prompted speaker verification system using Hidden Markov Models (HMMs) and Multi Layer Perceptrons (MLPs). The aims of the study described here are (1) to assess the relative speaker discriminant properties of phonemes with different temporal frame-to-frame context at the input of the MLP's and (2) to study the influence of two sampling techniques of the acoustic vectors while training the MLP's.",
    address = " ",
    booktitle = "Speaker Recognition and its Commercial and Forensic Applications (RLA2C), Avignon, France",
    crossref = " ",
    editor = " ",
    keywords = "Speaker Verification; Speech Processing; MLP",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "55-58",
    publisher = " ",
    series = " ",
    title = "{P}honeme {B}ased {T}ext-{P}rompted {S}peaker {V}erification with {M}ulti-{L}ayer {P}erceptrons",
    Pdf = "http://www.hennebert.org/download/publications/rla2c-1998-phoneme-based-text-prompted-speaker-verification-mlp.pdf",
    volume = " ",
    year = "1998",
    }

    Results presented in this paper are obtained in the framework of a text-prompted speaker verification system using Hidden Markov Models (HMMs) and Multi Layer Perceptrons (MLPs). The aims of the study described here are (1) to assess the relative speaker discriminant properties of phonemes with different temporal frame-to-frame context at the input of the MLPs and (2) to study the influence of two sampling techniques of the acoustic vectors while training the MLPs.
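    One plausible reading of the "sampling techniques" studied above is balancing client and world frames during MLP training so the network does not simply learn the class prior; a hedged sketch with placeholder data (the paper's exact schemes may differ):

    import numpy as np

    rng = np.random.default_rng(0)
    client_frames = rng.normal(0.5, 1.0, size=(300, 12))    # few client frames
    world_frames = rng.normal(0.0, 1.0, size=(5000, 12))    # many world frames

    n = len(client_frames)
    idx = rng.choice(len(world_frames), size=n, replace=False)  # downsample world data
    X = np.vstack([client_frames, world_frames[idx]])
    y = np.concatenate([np.ones(n), np.zeros(n)])               # balanced 0/1 targets
    order = rng.permutation(len(X))
    X, y = X[order], y[order]
    print(X.shape, y.mean())    # y.mean() == 0.5: equal client/world proportions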

  • P. Kuonen, F. Guidec, and P. Calegari, "Multilevel Parallelism applied to the optimization of mobile networks," Proceedings of the High-Performance Computing (HPC'98), pp. 277-282, 1998.
    [Bibtex]
    @article{Kuonen:303,
    Author = {Pierre Kuonen and F. Guidec and P. Calegari},
    Journal = {Proceedings of the High-Performance Computing (HPC'98)},
    Month = {apr},
    Pages = {277-282},
    Title = {Multilevel Parallelism applied to the optimization of mobile networks},
    Year = {1998}}
  • R. Menolascino, P. J. Cullen, P. Demestichas, S. Josselin, P. Kuonen, Y. Markoulidakis, M. Pizarroso, and D. Zeghlache, "A Realistic UMTS Planning Exercise," 3rd ACTS Mobile Communication SUMMIT'98, vol. 1, pp. 157-162, 1998.
    [Bibtex]
    @article{Menolascino:301,
    Author = {R. Menolascino and P.J. Cullen and P. Demestichas and S. Josselin and Pierre Kuonen and Y. Markoulidakis and M. Pizarroso and D. Zeghlache},
    Journal = {3rd ACTS Mobile Communication SUMMIT'98},
    Month = {jun},
    Pages = {157-162},
    Title = {A Realistic UMTS Planning Exercise},
    Volume = {1},
    Year = {1998}}
  • [PDF] D. Petrovska, J. Hennebert, H. Melin, and D. Genoud, "POLYCOST : A Telephone-Speech Database for Speaker Recognition," in Speaker Recognition and its Commercial and Forensic Applications (RLA2C), Avignon, France, 1998, pp. 211-214.
    [Bibtex] [Abstract]
    @conference{petr98:rla2c,
    author = "Dijana Petrovska and Jean Hennebert and Hakan Melin and Dominique Genoud",
    abstract = "This article presents an overview of the POLYCOST data-base dedicated to speaker recognition applications over the telephone network. The main characteristics of this data-base are: large mixed speech corpus size ($>$ 100 speakers), English spoken by foreigners, mainly digits with some free speech, collected through international telephone lines, and more than eight sessions per speaker.",
    address = " ",
    booktitle = "Speaker Recognition and its Commercial and Forensic Applications (RLA2C), Avignon, France",
    crossref = " ",
    editor = " ",
    keywords = "Speaker Verification; Speech Processing",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "211-214",
    publisher = " ",
    series = " ",
    title = "{POLYCOST} : {A} {T}elephone-{S}peech {D}atabase for {S}peaker {R}ecognition",
    Pdf = "http://www.hennebert.org/download/publications/rla2c-1998-polycost-telephone-speech-database-speaker-recognition.pdf",
    volume = " ",
    year = "1998",
    }

    This article presents an overview of the POLYCOST database dedicated to speaker recognition applications over the telephone network. The main characteristics of this database are: large mixed speech corpus size (> 100 speakers), English spoken by foreigners, mainly digits with some free speech, collected through international telephone lines, and more than eight sessions per speaker.

  • [PDF] D. Petrovska, J. Hennebert, J. Cernocky, and G. Chollet, "Text-Independent Speaker Verification Using Automatically Labelled Acoustic Segments," in International Conference on Spoken Language Processing (ICSLP 98), Sydney, Australia, 1998, pp. 536-539.
    [Bibtex] [Abstract]
    @conference{petr98:icslp,
    author = "Dijana Petrovska and Jean Hennebert and Jan Cernocky and G{\'e}rard Chollet",
    abstract = "Most of text-independent speaker verification techniques are based on modelling the global probability distribution function (pdf) of speakers in the acoustic vector space. Our paper presents an alternative to this approach with a class-dependent verification system using automatically determined segmental units. Segments are found with temporal decomposition and labelled through unsupervised clustering. The core of the system is based on a set of multi-layer perceptrons (MLP) trained to discriminate between client and an independent set of world speakers. Each MLP is dedicated to work with data segments that were previously selected as belonging to a particular class. The last step of the system is a recombination of MLP scores to take the verification decision. Issues and potential advantages of the segmental approach are presented. Performances of global and segmental approaches are reported on the NIST'98 data (250 female and 250 male speakers), showing promising results for the proposed new segmental approach. Comparison with state of the art system, based on Gaussian Mixture Modelling is also included.",
    address = " ",
    booktitle = "International Conference on Spoken Language Processing (ICSLP 98), Sidney, Australia",
    crossref = " ",
    editor = " ",
    keywords = "Speaker Verification; Speech Processing",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "536-539",
    publisher = " ",
    series = " ",
    title = "{T}ext-{I}ndependent {S}peaker {V}erification {U}sing {A}utomatically {L}abelled {A}coustic {S}egments",
    Pdf = "http://www.hennebert.org/download/publications/icslp-1998-test-independent-speaker-verification-automatically-labelled-acoustic-segments.pdf",
    volume = " ",
    year = "1998",
    }

    Most text-independent speaker verification techniques are based on modelling the global probability distribution function (pdf) of speakers in the acoustic vector space. Our paper presents an alternative to this approach with a class-dependent verification system using automatically determined segmental units. Segments are found with temporal decomposition and labelled through unsupervised clustering. The core of the system is based on a set of multi-layer perceptrons (MLP) trained to discriminate between a client and an independent set of world speakers. Each MLP is dedicated to work with data segments that were previously selected as belonging to a particular class. The last step of the system is a recombination of MLP scores to take the verification decision. Issues and potential advantages of the segmental approach are presented. Performances of global and segmental approaches are reported on the NIST'98 data (250 female and 250 male speakers), showing promising results for the proposed new segmental approach. A comparison with a state-of-the-art system based on Gaussian Mixture Modelling is also included.
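    The recombination step can be sketched as follows: each class-specific MLP emits per-frame client posteriors, frame log-odds are averaged within a class, and the class scores are combined with duration weights. Everything below (number of classes, scores, weighting) is illustrative, not the paper's configuration:

    import numpy as np

    # Hypothetical per-frame client posteriors from three class-specific MLPs.
    mlp_out = {0: np.array([0.9, 0.8, 0.85]),
               1: np.array([0.4, 0.5]),
               2: np.array([0.7])}

    def class_score(p):
        """Average frame-level log-odds for one acoustic class."""
        return float(np.mean(np.log(p) - np.log1p(-p)))

    weights = {c: len(p) for c, p in mlp_out.items()}           # duration weighting
    total = sum(weights.values())
    score = sum(weights[c] / total * class_score(p) for c, p in mlp_out.items())
    print(score)    # compared against a threshold to accept or reject the claim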

  • [PDF] D. Petrovska, J. Cernocky, J. Hennebert, and G. Chollet, "Text-Independent Speaker Verification Using Automatically Labelled Acoustic Segments," in Advances in Phonetics, Proc. of the International Phonetic Sciences conference, Western Washington Univ., Bellingham, 1998, pp. 129-136.
    [Bibtex] [Abstract]
    @conference{petr98:ips,
    author = "Dijana Petrovska and Jan Cernocky and Jean Hennebert and G{\'e}rard Chollet",
    abstract = "Most of text-independent speaker verification techniques are based on modelling the global probability distribution function (pdf) of speakers in the acoustic vector space. Our paper presents an alternative to this approach with a class-dependent verification system using automatically determined segmental units.",
    address = " ",
    booktitle = "Advances in Phonetics, Proc. of the International Phonetic Sciences conference, Western Washington Univ., Bellingham",
    crossref = " ",
    editor = " ",
    keywords = "Speaker Verification; Speech Processing",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "129-136",
    publisher = " ",
    series = " ",
    title = "{T}ext-{I}ndependent {S}peaker {V}erification {U}sing {A}utomatically {L}abelled {A}coustic {S}egments",
    Pdf = "http://www.hennebert.org/download/publications/ips-1998-speaker-verification-automatically-labelled-acoustic-segments.pdf",
    volume = " ",
    year = "1998",
    }

    Most text-independent speaker verification techniques are based on modelling the global probability distribution function (pdf) of speakers in the acoustic vector space. Our paper presents an alternative to this approach with a class-dependent verification system using automatically determined segmental units.

  • [PDF] [DOI] D. Petrovska and J. Hennebert, "Text-Prompted Speaker Verification Experiments with Phoneme Specific MLP's," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 98), Seattle, USA, 1998, pp. 777-780.
    [Bibtex] [Abstract]
    @conference{petr98:icassp,
    author = "Dijana Petrovska and Jean Hennebert",
    abstract = "The aims of the study described in this paper are (1) to assess the relative speaker discriminant properties of phonemes and (2) to investigate the importance of the temporal frame-to-frame information for speaker modelling in the framework of a text-prompted speaker verification system using Hidden Markov Models (HMMs) and Multi Layer Perceptrons (MLPs). It is khown that, with similar experimental conditions, nasals, fricatives and vowels convey more speaker specific informations than plosives and liquids. Regarding the influence of the frame-to-frame temporal information, significant improvements are reported from the inclusion of several acoustic frames at the input of the MLPs. Results tend also to show that each phoneme has its optimal MLP context size giving the best Equal Error Rate (EER).",
    address = " ",
    booktitle = "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 98), Seattle, USA",
    crossref = " ",
    doi = "10.1109/ICASSP.1998.675380",
    editor = " ",
    isbn = "0780344286",
    issn = "1520-6149",
    keywords = "Speaker Verification; Speech Processing; MLP",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "777-780",
    publisher = " ",
    series = " ",
    title = "{T}ext-{P}rompted {S}peaker {V}erification {E}xperiments with {P}honeme {S}pecific {MLP}'s",
    Pdf = "http://www.hennebert.org/download/publications/icassp-1998-text-prompted-speaker-verification-phoneme-specific-mlp.pdf",
    volume = " ",
    year = "1998",
    }

    The aims of the study described in this paper are (1) to assess the relative speaker discriminant properties of phonemes and (2) to investigate the importance of the temporal frame-to-frame information for speaker modelling in the framework of a text-prompted speaker verification system using Hidden Markov Models (HMMs) and Multi Layer Perceptrons (MLPs). It is known that, under similar experimental conditions, nasals, fricatives and vowels convey more speaker-specific information than plosives and liquids. Regarding the influence of the frame-to-frame temporal information, significant improvements are reported from the inclusion of several acoustic frames at the input of the MLPs. Results also tend to show that each phoneme has its optimal MLP context size giving the best Equal Error Rate (EER).
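    The "MLP context" discussed above is frame stacking: each acoustic frame is concatenated with its k left and k right neighbours before entering the network. A small self-contained sketch (frame values and k are illustrative):

    import numpy as np

    def stack_context(frames, k):
        """Augment each frame with k left and k right context frames."""
        T, d = frames.shape
        padded = np.pad(frames, ((k, k), (0, 0)), mode="edge")   # repeat edge frames
        return np.stack([padded[t:t + 2 * k + 1].ravel() for t in range(T)])

    frames = np.arange(20.0).reshape(10, 2)   # 10 frames, 2 coefficients each
    X = stack_context(frames, k=2)            # each row: 5 frames x 2 coefficients
    print(X.shape)                            # (10, 10)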

  • R. Boichat, S. Josselin, P. Kuonen, P. Seite, and D. Wagner, "Parallel Simulation of Dynamic Channel Assignment," Proceedings of the 47th Vehicular Technology Conference (IEEE-VTC'97), vol. 3, pp. 1475-1478, 1997.
    [Bibtex]
    @article{Boichat:309,
    Author = {R. Boichat and S. Josselin and Pierre Kuonen and P. Seite and D. Wagner},
    Journal = {Proceedings of the 47th Vehicular Technology Conference (IEEE-VTC'97)},
    Month = {may},
    Pages = {1475-1478},
    Title = {Parallel Simulation of Dynamic Channel Assignment},
    Volume = {3},
    Year = {1997}}
  • P. Calegari, P. Kuonen, F. Guidec, and D. Wagner, "Genetic approach to radio network optimization for mobile systems," Proceedings of the 47th Vehicular Technology Conference (IEEE-VTC'97), vol. 2, pp. 755-759, 1997.
    [Bibtex]
    @article{Calegari:307,
    Author = {P. Calegari and Pierre Kuonen and F. Guidec and D. Wagner},
    Journal = {Proceedings of the 47th Vehicular Technology Conference (IEEE-VTC'97)},
    Month = {may},
    Pages = {755-759},
    Title = {Genetic approach to radio network optimization for mobile systems},
    Volume = {2},
    Year = {1997}}
  • P. Calegari, F. Guidec, and P. Kuonen, "Urban Radio Network Planning for Mobile Phones," EPFL-Supercomputing Review, pp. 4-10, 1997.
    [Bibtex]
    @article{Calegari:304,
    Author = {P. Calegari and F. Guidec and Pierre Kuonen},
    Journal = {EPFL-Supercomputing Review},
    Month = {nov},
    Pages = {4-10},
    Title = {Urban Radio Network Planning for Mobile Phones},
    Year = {1997}}
  • P. Calégari, F. Guidec, P. Kuonen, and D. Kobler, "Parallel Island-Based Genetic Algorithm for Radio Network Design," Journal of Parallel and Distributed Computing (JPDC): special issue on Parallel Evolutionary Computing, vol. 47, pp. 86-90, 1997.
    [Bibtex]
    @article{Calegari:1132,
    Author = {Patrice Cal{\'e}gari and Fr{\'e}d{\'e}ric Guidec and Pierre Kuonen and Daniel Kobler},
    Journal = {Journal of Parallel and Distributed Computing (JPDC): special issue on Parallel Evolutionary Computing},
    Month = {nov},
    Pages = {86-90},
    Title = {Parallel Island-Based Genetic Algorithm for Radio Network Design},
    Volume = {47},
    Year = {1997}}
  • B. Chamaret, S. Josselin, P. Kuonen, M. Pizarroso, B. Salas-Manzanedo, S. Ubeda, and D. Wagner, "Radio Network Optimization with Maximum Independent Set Search," Proceedings of the 47th Vehicular Technology Conference (IEEE-VTC'97), vol. 3, pp. 770-774, 1997.
    [Bibtex]
    @article{Chamaret:308,
    Author = {B. Chamaret and S. Josselin and Pierre Kuonen and M. Pizarroso and B. Salas-Manzanedo and S. Ubeda and D. Wagner},
    Journal = {Proceedings of the 47th Vehicular Technology Conference (IEEE-VTC'97)},
    Month = {may},
    Pages = {770-774},
    Title = {Radio Network Optimization with Maximum Independent Set Search},
    Volume = {3},
    Year = {1997}}
  • P. J. Cullen, S. Josselin, P. Kuonen, M. Pizarroso, and D. Wagner, "Coverage and Interference Prediction and Radio Planning Optimisation," ACTS Mobile Communication SUMMIT'97, vol. 2, pp. 557-562, 1997.
    [Bibtex]
    @article{Cullen:305,
    Author = {P.J. Cullen and S. Josselin and Pierre Kuonen and M. Pizarroso and D. Wagner},
    Journal = {ACTS Mobile Communication SUMMIT'97},
    Month = {oct},
    Pages = {557-562},
    Title = {Coverage and Interference Prediction and Radio Planning Optimisation},
    Volume = {2},
    Year = {1997}}
  • F. Guidec, P. Kuonen, and P. Calégari, "Radio Wave Propagation Simulation on the Cray T3D," Parallel Computing: Fundamentals, Applications and New Directions, 1997.
    [Bibtex]
    @article{Guidec:329,
    Author = {F. Guidec and Pierre Kuonen and P. Cal{\'e}gari},
    Journal = {Parallel Computing: Fundamentals, Applications and New Directions},
    Month = {sep},
    Title = {Radio Wave Propagation Simulation on the Cray T3D},
    Year = {1997}}
  • F. Guidec, P. Calégari, and P. Kuonen, "Parallel Irregular Software for Wave Propagation Simulation," High-Performance Computing and Networking. HPCN Europe'97, vol. 1225, pp. 84-94, 1997.
    [Bibtex]
    @article{Guidec:310,
    Author = {F. Guidec and P. Cal{\'e}gari and Pierre Kuonen},
    Journal = {High-Performance Computing and Networking. HPCN Europe'97},
    Month = {apr},
    Pages = {84-94},
    Title = {Parallel Irregular Software for Wave Propagation Simulation},
    Volume = {1225},
    Year = {1997}}
  • F. Guidec, P. Calegari, and P. Kuonen, "Object-Oriented Parallel Software for Radio Wave Propagation Simulation in Urban Environment," Proceeding of the third International Euro-Par Conference (EuroPar'97), vol. 1300, pp. 832-839, 1997.
    [Bibtex]
    @article{Guidec:306,
    Author = {F. Guidec and P. Calegari and Pierre Kuonen},
    Journal = {Proceedings of the Third International Euro-Par Conference (EuroPar'97)},
    Month = {sep},
    Pages = {832-839},
    Title = {Object-Oriented Parallel Software for Radio Wave Propagation Simulation in Urban Environment},
    Volume = {1300},
    Year = {1997}}
  • [PDF] J. Hennebert, C. Ris, H. Bourlard, S. Renals, and N. Morgan, "Estimation of Global Posteriors and Forward-Backward Training of Hybrid HMM/ANN Systems," in European Conference on Speech Communication and Technology (EUROSPEECH 97), Rhodes, Greece, 1997, pp. 1951-1954.
    [Bibtex] [Abstract]
    @conference{henn97:euro,
    author = "Jean Hennebert and Christophe Ris and Herv{\'e} Bourlard and Steve Renals and Nelson Morgan",
    abstract = "The results of our research presented in this paper is two-fold. First, an estimation of global posteriors is formalized in the framework of hybrid HMM/ANN systems. It is shown that hybrid HMM/ANN systems, in which the ANN part estimates local posteriors, can be used to modelize global model posteriors. This formalization provides us with a clear theory in which both REMAP and ``classical'' Viterbi trained hybrid systems are unified. Second, a new forward-backward training of hybrid HMM/ANN systems is derived from the previous formulation. Comparisons of performance between Viterbi and forward-backward hybrid systems are presented and discussed.",
    address = " ",
    booktitle = "European Conference on Speech Communication and Technology (EUROSPEECH 97), Rhodes, Greece",
    crossref = " ",
    editor = " ",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "1951-1954",
    publisher = " ",
    series = " ",
    title = "{E}stimation of {G}lobal {P}osteriors and {F}orward-{B}ackward {T}raining of {H}ybrid {HMM}/{ANN} {S}ystems",
    Pdf = "http://www.hennebert.org/download/publications/eurospeech-1997-remap-estimation-global-posteriors-forward-backward-hybrid-hmm-ann.pdf",
    volume = " ",
    year = "1997",
    }

    The results of the research presented in this paper are two-fold. First, an estimation of global posteriors is formalized in the framework of hybrid HMM/ANN systems. It is shown that hybrid HMM/ANN systems, in which the ANN part estimates local posteriors, can be used to model global posteriors. This formalization provides us with a clear theory in which both REMAP and "classical" Viterbi trained hybrid systems are unified. Second, a new forward-backward training of hybrid HMM/ANN systems is derived from the previous formulation. Comparisons of performance between Viterbi and forward-backward hybrid systems are presented and discussed.
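    In the usual hybrid notation (assumed here; the paper's own symbols may differ), the two ingredients read as follows: the ANN's local posteriors are rescaled into emission scores, and forward-backward training replaces hard Viterbi targets with state-occupancy posteriors.

    % Scaled likelihood: the ANN estimates P(q_k | x_n); dividing by the
    % class prior gives an emission score proportional to p(x_n | q_k).
    \[ p(x_n \mid q_k) \;\propto\; \frac{P(q_k \mid x_n)}{P(q_k)} \]
    % Soft ANN targets from the alpha/beta (forward-backward) recursions:
    \[ \gamma_n(k) \;=\; \frac{\alpha_n(k)\,\beta_n(k)}{\sum_{j} \alpha_n(j)\,\beta_n(j)} \]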

  • P. Calegari, F. Guidec, P. Kuonen, B. Chamaret, S. Josselin, D. Wagner, and M. Pizarosso, "Radio network planning with combinatorial optimization algorithms," ACTS Mobile Communications Summit 96 Conference, 1996.
    [Bibtex]
    @article{Calegari:312,
    Author = {P. Calegari and F. Guidec and Pierre Kuonen and B. Chamaret and S. Josselin and D. Wagner and M. Pizarosso},
    Journal = {ACTS Mobile Communications Summit 96 Conference},
    Month = {nov},
    Title = {Radio network planning with combinatorial optimization algorithms},
    Year = {1996}}
  • F. Guidec, P. Kuonen, and P. Calegari, "ParFlow++: a C++ Parallel Application for Wave Propagation Simulation," SPEEDUP journal, vol. 10, pp. 68-73, 1996.
    [Bibtex]
    @article{Guidec:313,
    Author = {F. Guidec and Pierre Kuonen and P. Calegari},
    Journal = {SPEEDUP journal},
    Month = {dec},
    Pages = {68-73},
    Title = {ParFlow++: a C++ Parallel Application for Wave Propagation Simulation},
    Volume = {10},
    Year = {1996}}
  • [PDF] J. Hennebert and D. Petrovska, "POST: Parallel Object-Oriented Speech Toolkit," in International Conference on Spoken Language Processing (ICSLP 96), Philadelphia, USA, 1996, pp. 1966-1969.
    [Bibtex] [Abstract]
    @conference{henn96:icslp,
    author = "Jean Hennebert and Dijana Petrovska",
    abstract = "We give a short overview of POST, a parallel speech toolkit that is distributed freeware to academic institutions. The underlying idea of POST is that large computational problems, like the ones involved in Automatic Speech Recognition (ASR), can be solved more cost effectively by using the aggregate power and memory of many computers. In its current version (January 96) and amongst other things, POST can perform simple feature extraction, training and testing of word and subword Hidden Markov Models (HMMs) with discrete and multigaussian statistical modelling. In this parer, the implementation of the parallelism is discussed and an evaluation of the performances on a telephone database is presented. A short introduction to Parallel Virtual Machine (PVM), the library through which the parallelism is achieved, is also given.",
    address = " ",
    booktitle = "International Conference on Spoken Language Processing (ICSLP 96), Philadelphia, USA",
    crossref = " ",
    editor = " ",
    keywords = "ASR; Speech Recognition; Toolkit; Parallelism",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "1966-1969",
    publisher = " ",
    series = " ",
    title = "{POST}: {P}arallel {O}bject-{O}riented {S}peech {T}oolkit",
    Pdf = "http://www.hennebert.org/download/publications/icslp-1996-post_parallel_object_oriented_speech_toolkit.pdf",
    volume = " ",
    year = "1996",
    }

    We give a short overview of POST, a parallel speech toolkit that is distributed as freeware to academic institutions. The underlying idea of POST is that large computational problems, like the ones involved in Automatic Speech Recognition (ASR), can be solved more cost effectively by using the aggregate power and memory of many computers. In its current version (January 96) and amongst other things, POST can perform simple feature extraction, training and testing of word and subword Hidden Markov Models (HMMs) with discrete and multigaussian statistical modelling. In this paper, the implementation of the parallelism is discussed and an evaluation of the performances on a telephone database is presented. A short introduction to Parallel Virtual Machine (PVM), the library through which the parallelism is achieved, is also given.
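    POST obtained its speed-up by distributing training over PVM hosts. As a loose modern analogue of that master/worker pattern (not the toolkit's actual API), this sketch splits frames across Python processes and pools their sufficient statistics for a Gaussian update:

    from multiprocessing import Pool
    import numpy as np

    def accumulate(chunk):
        """Worker pass: sufficient statistics for one data chunk."""
        return chunk.sum(axis=0), (chunk ** 2).sum(axis=0), len(chunk)

    if __name__ == "__main__":
        data = np.random.default_rng(0).normal(size=(10000, 12))
        with Pool(4) as pool:
            stats = pool.map(accumulate, np.array_split(data, 4))
        s1 = sum(s[0] for s in stats)
        s2 = sum(s[1] for s in stats)
        n = sum(s[2] for s in stats)
        mean = s1 / n
        var = s2 / n - mean ** 2        # pooled mean/variance, as a master would compute
        print(mean.shape, var.shape)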

  • P. Kuonen, S. Josselin, and D. Wagner, "Parallel Computing of Radio Coverage," Proceedings of the 46th Vehicular Technology Conference (IEEE-VTC'96), vol. 3, pp. 1438-1442, 1996.
    [Bibtex]
    @article{Kuonen:316,
    Author = {Pierre Kuonen and S. Josselin and D. Wagner},
    Journal = {Proceedings of the 46th Vehicular Technology Conference (IEEE-VTC'96)},
    Month = {may},
    Pages = {1438-1442},
    Title = {Parallel Computing of Radio Coverage},
    Volume = {3},
    Year = {1996}}
  • P. Kuonen, S. Ubéda, and J. Zerovnik, "Graph Theory Applied to Mobile Network Optimisation," Electronical Review, vol. 63, pp. 65-144, 1996.
    [Bibtex] [Abstract]
    @article{Kuonen:315,
    Abstract = {Applications of graph theory to some problems of mobile network optimisation are considered. This report gives simplified models for location of transmitters and for channel allocation in terms of graph theory. Two different approaches to the problem of location of transmitters are compared.},
    Author = {Pierre Kuonen and S. Ub{\'e}da and J. Zerovnik},
    Journal = {Electronical Review},
    Month = {jan},
    Pages = {65-144},
    Title = {Graph Theory Applied to Mobile Network Optimisation},
    Volume = {63},
    Year = {1996}}

    Applications of graph theory to some problems of mobile network optimisation are considered. This report gives simplified models for location of transmitters and for channel allocation in terms of graph theory. Two different approaches to the problem of location of transmitters are compared.
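    The channel-allocation model alluded to above is classically graph colouring: transmitters are vertices, interfering pairs are edges, channels are colours. A minimal greedy colouring on a made-up interference graph (not data from the paper):

    interference = {                 # hypothetical interference graph
        "T1": {"T2", "T3"},
        "T2": {"T1", "T3"},
        "T3": {"T1", "T2", "T4"},
        "T4": {"T3"},
    }

    channel = {}
    # Colour highest-degree transmitters first, taking the lowest free channel.
    for tx in sorted(interference, key=lambda t: -len(interference[t])):
        used = {channel[nb] for nb in interference[tx] if nb in channel}
        channel[tx] = next(c for c in range(len(interference)) if c not in used)

    print(channel)                   # e.g. {'T3': 0, 'T1': 1, 'T2': 2, 'T4': 1}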

  • P. Kuonen, P. Calegari, and F. Guidec, "A parallel genetic approach to transceiver placement optimisation," Proceedings of SIPAR96, 1996.
    [Bibtex]
    @article{Kuonen:314,
    Author = {Pierre Kuonen and P. Calegari and F. Guidec},
    Journal = {Proceedings of SIPAR96},
    Month = {oct},
    Title = {A parallel genetic approach to transceiver placement optimisation},
    Year = {1996}}
  • D. Petrovska, J. Hennebert, D. Genoud, and G. Chollet, "Semi-Automatic HMM-based annotation of the POLYCOST database," in COST 250 workshop on Application of Speaker Recognition Techniques in Telephony, Vigo, Spain, 1996, pp. 23-26.
    [Bibtex]
    @conference{petr96:cost,
    author = "Dijana Petrovska and Jean Hennebert and Dominique Genoud and G{\'e}rard Chollet",
    address = " ",
    booktitle = "COST 250 workshop on Application of Speaker Recognition Techniques in Telephony, Vigo, Spain",
    crossref = " ",
    editor = " ",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "23-26",
    publisher = " ",
    series = " ",
    title = "{S}emi-{A}utomatic {HMM}-based annotation of the {POLYCOST} database",
    volume = " ",
    year = "1996",
    }
  • S. Josselin, D. Wagner, P. Kuonen, and S. Ubéda, "Parallel Network Coverage Simulation Application," Second European PVM Users' Group Meeting (EuroPVM'95), 1995.
    [Bibtex]
    @article{Josselin:317,
    Author = {S. Josselin and D. Wagner and Pierre Kuonen and S. Ub{\'e}da},
    Journal = {Second European PVM Users' Group Meeting (EuroPVM'95)},
    Month = {sep},
    Title = {Parallel Network Coverage Simulation Application},
    Year = {1995}}
  • P. Kuonen, "The K-Ring," Proceeding of the European Research Seminar on Advances in Distributed Systems (ERSADS'95), 1995.
    [Bibtex]
    @article{Kuonen:324,
    Author = {Pierre Kuonen},
    Journal = {Proceedings of the European Research Seminar on Advances in Distributed Systems (ERSADS'95)},
    Month = {apr},
    Title = {The K-Ring},
    Year = {1995}}
  • N. Kühne, D. Komplita, P. Kuonen, and Y. Martin, "SEQUOIA: Une contribution à la gestion de l'informatique pour la psychogériatrie de demain," Revue Médicale de la Suisse Romande, 1995.
    [Bibtex]
    @article{Kuehne:325,
    Author = {N. K{\"u}hne and D. Komplita and Pierre Kuonen and Y. Martin},
    Journal = {Revue M{\'e}dicale de la Suisse Romande},
    Month = {jan},
    Title = {SEQUOIA: Une contribution {\`a} la gestion de l'informatique pour la psychog{\'e}riatrie de demain},
    Year = {1995}}
  • N. Kühne, D. Komplita, P. Kuonen, Y. Martin, F. Ramseier, A. -C. Delacrétaz, and P. Lemay, "SEQUOIA: Système d'Enregistrement, de QUantification et d'Organisation de l'Information sur l'Activité," 6ème Congrès Annuel de l'Association Latine pour l'Analyse des Systèmes de Santé (CALASS 95), 1995.
    [Bibtex]
    @article{Kuehne:318,
    Author = {N. K{\"u}hne and D. Komplita and Pierre Kuonen and Y. Martin and F. Ramseier and A.-C. Delacr{\'e}taz and P. Lemay},
    Journal = {6{\`e}me Congr{\`e}s Annuel de l'Association Latine pour l'Analyse des Syst{\`e}mes de Sant{\'e} (CALASS 95)},
    Month = {may},
    Title = {SEQUOIA: Syst{\`e}me d'Enregistrement, de QUantification et d'Organisation de l'Information sur l'Activit{\'e}},
    Year = {1995}}
  • [PDF] F. Simillion, J. Hennebert, and M. Wentland, "From Prediction to Classification : The Application of Pattern Recognition Theory to Stock Price Movements Analysis," in Second Congrès International de Gestion et d'Economie Floue (2nd CIGEF), 1995, pp. 1-15.
    [Bibtex] [Abstract]
    @conference{simi95:sigef,
    author = "Fabian Simillion and Jean Hennebert and Maria Wentland",
    abstract = "The limited success of most prediction systems has proved that future stock prices are very difficult to predict. The purpose of this paper is to show that future prices do not have to be known to make successful investments and that anticipating movements (increases or decreases) of the price can be sufficient. Probabilistic classification systems based on pattern recognition theory appear to be a good way to reach this objective. Moreover, they include some other advantages, principally in terms of risk management. Results show satisfactory classification hit rates but a rather poor translation into financial gains. This paper tries to identify causes of this problem and proposes some ideas of solution. ",
    address = " ",
    booktitle = "Second Congr{\`e}s International de Gestion et d'Economie Floue (2nd CIGEF)",
    crossref = " ",
    editor = " ",
    keywords = "MLP, Parzen, Financial Prediction, Pattern Matching, Machine Learning",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "1-15",
    publisher = " ",
    series = " ",
    title = "{F}rom {P}rediction to {C}lassification : {T}he {A}pplication of {P}attern {R}ecognition {T}heory to {S}tock {P}rice {M}ovements {A}nalysis",
    Pdf = "http://www.hennebert.org/download/publications/cigef-1995-from-prediction-to-classification-application-pattern-recognition-theory-to-stock-price-movements-analysis.pdf",
    volume = " ",
    year = "1995",
    }

    The limited success of most prediction systems has proved that future stock prices are very difficult to predict. The purpose of this paper is to show that future prices do not have to be known to make successful investments and that anticipating movements (increases or decreases) of the price can be sufficient. Probabilistic classification systems based on pattern recognition theory appear to be a good way to reach this objective. Moreover, they include some other advantages, principally in terms of risk management. Results show satisfactory classification hit rates but a rather poor translation into financial gains. This paper tries to identify the causes of this problem and proposes some possible solutions.
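    The movement-classification framing is easy to state in code: predict the sign of the next return from a window of past returns rather than the price itself. A toy sketch on synthetic data (the paper's features, models and markets differ):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    returns = rng.normal(0, 0.01, size=1000)    # synthetic daily returns
    k = 5
    X = np.stack([returns[t - k:t] for t in range(k, len(returns))])
    y = (returns[k:] > 0).astype(int)           # target: next movement up (1) or down (0)

    clf = LogisticRegression().fit(X[:800], y[:800])
    print("hit rate:", clf.score(X[800:], y[800:]))   # ~0.5 on pure noise, by design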

  • [PDF] V. Fontaine, J. Hennebert, and H. Leich, "Influence of Vector Quantization on Isolated Word Recognition," in European Signal Processing Conference (EUSIPCO 94), Edinburgh, UK, 1994, pp. 115-118.
    [Bibtex] [Abstract]
    @conference{font94:eusip,
    author = "Vincent Fontaine and Jean Hennebert and Henri Leich",
    abstract = "Vector Quantization can be considered as a data compression technique. In the last few years, vector quantization has been increasingly applied to reduce problem complexity like pattern recognition. In speech recognition, discrete systems are developed to build up real-time systems. This paper presents original results by comparing the K-Means and the Kohonen approaches on the same recognition platform. Influence of some quantization parameters is also investigated. It can be observed through the results presented in this paper that the quantization quality has a significant influence on the recognition rates. Surprisingly, the Kohonen approach leads to better recognition results despite its poor distortion performance.",
    address = " ",
    booktitle = "European Signal Processing Conference (EUSIPCO 94), Edinburgh, UK",
    crossref = " ",
    editor = " ",
    isbn = "3200001658",
    keywords = "Speech Recognition; ASR, Vector Quantization; HMM",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "115-118",
    publisher = " SuviSoft Oy Ltd.",
    series = " ",
    title = "{I}nfluence of {V}ector {Q}uantization on {I}solated {W}ord {R}ecognition",
    Pdf = "http://www.hennebert.org/download/publications/eusipco-1994-influence-vector-quantization-isolated-word-recognition.pdf",
    volume = " ",
    year = "1994",
    }

    Vector Quantization can be considered as a data compression technique. In the last few years, vector quantization has been increasingly applied to reduce the complexity of problems such as pattern recognition. In speech recognition, discrete systems are developed to build real-time systems. This paper presents original results comparing the K-Means and the Kohonen approaches on the same recognition platform. The influence of some quantization parameters is also investigated. It can be observed through the results presented in this paper that the quantization quality has a significant influence on the recognition rates. Surprisingly, the Kohonen approach leads to better recognition results despite its poor distortion performance.
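    The vector-quantization step compared in the paper can be sketched in a few lines: fit a codebook, then map each acoustic vector to its nearest codeword index so a discrete HMM can consume the symbols. A K-Means version with stand-in data (codebook size and features are illustrative):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    frames = rng.normal(size=(5000, 12))           # stand-in cepstral frames

    codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(frames)
    codes = codebook.predict(frames)               # discrete symbols for a discrete HMM
    distortion = codebook.inertia_ / len(frames)   # mean quantization error
    print(codes[:10], round(distortion, 3))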

  • [PDF] J. Hennebert, M. Hasler, and H. Dedieu, "Neural Networks in Speech Recognition," in 6th Microcomputer School, invited paper, Prague, Czech Republic, 1994, pp. 23-40.
    [Bibtex] [Abstract]
    @conference{henn94:micro,
    author = "Jean Hennebert and Martin Hasler and Herv{\'e} Dedieu",
    abstract = "We review some of the Artificial Neural Network (ANN) approaches used in speech recognition. Some basic principles of neural networks are briefly described as well as their current applications and performances in speech recognition. Strenghtnesses and weaknesses of pure connectionnist networks in the particular context of the speech signal are then evoqued. The emphasis is put on the capabilities of connectionnist methods to improve the performances of the Hidden Markov Model approach (HMM). Some of the principles that govern the socalled hybrid HMM-ANN approach are then briefly explained. Some recent combinations of stochastic models and ANNs known as the Hidden Control Neural Networks are also presented.",
    address = " ",
    booktitle = "6th Microcomputer School, invited paper, Prague, Czech Republic",
    crossref = " ",
    editor = " ",
    keywords = "ANN; Artificial Neural Networks; Speech Recognition; ASR",
    month = " ",
    note = "Some of the files below are copyrighted. They are provided for your convenience, yet you may download them only if you are entitled to do so by your arrangements with the various publishers.",
    number = " ",
    organization = " ",
    pages = "23-40",
    publisher = " ",
    series = " ",
    title = "{N}eural {N}etworks in {S}peech {R}ecognition",
    Pdf = "http://www.hennebert.org/download/publications/microcomputerschool-1994-neural-networks-in-speech.pdf",
    volume = " ",
    year = "1994",
    }

    We review some of the Artificial Neural Network (ANN) approaches used in speech recognition. Some basic principles of neural networks are briefly described, as well as their current applications and performances in speech recognition. Strengths and weaknesses of pure connectionist networks in the particular context of the speech signal are then discussed. The emphasis is put on the capabilities of connectionist methods to improve the performance of the Hidden Markov Model (HMM) approach. Some of the principles that govern the so-called hybrid HMM-ANN approach are then briefly explained. Some recent combinations of stochastic models and ANNs known as Hidden Control Neural Networks are also presented.

  • D. Komplita, N. Kühne, J. Wertheimer, P. Kuonen, and Y. Martin, "Recherche multidisciplinaire SUPG-EPFL: Monitoring et gestion de la production des soins," SYSTED'94, 1994.
    [Bibtex]
    @article{Komplita:320,
    Author = {D. Komplita and N. K{\"u}hne and J. Wertheimer and Pierre Kuonen and Y. Martin},
    Journal = {SYSTED'94},
    Month = {may},
    Title = {Recherche multidisciplinaire SUPG-EPFL: Monitoring et gestion de la production des soins},
    Year = {1994}}
  • P. Kuonen, "Répartition de la charge dans les algorithmes parallèles irréguliers," EPFL-Supercomputing Review, 1994.
    [Bibtex]
    @article{Kuonen:319,
    Author = {Pierre Kuonen},
    Journal = {EPFL-Supercomputing Review},
    Month = {nov},
    Title = {R{\'e}partition de la charge dans les algorithmes parall{\`e}les irr{\'e}guliers},
    Year = {1994}}
  • P. Kuonen, "Les Systèmes Experts : Champs d'application, critères de choix," Le journal de l'intelligence artificielle, 1988.
    [Bibtex]
    @article{Kuonen:328,
    Author = {Pierre Kuonen},
    Journal = {Le journal de l'intelligence artificielle},
    Month = {oct},
    Title = {Les Syst{\`e}mes Experts : Champs d'application, crit{\`e}res de choix},
    Year = {1988}}