  1. Pierluigi Bellutti,
    Improvement of Gate Oxide Quality at the Active Region Border,
    Gate oxide quality at the border of the active region is known to be quite poor. By adopting a high-temperature (1150 °C) dry field oxidation for LOCOS, the gate oxide quality in this border region can be improved to a level that is, in terms of charge to breakdown, very close to that of the bulk active region. In this way, reliability and yield losses related to gate oxide failure at the active region border are avoided. A qualitative model is used to support the hypothesis.
  2. Edmondo Trentin,
    Learning the Width of Activations in Neural Networks,
    This report introduces a novel algorithm to learn the width of non-linear activation functions (of arbitrary analytical form) in layered networks. The algorithm is based on a steepest gradient-descent technique, and relies on the inductive proof of a theorem that involves the novel concept of the expansion function of the activation associated with a given unit of the neural net. Experimental results obtained in a speaker normalization task with a mixture of Multilayer Perceptrons show a dramatic improvement of performance with respect to standard Back-Propagation training.
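As a toy illustration of the general idea (not the report's actual algorithm; the data, the choice of tanh, and the learning rate are all invented for this sketch), one can learn the width parameter of a single tanh activation by steepest gradient descent:

```python
import numpy as np

# Toy sketch (invented setup): learn the "width" (slope) parameter beta of
# a tanh activation, y = tanh(beta * x), by steepest gradient descent on a
# squared-error criterion, so that it matches targets generated with a
# "true" width beta_true = 2.0.

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
beta_true = 2.0
t = np.tanh(beta_true * x)          # targets from the "true" width

beta = 0.5                          # initial width guess
lr = 1.0
for _ in range(2000):
    y = np.tanh(beta * x)
    # dE/dbeta for E = 0.5 * sum((y - t)^2), using d tanh(u)/du = 1 - tanh(u)^2
    grad = np.mean((y - t) * (1 - y**2) * x)
    beta -= lr * grad

print(round(beta, 2))               # approaches beta_true = 2.0
```

The same chain-rule step extends to a width parameter per unit in a layered network, which is the setting the report addresses.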
  3. Edmondo Trentin; Diego Giuliani,
    Channel-Dependent Compensation for Noisy Acoustic Data,
    One of the major problems in channel compensation for acoustic data collected over the telephone line arises from the wide variety of channels involved (whenever no fixed line is used). Attempts to perform channel compensation by way of a feature mapping between the noisy and clean acoustic spaces as a whole proved unsatisfactory. This led to the idea of channel-dependent transformations: channel typologies (classes) are identified, and feature mappings are carried out differently for each class. This work discusses experimental results obtained with a connectionist architecture, based on Simple Linear Perceptrons, to approach the problem of channel-dependent compensation. The well-known k-means clustering algorithm has been used to partition the background noise of the signals into a codebook of channel prototypes. Preliminary recognition experiments are reported, showing some properties of the proposed approach.
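A minimal sketch of the overall scheme, with invented data, shapes, and channel distortions: k-means partitions the noisy feature vectors into channel classes, and a separate linear (perceptron-like) noisy-to-clean mapping is fitted for each class:

```python
import numpy as np

# Hypothetical illustration of channel-dependent compensation: cluster
# noisy features into channel classes with k-means, then fit one affine
# noisy->clean mapping per class by least squares.  Everything here
# (dimensions, biases, noise level) is invented for the sketch.

rng = np.random.default_rng(1)
n, d, k = 300, 4, 2

clean = 0.3 * rng.normal(size=(n, d))
channel = rng.integers(0, k, size=n)            # true (unknown) channel id
biases = np.array([[1.0] * d, [-1.0] * d])      # two channel distortions
noisy = clean + biases[channel] + 0.05 * rng.normal(size=(n, d))

# --- k-means on the noisy vectors, with a far-point initialisation so
#     that both channel classes receive a center
c0 = noisy[0]
c1 = noisy[np.argmax(((noisy - c0) ** 2).sum(axis=1))]
centers = np.stack([c0, c1])
for _ in range(20):
    labels = np.argmin(((noisy[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.stack([noisy[labels == j].mean(0) for j in range(k)])

# --- one least-squares affine map per discovered channel class
err = 0.0
for j in range(k):
    X = np.hstack([noisy[labels == j], np.ones((np.sum(labels == j), 1))])
    W, *_ = np.linalg.lstsq(X, clean[labels == j], rcond=None)
    err += np.abs(X @ W - clean[labels == j]).mean()
print(round(err / k, 3))            # small residual: per-class maps fit well
```

A single global affine map could not undo both distortions at once, which is the motivation for class-conditional mappings.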
  4. Edmondo Trentin; D. Falavigna,
    An Expectation-Maximization (EM) Approach to Noise Reduction,
    This report discusses the principles and preliminary experimental results obtained with an algorithm based on Expectation-Maximization (EM) to approach the problem of noise reduction over the telephone line. Recognition experiments have been carried out using speaker-independent (SI) recognition systems based on Hidden Markov Models (HMMs). Preliminary conclusions are outlined.
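The report's own algorithm is not reproduced here; the following is a generic EM iteration on a two-component 1-D Gaussian mixture, showing the E-step/M-step alternation that such approaches build on (all data and initial values are invented):

```python
import numpy as np

# Generic EM illustration (not the report's noise-reduction algorithm):
# fit a two-component 1-D Gaussian mixture by alternating the E-step
# (posterior responsibilities) and the M-step (parameter re-estimation).

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 300)])

mu = np.array([-0.5, 0.5])          # crude initial means
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: responsibility of each component for each sample
    # (the 1/sqrt(2*pi) constant cancels in the normalization)
    p = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)

print(np.round(np.sort(mu), 1))     # means recovered near -2 and 2
```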
  5. Edmondo Trentin,
    Integrating Recurrent Neural Networks and Dynamic Programming in a Hybrid Speech Recognition System: Concepts and Preliminary Results,
    This report introduces a hybrid speech recognition system for speaker-independent (SI), continuous speech with a small vocabulary (sequences of Italian digits). The hybrid is based on parallel, state-space recurrent neural networks trained to perform a-posteriori state probability estimation for an underlying hidden Markov model (HMM) with fixed transition probabilities, given the sequence of acoustic observations. Training is accomplished in a supervised manner, relying on a prior Viterbi segmentation. Decoding is accomplished in a dynamic programming framework, i.e. via transitions along paths of the HMM, realizing a Viterbi decoding criterion. Preliminary experimental results on state-emission probability estimation and recognition of noisy signals acquired over the telephone line are presented.
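The decoding side can be sketched as a standard Viterbi pass over fixed transition probabilities with per-frame state scores, which in the hybrid would come from the recurrent networks' posterior estimates; all numbers below are invented:

```python
import numpy as np

# Viterbi decoding sketch over a 3-state left-to-right HMM with fixed
# transitions.  logB plays the role of the per-frame state scores that the
# hybrid's recurrent networks would supply; the values are made up.

logA = np.log(np.array([[0.7, 0.3, 0.0],    # left-to-right transitions
                        [0.0, 0.7, 0.3],
                        [0.0, 0.0, 1.0]]) + 1e-12)
logB = np.log(np.array([[0.8, 0.1, 0.1],    # T x S per-frame state scores
                        [0.6, 0.3, 0.1],
                        [0.1, 0.8, 0.1],
                        [0.1, 0.2, 0.7],
                        [0.1, 0.1, 0.8]]))

T, S = logB.shape
delta = np.full((T, S), -np.inf)
psi = np.zeros((T, S), dtype=int)
delta[0] = logB[0] + np.log([1.0, 1e-12, 1e-12])   # must start in state 0
for t in range(1, T):
    scores = delta[t - 1][:, None] + logA           # S x S: prev -> cur
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + logB[t]

# backtrack the best state path
path = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(psi[t][path[-1]]))
path.reverse()
print(path)                                         # -> [0, 0, 1, 2, 2]
```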
  6. Cesare Furlanello; Stefano Merler; C. Chemini; Annamaria Rizzoli; G. Nicolini; P. Bonavita,
    Metodi neuronali e statistici per l'analisi del rischio di parassiti [Neural and Statistical Methods for Parasite Risk Analysis],
  7. Dan Cristea,
    Representing and Understanding Discourse,
    The paper deals with understanding discourse. Its main goal is to serve as an introductory course on the structure of discourse and on text cohesion, of the type given by pronominal anaphora resolution, as it is seen today by many authors, but in many respects it also reflects a personal point of view. It begins with a presentation of centering theory (CT), a theory that explains the interaction between local coherence and the choice of referring expressions. Some of its weak points are also revealed. It then continues with an insight into rhetorical structure theory (RST), a theory that accounts for the structure of texts in terms of relations that hold between parts of the text. Some of the numerous criticisms of RST are also commented on. Then a reconciliation between the two theories is proposed, with the benefit of extending the applicability of CT over difficult areas. This effort materialises in the development of an architecture for a system aimed at text interpretation. Representational issues are discussed in some detail. The functioning of the model is illustrated through the presentation of a worked example. Finally, some considerations on an upper model (principle) for discourse interpretation are made. The treatment is rather introductory: it does not presuppose any previous knowledge of discourse theories, but some background in computational linguistics is beneficial.
  8. Francesco Ricci; Paolo Avesani,
    Nearest Neighbor Classification with a Local Asymmetrically Weighted Metric,
    This paper introduces a new local asymmetric weighting scheme for the nearest neighbor classification algorithm. It is shown, both with theoretical arguments and computer experiments, that good compression rates can be achieved while outperforming the accuracy of the standard nearest neighbor classification algorithm and obtaining almost the same accuracy as the k-NN algorithm with k optimised on each data set. The improvement in time performance is proportional to the compression rate and in general depends on the data set. The comparison of the classification accuracy of the proposed algorithm with that of a local symmetrically weighted metric and of a global metric strongly indicates that the proposed scheme is to be preferred.
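A minimal sketch of the idea, with invented prototypes and weights: each stored prototype carries its own feature-weight vector, so the distance is locally weighted (and asymmetric across prototypes) rather than governed by one global metric:

```python
import numpy as np

# Sketch only (details invented, not the paper's learned weights): distance
# from a query q to a prototype p uses p's own weight vector,
#   d(q, p) = sum_i w_p[i] * (q[i] - p[i])^2,
# unlike standard nearest neighbor, where one metric applies everywhere.

def weighted_nn(query, prototypes, weights, labels):
    d = ((query - prototypes) ** 2 * weights).sum(axis=1)
    return int(labels[int(np.argmin(d))])

prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
# prototype 0 downweights feature 1 (treating it as noise locally)
weights = np.array([[1.0, 0.1], [1.0, 1.0]])

q = np.array([0.2, 0.9])
print(weighted_nn(q, prototypes, weights, labels))              # -> 0
print(weighted_nn(q, prototypes, np.ones_like(weights), labels))  # -> 1
```

The two calls show how the local weights can flip a decision that an unweighted metric would make, which is where the accuracy gains of such schemes come from.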
  9. Roberto Gretter,
    Word Spotting - Work in Progress,
    This report describes the word spotting activity carried out at IRST. Performance is evaluated on different artificial tasks defined within the APASCI database. The base system makes use of the HMM framework developed at IRST in recent years. In particular, parameters such as the number of keywords, their length, and the use of different filler models (acoustic, lexical, syntactic) are investigated. Experiments show, as expected, that the length of the keywords seems to be the most crucial factor, i.e. short keywords are more difficult to spot than longer ones. Moreover, it is observed that the choice of filler models strongly influences performance.
  10. I. Gent; E. MacIntyre; P. Prosser; T. Walsh,
    The Constrainedness of Search,
    We introduce a parameter that measures the 'constrainedness' of an ensemble of combinatorial problems. If problems are over-constrained, they are likely to be insoluble. If problems are under-constrained, they are likely to be soluble. This constrainedness parameter generalizes a number of parameters previously used in different NP-complete problem classes. Phase transitions in different NP classes can thus be directly compared. The parameter can also be used as a heuristic to guide search. It captures the intuition of making the most constrained choice first, since it is often useful to branch into the least constrained sub-problem. Many widely disparate heuristics can be seen as minimizing constrainedness.
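As recalled here (treat this as a sketch of the definition, not a quotation of the paper), the constrainedness is kappa = 1 - log2(<Sol>)/N, where N is the base-2 logarithm of the state-space size and <Sol> the expected number of solutions over the ensemble. For random k-SAT the expectation has a closed form and kappa reduces to a function of the clause-to-variable ratio:

```python
import math

# kappa = 1 - log2(<Sol>) / N.  For random k-SAT with n variables and m
# clauses, <Sol> = 2^n * (1 - 2^-k)^m and N = n, so the 2^n term cancels:
#   kappa = -m * log2(1 - 2^-k) / n.
# kappa < 1 suggests an under-constrained (likely soluble) ensemble,
# kappa > 1 an over-constrained one.

def kappa_random_ksat(n_vars, n_clauses, k=3):
    return -n_clauses * math.log2(1 - 2.0 ** -k) / n_vars

# At the empirically observed random 3-SAT transition, m/n is about 4.3:
print(round(kappa_random_ksat(100, 430), 2))    # kappa around 0.83
```

The annealed estimate <Sol> overcounts solutions, which is why kappa at the observed 3-SAT transition sits somewhat below 1 rather than exactly at it.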