Old Seminars

Seminars in 2014

Shuffling Data into Cloud Oblivion

 

  • Speaker:  Roberto Tamassia (Brown University)
  • When:  April 24th, 2014 at 3:00 pm
  • Where: Meeting Room of the Department of Engineering, Section of Computer Science and Automation, 2nd floor – Via della Vasca Navale, 79
  • Abstract:  A shuffle is an algorithm for rearranging an array to achieve a random permutation of its elements. Early shuffle methods were motivated by the problem of shuffling a deck of cards. An oblivious shuffle is a distributed shuffle executed by a client who permutes an array of encrypted data items stored at a server in such a way that the server cannot determine the output permutation with probability better than a random guess. Several private cloud storage solutions that obfuscate the access pattern to the data use an oblivious shuffle as a fundamental building block. We present the Melbourne Shuffle, a simple and efficient oblivious shuffle that allows a client with O(\sqrt{n}) memory to obliviously shuffle an array of size n stored at a server by exchanging O(\sqrt{n}) messages of size O(\sqrt{n}). The Melbourne Shuffle is the first provably secure oblivious shuffle that is not based on sorting. This talk is based on the paper “The Melbourne Shuffle: Improving Oblivious Storage in the Cloud” by Olga Ohrimenko, Michael Goodrich, Roberto Tamassia and Eli Upfal, International Colloquium on Automata, Languages and Programming (ICALP), 2014 (to appear).
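For readers unfamiliar with the basic notion, the classic non-oblivious shuffle alluded to in the opening sentence is the Fisher–Yates algorithm; a minimal Python sketch follows (the Melbourne Shuffle itself is a client–server protocol with obliviousness guarantees and is not reproduced here):

```python
import random

def fisher_yates_shuffle(items):
    """Return a uniformly random permutation of items (Fisher-Yates)."""
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)   # uniform position among a[0..i]
        a[i], a[j] = a[j], a[i]    # swap current slot with the chosen one
    return a

# Every permutation of the input is equally likely.
print(fisher_yates_shuffle(list(range(10))))
```

Unlike this in-memory shuffle, the oblivious setting requires the client to access the server-side array in a data-independent pattern, which is what makes the problem hard.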

Semantic Techniques for Web Intelligence

 

  • Speaker:  Prof. Carlo Tasso, Università di Udine
  • When:  January 20th  2014
  • Where: Room N3
  • Abstract:  The seminar first introduces Web Intelligence as the union of two disciplines, Artificial Intelligence and Web technologies, and traces their evolution. It then briefly shows how traditional search engines suffer from some fundamental limitations that can only be overcome through the new approaches of Web Intelligence. The seminar then examines techniques for the semantic analysis of information, with a view to using them to improve tools for accessing Web information. Some of these techniques are illustrated with reference to the research carried out in recent years at the Artificial Intelligence Laboratory of the Università di Udine: in particular, a system for the automatic extraction of significant key phrases from a text, the RES prototype system for the personalized recommendation of scientific papers, and a system for semantic relatedness that exploits 2.0 knowledge bases will be presented.

Seminars in 2013

RoseAnn: Reconciling Opinions of Semantic Annotators

 

  • Speaker:  Stefano Ortona (Oxford University)
  • When:  December 16th, 2013
  • Where: Room N4
  • Abstract:  The number of services that identify semantic annotations in documents is growing steadily. While these services initially offered annotations for a few well-known concepts, the scenario has now become much richer and more diverse. Although these services usually have different vocabularies, it is possible to identify connections between them. The goal of RoseAnn is to integrate several of these services to offer a more accurate and complete result. RoseAnn provides a middleware that produces a semantic annotation service based on the union of different annotators, invoked as simple web services. In this way we achieve two important goals: offering an extremely broad annotation vocabulary by merging different vocabularies, and improving the accuracy of the individual services in terms of both precision and recall. However, this integration process is not straightforward and raises several issues: we must integrate different information sources that often disagree, and we cannot rely on any prior knowledge about these sources. RoseAnn addresses these problems with two techniques: one that requires no training data and is inspired by the idea of repairing a database that is initially inconsistent with respect to certain constraints, and a supervised technique based on a Maximum Entropy Markov Model adapted to the desired scenario. Both techniques exploit domain knowledge expressed through an ontology, and both achieve excellent results compared with the individual annotators as well as with existing techniques.

Querying and Exploring Big Brain Data

 

  • Speaker:  Anastasia Ailamaki (EPFL – Ecole Polytechnique Federale de Lausanne)
  • When:  October 24th  2013
  • Where: Room N4
  • Abstract:  Today’s scientific processes heavily depend on fast and accurate analysis of experimental data. Scientists are routinely overwhelmed by the effort needed to manage the volumes of data produced either by observing phenomena or by sophisticated simulations. As database systems have proven inefficient, inadequate, or insufficient to meet the needs of scientific applications, the scientific community typically uses special-purpose legacy software. With the exponential growth of dataset size and complexity, application-specific systems, however, no longer scale to efficiently analyse the relevant parts of their data, thereby slowing down the cycle of analysing, understanding, and preparing new experiments. I will illustrate the problem with a challenging application on brain simulation data and will show how the problems from neuroscience translate into challenges for the data management community. I will also use the example of neuroscience to show how novel data management and, in particular, spatial indexing and navigation have enabled today’s neuroscientists to simulate a meaningful percentage of the human brain. Finally I will describe the challenges of integrating simulation and medical neuroscience data to advance our understanding of the functionality of the brain.

Web Data Management

 

  • Speaker:  Pierre Senellart, Associate Professor in the DBWeb team at Télécom ParisTech
  • When:
    • 18 Sept 2013, Sala Riunioni Dia, 10:00 – 11:00 “Web Search”
    • 20 Sept 2013, Sala Riunioni Dia, 10:00 – 11:00 “MapReduce: Distributed Computing at Large Scale”
    • 23 Sept 2013, Sala Riunioni Dia, 10:00 – 11:00 “Semantic Web Technologies”
    • 26 Sept 2013, Sala Riunioni Dia, 10:00 – 11:00 “Probabilistic XML: A Data Model for the Web”
  • Where: Meeting room DIA
  • Abstract:  The Web has revolutionized access to information. This series of seminars will cover the theme of Web data management, i.e., how to acquire, properly model, store and analyze data from the World Wide Web. We will first give an overview of technologies used by Web search engines to crawl, index, and rank Web pages according to users’ queries. This can only be done by using scalable approaches to distributed storage and computing, which will be presented in a second lecture about the MapReduce framework. Semantic Web technologies propose a workable model for representing semantic information from the Web, either explicitly annotated as such, or automatically extracted from existing resources; this will be the focus of the third lecture. Finally, we will conclude with an introduction to probabilistic data management, especially discussing probabilistic XML data models, which are well-suited to the representation, querying, and updating of the inherent uncertainty contained in Web data.
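As a toy illustration of the MapReduce programming model covered in the second lecture, here is a hedged single-process sketch of its two phases on the customary word-count example (an illustration of the model only, not material from the lectures):

```python
from collections import defaultdict

def map_phase(documents):
    # map: emit a (word, 1) pair for every word in every document
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # shuffle/reduce: group pairs by key, then sum the counts per word
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(values) for key, values in groups.items()}

docs = ["the web of data", "the semantic web"]
print(reduce_phase(map_phase(docs)))
# {'the': 2, 'web': 2, 'of': 1, 'data': 1, 'semantic': 1}
```

In an actual MapReduce deployment the map and reduce calls run in parallel on many machines, with the framework handling the grouping ("shuffle") step between them.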

NADEEF: A Generalized Data Cleaning System

 

  • Speaker:  Paolo Papotti (Qatar Computing Research Institute)
  • When:  September 11th, 2013 – 10:00
  • Where: Meeting room DIA
  • Abstract:  Data cleaning is an important problem and data quality rules are the most promising way to face it with a declarative approach. Previous work has focused on specific formalisms, such as functional dependencies (FDs), conditional functional dependencies (CFDs), and matching dependencies (MDs), and those have always been studied in isolation. Moreover, such techniques are usually applied in a pipeline or interleaved. In this work we tackle the problem in a novel system, NADEEF. NADEEF is an extensible, generic and easy-to-deploy data cleaning system that distinguishes between a programming interface and a core to achieve generality and extensibility. The programming interface can be used to express many types of data quality rules beyond the well-known CFDs, MDs and ETL rules. The core algorithms can interleave multiple types of rules to detect and repair data errors. Such a holistic view of the conflicts is the starting point for a novel definition of repair context that allows us to compute repairs of better quality w.r.t. previous approaches in the literature.
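To make the notion of a data quality rule concrete, the sketch below detects violations of a single functional dependency (FD), the simplest rule type that NADEEF generalizes. The `fd_violations` helper and the sample rows are invented for this illustration and are not NADEEF's actual interface:

```python
from collections import defaultdict

def fd_violations(tuples, lhs, rhs):
    """Find violations of the functional dependency lhs -> rhs.

    tuples: list of dicts (rows); lhs, rhs: attribute names.
    Returns, for each lhs value with conflicting rhs values,
    the set of rhs values observed.
    """
    groups = defaultdict(set)
    for t in tuples:
        groups[t[lhs]].add(t[rhs])
    return {k: v for k, v in groups.items() if len(v) > 1}

rows = [
    {"zip": "00146", "city": "Rome"},
    {"zip": "00146", "city": "Milan"},   # conflicts with the row above
    {"zip": "20121", "city": "Milan"},
]
print(fd_violations(rows, "zip", "city"))  # flags zip 00146 only
```

Detection is the easy half; the abstract's point is that NADEEF also interleaves many such rule types when choosing repairs.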

Declarative, optimizable data-driven specifications of web & mobile applications

  • Speaker:  Prof. Yannis Papakonstantinou, Computer Science and Engineering, University of California at San Diego
  • When:  September 12th, 2013 – 10:30
  • Where: Room N3

Abstract:  Developers of web and mobile Ajax applications write too much low level “plumbing” code to efficiently access, integrate and coordinate application state that resides on the multiple tiers of the architecture, and is accessed using different languages: SQL at the database server; HTML and Javascript at the browser, which in HTML5 includes its own database state; Java or other programming languages at the application server. The FORWARD project replaces such low level programming by providing to the developer a programming abstraction where the web application is treated as a single state machine. FORWARD’s cornerstones are
(i) the unified application state virtual database, which enables manipulating the entire application state in an extension of SQL, named SQL++; and
(ii) the specification of Ajax pages as essentially rendered views over the unified application state.

Intra-Domain Pathlet Routing

  • Speaker:  Gabriele Lospoto – Ph.D. student within the Section of Computer Science and Automation of the Doctoral School in Engineering – Department of Engineering, Roma Tre University
  • When:   Thursday July 18th, 2013 – 12:00 pm
  • Where:  Room DIA 2.10

Abstract:  Internal routing inside an ISP network is the foundation for lots of services that generate revenue from the ISP’s customers. A fine-grained control of paths taken by network traffic once it enters the ISP’s network is therefore a crucial means to achieve a top-quality offer and, equally important, to enforce SLAs. Many widespread network technologies and approaches (most notably, MPLS) offer limited (e.g., with RSVP-TE), tricky (e.g., with OSPF metrics), or no control on internal routing paths. On the other hand, recent advances in the research community are a good starting point to address this shortcoming, but miss elements that would enable their applicability in an ISP’s network. We extend pathlet routing by introducing a new control plane for internal routing that pursues the following qualities: it is designed to operate in the internal network of an ISP; it enables fine-grained management of network paths with suitable configuration primitives; it is scalable because routing changes are only propagated to the network portion that is affected by the changes; it supports independent configuration of specific network portions without the need to know the configuration of the whole network; it is robust thanks to the adoption of multipath routing; it supports the enforcement of QoS levels; it is independent of the specific data plane used in the ISP’s network; it can be incrementally deployed and it can nicely coexist with other control planes. We validate our dissemination mechanisms and algorithms experimentally in the simulation framework OMNeT++, that we also use to assess the effectiveness and scalability of our approach.

 

Humans Fighting Uncertainty in Top-K Scenarios

  • Speaker: Davide Martinenghi, Politecnico di Milano
  • When: July 9th, 2013 – 12:00
  • Where: Meeting room of the Department, 1st floor
  • Abstract: Finding the best answers to a query is a problem of paramount importance in many scenarios, including big data analysis, Web queries, and several other data-centric contexts. A common hindrance to this task comes from an inherent amount of uncertainty that may reside both in the data at hand (e.g., due to unreliability of data sources) and in the query (e.g., in the relative importance of some attributes of the queried sources). Uncertain answers entail uncertain ranking, i.e., there is no consensus on how to rank the tuples in the query answer. One way to cope with this problem is to determine the most representative ranking out of the possible rankings compatible with an uncertain scenario. Orthogonally, one can also try to reduce the amount of uncertainty by asking questions to human users in order to disambiguate the mutual order of some answer tuples. After discussing top-K query answering in the presence of uncertainty, we shall illustrate suitable strategies for exploiting the availability of a crowd of humans by determining which questions to ask and which users to select.

 

Seminars in 2012

 

Cooperative systems

  • Speaker: Prof. Frank Lewis
  • When:
    • September 28th, 2012 – 11:30
    • September 28th, 2012 – 15:00
    • October 1st, 2012 – 10:00
  • Where: Meeting room of the Department, 1st floor
  • Abstract: Three lectures:
    • Distributed Adaptive Control for Synchronization of Unknown Nonlinear Cooperative Systems
    • Cooperative Control Synchronization: Optimal Design and Games on Communication Graphs
    • Stability vs. Optimality of Cooperative Multiagent Control

Efficient Verification of Web Search Results Through Authenticated Web Crawlers

  • Speaker: Olga Ohrimenko – Department of Computer Science at Brown University (Joint work with M. Goodrich, D. Nguyen,  C. Papamanthou, R. Tamassia, N. Triandopoulos )
  • When: September 11, 2012 – 11:00
  • Where: Meeting room of the Department, 1st floor
  • Abstract: In this talk we consider the problem of verifying the correctness and completeness of the result of a keyword search. We introduce the concept of an authenticated web crawler and present its design and prototype implementation. An authenticated web crawler is a trusted program that computes a specially-crafted signature over the web contents it visits. This signature enables (i) the verification of common Internet queries on web pages, such as conjunctive keyword searches – this guarantees that the output of a conjunctive keyword search is correct and complete; (ii) the verification of the content returned by such Internet queries – this guarantees that web data is authentic and has not been maliciously altered since the computation of the signature by the crawler. In our solution, the search engine returns a cryptographic proof of the query result. Both the proof size and the verification time are proportional only to the sizes of the query description and the query result, but do not depend on the number or sizes of the web pages over which the search is performed. We experimentally demonstrate that the prototype implementation of our system provides a low communication overhead between the search engine and the user, and fast verification of the returned results by the user.

Implementing a Partitioned 2-page Book Embedding Testing Algorithm

  • Speaker: Marco Di Bartolomeo (Joint work with P. Angelini and G. Di Battista )
  • When: September 13, 2012 – 12:00
  • Where: Meeting room of the Department, 1st floor
  • Abstract:  In a book embedding the vertices of a graph are placed on the “spine” of a “book” and the edges are assigned to “pages” so that edges on the same page do not cross.
    In the Partitioned 2-page Book Embedding problem the edges are partitioned into two sets E_1 and E_2, there are two pages, the edges of E_1 are assigned to page 1, and the edges of E_2 are assigned to page 2. The problem consists of checking whether an ordering of the vertices exists along the spine such that the edges of each page do not cross.
    Hong and Nagamochi give an interesting and complex linear-time algorithm for Partitioned 2-page Book Embedding based on SPQR-trees.
    We present an efficient implementation of this algorithm and demonstrate its effectiveness through a number of experimental tests.
    Because of the relationship between Partitioned 2-page Book Embedding and Clustered Planarity, we obtain as a side effect an implementation of a Clustered Planarity test for graphs with exactly two clusters.
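To make the setting concrete, the sketch below checks a fixed spine ordering rather than searching for one (the hard part, which the Hong–Nagamochi SPQR-tree algorithm solves): on a single page, edges must not interleave along the spine, which reduces to a stack test akin to matching parentheses. The function name and encoding are invented for this illustration:

```python
def page_is_planar(order, edges):
    """Check that the edges assigned to one page do not cross,
    given a fixed ordering of the vertices along the spine.

    order: list of vertices in spine order
    edges: iterable of (u, v) pairs assigned to this page
    Two edges cross iff their endpoints interleave along the spine.
    """
    pos = {v: i for i, v in enumerate(order)}
    spans = [tuple(sorted((pos[u], pos[v]))) for u, v in edges]
    starting, ending_count = {}, {}
    for l, r in spans:
        starting.setdefault(l, []).append((l, r))
        ending_count[r] = ending_count.get(r, 0) + 1
    stack = []
    for i in range(len(order)):
        popped = 0
        while stack and stack[-1][1] == i:   # close edges ending here
            stack.pop()
            popped += 1
        if popped != ending_count.get(i, 0):
            return False   # some edge ending at i is buried: interleaving
        # open edges starting at i; the edge reaching farthest goes deepest
        for span in sorted(starting.get(i, []), key=lambda s: -s[1]):
            stack.append(span)
    return True

spine = [0, 1, 2, 3]
print(page_is_planar(spine, {(0, 3), (1, 2)}))  # True  (nested edges)
print(page_is_planar(spine, {(0, 2), (1, 3)}))  # False (interleaving edges)
```

Under this encoding, a given ordering is a solution to a Partitioned 2-page instance exactly when `page_is_planar(order, E_1)` and `page_is_planar(order, E_2)` both hold; finding such an ordering is what the linear-time algorithm is for.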

 

Computational Complexity of Traffic Hijacking under BGP and S-BGP

  • Speaker: Marco Chiesa (Joint work with G. Di Battista, T. Erlebach, and M. Patrignani)
  • When: June 18, 2012 – 11:00 am
  • Where: Meeting room of the Department, 1st floor
  • Abstract:  Harmful Internet hijacking incidents put in evidence how fragile the Border Gateway Protocol (BGP) is, which is used to exchange routing information between Autonomous Systems (ASes). As proved by recent research contributions, even S-BGP, the secure variant of BGP that is being deployed, is not fully able to blunt traffic attraction attacks. Given a traffic flow between two ASes, we study how difficult it is for a malicious AS to devise a strategy for hijacking or intercepting that flow. We show that this problem marks a sharp difference between BGP and S-BGP. Namely, while it is solvable, under reasonable assumptions, in polynomial time for the type of attacks that are usually performed in BGP, it is NP-hard for S-BGP. Our study has several by-products. E.g., we solve a problem left open in the literature, stating when performing a hijacking in S-BGP is equivalent to performing an interception.

SOS – Save Our Systems: Uniform Access to Non-Relational Database Systems

  • Speaker: Luca Rossi (Joint work with P. Atzeni, F. Bugiotti. )
  • When: June 14, 2012 – 14:00
  • Where: Meeting room of the Department, 1st floor
  • Abstract:  Non-relational databases (often termed as NoSQL) have recently emerged and have generated both interest and criticism. Interest because they address requirements that are very important in large-scale applications, criticism because of the comparison with well known relational achievements. One of the major problems often mentioned is the heterogeneity of the languages and the interfaces they offer to developers and users. Different platforms and languages have been proposed, and applications developed for one system require significant effort to be migrated to another one. Here we propose a common programming interface to NoSQL systems (and also to relational ones) called SOS (Save Our Systems). Its goal is to support application development by hiding the specific details of the various systems. It is based on a metamodeling approach, in the sense that the specific interfaces of the individual systems are mapped to a common one. The tool provides interoperability as well, since a single application can interact with several systems at the same time.

Information Extraction for Social Media Analysis

  • Speaker: Denilson Barbosa (University of Alberta, Edmonton)
  • When: May 31, 2012 – 15:45
  • Where: Room N1
  • Abstract: More and more regular users use the blogosphere to express and discuss their opinions, the facts, events, and ideas pertaining to their own lives, their community, their profession, or society at large. It goes without saying that being able to extract reliable data from this medium opens the door to the most varied kinds of analysis using datasets of massive proportions. As a result, a great deal of attention has been devoted lately to applying information extraction to the blogosphere. In this tutorial, I focus on a specific sub-problem: extracting information networks which act as summaries of the blogosphere as a whole. These networks consist of nodes representing entities and edges representing the relationships between such entities. I will cover fundamental tools from NLP and network science that allow the unsupervised extraction of information networks from social media content.

Flexible Querying of Graph-Structured Data

  • Speaker: Peter Wood, Birkbeck, University of London
  • When: May 29, 2012 – 11:30
  • Where: Meeting room of the Department, 1st floor
  • Abstract: We consider the problem of a user querying graph-structured data
    without being fully aware of its structure.  Included with the data may
    be an ontology defined using a simple subset of RDFS.  Using regular
    expressions in queries provides a certain amount of flexibility when
    querying such structures, but additional flexibility is gained if the
    system also supports query relaxation and query approximation.  Query
    relaxation refers to using the ontology in order to generalise a user’s
    query.  Query approximation involves modifications to a user’s query so
    that it matches the data.  In both cases, we define a notion of
    distance which reflects how closely the executed query matches the
    user’s original query. Query answers are returned to the user ranked in
    terms of this distance metric.  We use as a motivating example data
    representing educational and career timelines of users who wish to find
    what career and educational opportunities might be available to them in
    the future.

Routing in the Equilibrium

  • Speaker: Timothy G. Griffin, professor at University of Cambridge
  • When: April 20 – 10:00 am
  • Where: Meeting room of the Department, 1st floor
  • Abstract: Some path problems cannot be modelled using semirings because the associated algebraic structure is not distributive. Rather than attempting to compute globally optimal paths with such structures, it may be sufficient in some cases to find locally optimal paths — paths that represent a stable local equilibrium. For example, this is the type of routing system that has evolved to connect Internet Service Providers (ISPs) where link weights implement bilateral commercial relationships between them. Previous work has shown that routing equilibria can be computed for some non-distributive algebras using algorithms in the Bellman-Ford family. However, no polynomial time bound was known for such algorithms. In this talk, we show that routing equilibria can be computed using Dijkstra’s algorithm for one class of non-distributive structures. This provides the first polynomial time algorithm for computing locally optimal solutions to path problems.
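As background for the algebraic view taken in the talk, here is a hedged sketch of Dijkstra's algorithm written against a pluggable path algebra; with the usual (min, +) weights shown below it is textbook Dijkstra, while the talk's point is that the same greedy schedule also computes equilibria for certain non-distributive structures, which this toy example does not model:

```python
import heapq

def dijkstra_generic(graph, source, extend, better, identity):
    """Dijkstra's algorithm parameterized by a path algebra.

    graph:    {u: [(v, w), ...]} adjacency lists with arc weights
    extend:   how an arc weight extends a path weight (the "multiply")
    better:   strict preference between two path weights (the "order")
    identity: weight of the empty path at the source
    """
    dist = {source: identity}
    heap = [(identity, source)]
    settled = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:           # skip stale heap entries
            continue
        settled.add(u)
        for v, w in graph.get(u, []):
            cand = extend(d, w)    # weight of the path through u
            if v not in dist or better(cand, dist[v]):
                dist[v] = cand
                heapq.heappush(heap, (cand, v))
    return dist

g = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
print(dijkstra_generic(g, 'a', lambda d, w: d + w, lambda x, y: x < y, 0))
# {'a': 0, 'b': 1, 'c': 3}
```

For (min, +) the result is the globally optimal path weights; for the non-distributive algebras discussed in the talk, the same schedule is shown to reach a locally optimal (equilibrium) solution instead.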

Monitoring the Status of MPLS VPN and VPLS Based on BGP Signaling Information

  • Speaker: Massimo Rimondini, Roma TRE University
  • When: March 28th, 2012 – 18:00
  • Where: Meeting room of the Department, 1st floor
  • Abstract: The flexibility and ease of setup of MPLS Virtual Private Networks (VPNs) and Virtual Private LAN Service (VPLS) motivate the large and growing user base of these services. It is therefore important for an Internet Service Provider (ISP) to ensure their uninterrupted operation, as also specified in service contracts. Although network monitoring is regarded as an essential activity to pursue this goal, existing monitoring approaches are often limited in their ability to capture the effects of VPN-related events such as reconfigurations and device failures. We provide several contributions: 1) a methodology to monitor the status of MPLS VPN and VPLS over time, which considers the BGP signaling messages sent by routers to propagate VPN information; the methodology is founded on an analysis of the observable effects of network events, and it also envisions presenting the status of MPLS VPN and VPLS in an easy-to-understand visual form that allows one to immediately spot potential anomalies; 2) an extensive discussion of the tradeoff between the scalability of our monitoring approach and the visibility of the effects of network events; 3) an architecture and prototype implementation of a tool based on our methodology; 4) a thorough experimentation of our approach in a realistic network scenario. As an example, the methodology allowed us to spot a subtle routing anomaly triggered by an implementation choice in the routing software used in our experiments.

Metaheuristics in optimization

  • Speaker: Arne Lokketangen, Molde University College, Norway
  • When: January 28th, 2012
  • Where:
  • Abstract: The course will give an overview of modern heuristic optimization methods that are suitable for solving practical optimization problems within logistics. The course will contain the following elements:
    • Combinatorial Optimization
    • Local Search Heuristics
    • Local search based Metaheuristics
    • Population based Metaheuristics
    • Choice of methods for solving complex problems in a short time

Semantic Tools for Web Intelligence

  • Speaker: Carlo Tasso, Università di Udine
  • When: January 26th, 14:00-15:30
  • Where: Room N10
  • Abstract:  The seminar aims to frame the new field of Web Intelligence and, more specifically, to illustrate some of the innovative tools developed to realize the so-called Semantic Web. It focuses in particular on tools that exploit Artificial Intelligence techniques to address problems of information access, filtering, classification, and extraction. After a brief introduction on the evolution of the Web, a definition of Web Intelligence is given. The seminar then discusses Web search engines, highlighting their shortcomings and illustrating the new search engines that are emerging in large numbers as alternatives to conventional ones. The central part of the seminar concerns the various Artificial Intelligence techniques that have been proposed and/or used to access the information available online on the Web in an innovative way, more effective and efficient than the traditional one, focusing in particular on the illustration of some specific tools. The seminar closes with some considerations on the impact of such tools on industry, in particular on a series of business processes that were traditionally carried out with manual techniques, typically unsupported by advanced technological tools, and that are undergoing a profound change, with completely new techniques, methodologies, tools, and scope. This refers in particular to all processes related to marketing and communication, listening to customers, competitor analysis, and product development. If possible, some systems will also be demonstrated live.

 

Big Data, Small Brains: Deep Content Analytics for Better Recommendations

  • Speaker: Prof. Giovanni Semeraro, Università di Bari
  • When: January 9th, 14:00
  • Where: Room N10
  • Abstract:

    The seminar discusses the role of techniques for deep content analytics in the “Big Data” era.

    The focus is on content-based recommender systems (CBRS), “Small Brains” which filter very large repositories of items (books, news, music tracks, TV assets, web pages…) by analyzing available information, usually textual descriptions of items previously rated by a user, and build a model of user interests, called user profile, based on the features of the items rated by that user. The user profile is then exploited to recommend new potentially relevant items.

    The main limits of CBRS will be addressed, together with current research directions that try to overcome them, including:

    –       semantic analysis, which enables a deeper “human-like” understanding of the items;

    –       knowledge infusion, which leverages open knowledge sources (Wikipedia, dictionaries,…) to diversify recommendations.

    The talk closes with a live demonstration of OTTHO (On the Tip of my THOught), an artificial player of the TV game ‘la Ghigliottina’ designed as a serendipity engine relying on a CB_word_RS.

Seminars in 2011

 

The Japan Earthquake: the impact on traffic and routing observed by a local ISP

  • Speaker: Cristel Pelsser, researcher at Internet Initiative Japan
  • When: October 14 – 2:30 pm
  • Where: Meeting room of the Department (1st floor)
  • Abstract: The Great East Japan Earthquake and Tsunami on March 11, 2011, disrupted a significant part of communications infrastructure both within the country and in connectivity to the rest of the world. Nonetheless, many users, especially in the Tokyo area, reported that voice networks did not work while the Internet did. At a macro level, the Internet was impressively resilient to the disaster, aside from the areas directly hit by the quake and the ensuing tsunami. However, little is known about how the Internet was running during this period. We investigate the impact of the disaster on one major Japanese Internet Service Provider (ISP) by looking at measurements of traffic volumes and routing data from within the ISP, as well as routing data from an external neighbor ISP. Although we can clearly see circuit failures and subsequent repairs within the ISP, surprisingly little disruption was observed from outside.

Planarity Issues for Graph Drawing

  • Speaker: Prof. Michael Kaufmann – Universität Tübingen
  • When: 05/10/2011 11:00-13:00
  • Where: room N7 – ground floor
  • Abstract:

 

Polygonal Representations of Planar Graphs

  • Speaker: Prof. Michael Kaufmann – Universität Tübingen
  • When: 06/10/2011 11:00-13:00
  • Where: DIA meeting room
  • Abstract:

 

Problems and Opportunities in Context Based Personalization

  • Speaker: Letizia Tanca (Politecnico di Milano)
  • When: 28/09/2011 – 15:00
  • Where: DIA meeting room
  • Abstract: In a world of global networking, the increasing amount of heterogeneous information, available through a variety of channels, has made it difficult for users to find the right information at the right time and at the right level of detail. Knowledge of the context in which the data are used can support the process of focussing on currently useful information, keeping noise at bay. In this talk I give an account of the research on context-aware information systems going on within the PEDiGREE group at Politecnico di Milano, starting from a foundational framework for the life-cycle of context-aware systems, in which the system design and management activities consider context as an orthogonal, first-class citizen. The initial intuition dates back to the Context-ADDICT project, which proposed a powerful context modeling tool – known as the Context Dimension Tree (CDT) – for context-aware data tailoring, to represent the admissible contexts and design the accompanying context-dependent views. The design-time and run-time activities involved in the life-cycle of context-aware systems provide material for stimulating research, summarized in this talk.

 

Data completeness and the currency of data

  • Speaker: Floris Geerts, Senior Research Fellow at the University of
    Edinburgh
  • When: Wednesday, Sept. 21 at 3.00pm
  • Where: DIA meeting room (first floor)
  • Abstract: Data in real-life databases become obsolete rapidly. One often finds
    that multiple values of the same entity reside in a database. While
    all of these values were once correct, most of them may have become
    stale and inaccurate. Moreover, these values often do not carry
    reliable timestamps.
    With this comes the need to study data currency: identifying the
    current value of an entity in a database and answering queries with
    the current values, in the absence of timestamps. I’ll present a
    possible approach to model data currency.
    Furthermore, one may wonder whether it is possible to complete the
    database by copying in more information so that the query result
    becomes more recent and complete, that is, so that copying further
    information into the database no longer affects the query result.
    In this talk, I overview some recent results on completing databases,
    both in the currency setting and in the standard relational setting.
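The flavor of the approach can be sketched in a few lines: absent timestamps, currency can be captured by partial orders on the values of an entity, derived from domain rules. This is a hypothetical encoding; the formal model in the talk is richer.

```python
# Toy sketch of data currency (hypothetical encoding): several values
# for one entity, no timestamps, and a partial order "precedes" where
# (v1, v2) means v2 is at least as current as v1.

def current_values(values, precedes):
    """Return the values that no other value is known to supersede."""
    superseded = {older for (older, newer) in precedes}
    return [v for v in values if v not in superseded]

# Entity "Mary": salary values found in different tuples.
salaries = [50000, 55000, 60000]
# Domain rule: salaries only increase, so smaller values precede larger ones.
order = {(a, b) for a in salaries for b in salaries if a < b}

print(current_values(salaries, order))  # [60000]
```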

Repairing with quality-improving dependencies: existing repair methods, related problems, and hints towards solutions

  • Speaker: Floris Geerts, Senior Research Fellow at the University of
    Edinburgh
  • When: Monday, Sept. 19 at 3.00pm
  • Where: DIA meeting room (first floor)
  • Abstract: In this talk I will survey various approaches for enforcing data quality
    dependencies. In particular, I will describe how data can be
    automatically repaired using functional, conditional and matching
    dependencies, and how the presence of reliable reference (master) data
    helps to improve the quality of the repairs. Furthermore, I will
    briefly relate this to consistent query answering.

A principled approach to data quality: a general survey

  • Speaker: Floris Geerts, Senior Research Fellow at the University of
    Edinburgh
  • When: Thursday, September 15
  • Where: DIA meeting room (first floor)
  • Abstract:

 

Tractable algorithms for querying data in the presence of ontologies: ER strikes back

  • Speaker: Andrea Calì (Birkbeck College, University of London & Oxford University, UK)
  • When: June 23, 2011 – 11:30 am
  • Where: Room N7
  • Abstract: Recent advances in data integration and the Semantic Web have raised
    the problem of answering queries in the presence of conceptual schemas
    and incomplete data. In this setting, the conceptual schema plays the
    role of a domain ontology, that is, a representation of the fragment
    of the (real-world) domain that the data are meant to model. When data
    are incomplete, we cannot assume that they satisfy the constraints
    imposed by the conceptual schema. Answering queries in the presence of
    ontologies (or conceptual schemas) therefore amounts to an inference
    process whose result, for a given query, consists of the facts that
    are logically entailed by the data together with the ontology.

    In this seminar we study the problem of answering conjunctive queries
    in the presence of conceptual schemas expressed in an expressive
    variant of the ER model, analyzing tractable cases and providing
    query evaluation algorithms. The results obtained on the ER model
    carry over to Semantic Web languages, in particular to fragments of
    OWL such as DL-Lite.
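The inference process described above can be illustrated with a minimal sketch: when the ontology rules are plain inclusions (no existential variables), the certain answers are obtained by forward chaining to a fixpoint. This is an illustrative instance, not the ER-based algorithms of the talk.

```python
# Minimal query answering under an ontology: the answers are the facts
# logically entailed by the data together with the ontology rules.
# Rules here are simple inclusions "P(x) -> Q(x)", so naive forward
# chaining terminates.

def saturate(facts, rules):
    """Apply inclusion rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            for pred, arg in list(facts):
                if pred == body and (head, arg) not in facts:
                    facts.add((head, arg))
                    changed = True
    return facts

data = {("PhDStudent", "ann"), ("Student", "bob")}
ontology = [("PhDStudent", "Student")]  # every PhD student is a student

entailed = saturate(data, ontology)
# Query Student(x): both ann and bob are certain answers.
print(sorted(arg for pred, arg in entailed if pred == "Student"))  # ['ann', 'bob']
```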

Privacy-Preserving Group Data Access via Stateless Oblivious RAM Simulation

  • Speaker: Roberto Tamassia
  • When: June 27 – 3:00 pm
  • Where: Meeting room of the Department 1st floor
  • Abstract:We study the problem of providing privacy-preserving access to an outsourced honest-but-curious data repository for a group of trusted users. We show that such privacy-preserving data access is possible using a combination of probabilistic encryption, which directly hides data values, and stateless oblivious RAM simulation, which hides the pattern of data accesses. We give simulations that have only an $O(\log n)$ amortized time overhead for simulating a RAM algorithm, $\cal A$, that has a memory of size $n$, using a scheme that is data-oblivious with very high probability assuming the simulation has access to a private workspace of size $O(n^\nu)$, for any given fixed constant $\nu>0$. This simulation makes use of pseudorandom hash functions and is based on a novel hierarchy of cuckoo hash tables that all share a common stash. We also provide results from an experimental simulation of this scheme, showing its practicality.

 

Combinatorial topology and probabilistic approaches to motion planning

  • Speaker: J.P. Laumond, LAAS-CNRS, Toulouse
  • When: June, 30 – 9:30
  • Where: Room N3
  • Abstract: Robot motion planning has been an active research area in Robotics for more than 30 years. In the 80s, deterministic approaches based on computational geometry or real algebraic geometry solved the theoretical problem in a generic way; however, they failed to solve practical problems efficiently. By relaxing the completeness requirement, probabilistic approaches were introduced in the 90s, and they perform very well. The goal of this talk is to introduce combinatorial structures (the so-called roadmaps) that capture the topology of configuration spaces. Two algorithmic schemes to compute them will be presented and analyzed. Effective real-time demonstrations of some algorithms will then be shown on benchmarks including free-flying objects, robot manipulators and nonholonomic systems.
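One popular probabilistic scheme, the probabilistic roadmap (PRM), can be sketched in a few dozen lines: sample random collision-free configurations, connect nearby pairs by collision-checked straight segments, and answer queries by searching the resulting graph. The 2D setup, obstacle and parameters below are illustrative assumptions, not taken from the talk.

```python
import math
import random

# Sketch of a probabilistic roadmap (PRM) in a 2D configuration space
# with a single disc obstacle (illustrative setup and parameters).

OBSTACLE, RADIUS = (0.5, 0.5), 0.2

def is_free(p):
    """A configuration is free if it lies outside the obstacle."""
    return math.dist(p, OBSTACLE) > RADIUS

def segment_free(p, q, steps=20):
    """Collision-check the straight segment from p to q by sampling."""
    return all(
        is_free((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
        for t in (i / steps for i in range(steps + 1))
    )

def build_roadmap(n=200, connect_radius=0.25, seed=1):
    rng = random.Random(seed)
    nodes = [(0.1, 0.1), (0.9, 0.9)]          # start and goal
    while len(nodes) < n:                     # sample free configurations
        p = (rng.random(), rng.random())
        if is_free(p):
            nodes.append(p)
    edges = {i: [] for i in range(n)}
    for i in range(n):                        # connect nearby free pairs
        for j in range(i + 1, n):
            if (math.dist(nodes[i], nodes[j]) < connect_radius
                    and segment_free(nodes[i], nodes[j])):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges

def connected(edges, s=0, t=1):
    """Graph search on the roadmap answers the motion planning query."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in edges[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

nodes, edges = build_roadmap()
print(connected(edges))
```

With this density of samples the start and goal are connected with overwhelming probability; the PRM trades the completeness guarantee of the deterministic methods for exactly this kind of practical efficiency.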

 

 

 


Mathematical Foundations of Robot Motion

  • Speaker: J.P. Laumond, LAAS-CNRS, Toulouse
  • When: June, 8 – 10:00
  • Where: Room N13A
  • Abstract: The talk will give an overview of 30 years of research in robot motion planning, ranging from the introduction of the configuration-space approaches in the early 80s to the decidability of the piano movers’ problem, via both theoretical and pragmatic methods. A number of successes will be illustrated in various domains such as mobile robotics, Product Lifecycle Management and humanoid robotics.

 

Web-based Information Network Analysis for Computer Science

  • Speaker: Tim Weninger (University of Illinois at Urbana-Champaign, USA)
  • When: June 10, 2011 – 11:00 am
  • Where: meeting room, first floor
  • Abstract:

 

 

Seamless network-wide migrations

  • Speaker: Laurent Vanbever, PhD student at Université Catholique de Louvain
  • When:  May 13 – 2 pm
  • Where: Meeting room of the Department 1st floor
  • Abstract:
    Despite its usefulness, migrating (i.e., reconfiguring) a running network is often a significant source of concern for Internet service providers. Indeed, network operators cannot simply “restart” the network; they must modify it “in place”, one device at a time, while ensuring that traffic is not disrupted. Doing so potentially creates conflicting interactions between migrated and non-migrated devices that can lead to long, service-affecting outages.
    In this presentation, we will show that, most of the time, seamless migrations can be achieved by following a strict operational ordering. Although computing or even deciding if a safe ordering exists is hard, we will describe several techniques and tools that can efficiently solve the problem for both internal (e.g. OSPF, IS-IS) and external (BGP) routing protocols. We will also describe the design of a provisioning system that automatically performs the migration by pushing the configurations on the devices in the appropriate order while monitoring the entire migration process.

 

Local Transit Policies and the Complexity of BGP Stability Testing

  • Speaker: Marco Chiesa
  • When: March 31 – 12:00
  • Where: Meeting room of the Department 1st floor
  • Abstract: The fact that BGP, the core protocol of the Internet backbone, can oscillate has spurred a huge research effort over the last decade. Despite those efforts, many problems remain open. For example, determining how hard it is to check that a BGP network is safe, i.e., that it is guaranteed to converge, has been an elusive research goal.

    In this paper, we address several problems related to the stability of BGP, determining the computational complexity of testing if a given configuration is safe, is robust, or is safe under filtering. Further, we determine the computational complexity of checking popular sufficient conditions for stability.

    We adopt a model that captures local transit policies, i.e. policies that are a function only of the ingress and the egress points. Such a configuration paradigm is quite popular among network operators of Autonomous Systems. We also address the same problems in the SPP model, widely adopted in the literature.

    Unfortunately, we find that the most interesting problems are computationally hard unless the expressiveness of the policies is restricted so much that they become unsuitable for most practical purposes. Our findings suggest that the computational intractability of BGP stability is an intrinsic property of policy-based path-vector routing protocols that allow policies to be specified in complete autonomy.
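For intuition, the kind of instability at stake can be observed with a toy simulation in the spirit of the SPP model: each node ranks its permitted paths to the destination, and nodes synchronously re-select their best path consistent with their neighbors' current choices. This is a drastic simplification of the models analyzed in the talk.

```python
# Toy SPP-style sketch: each node ranks its permitted paths to
# destination 0; at every synchronous step each node selects its
# best-ranked path consistent with its neighbors' current choices.
# Since the global state space is finite, a repeated state signals
# an oscillation.

def simulate(permitted):
    state = {u: None for u in permitted}       # currently chosen paths
    seen = set()
    while True:
        frozen = tuple(state[u] for u in sorted(state))
        if frozen in seen:
            return "oscillates"
        seen.add(frozen)
        new = {}
        for u, paths in permitted.items():
            best = None
            for p in paths:                    # in preference order
                tail = p[1:]
                if tail == (0,) or state.get(p[1]) == tail:
                    best = p
                    break
            new[u] = best
        if new == state:
            return "stable"
        state = new

# DISAGREE-like instance: each node prefers the path through the other.
oscillating = {1: [(1, 2, 0), (1, 0)], 2: [(2, 1, 0), (2, 0)]}
converging = {1: [(1, 0)], 2: [(2, 1, 0), (2, 0)]}

print(simulate(oscillating))  # oscillates
print(simulate(converging))   # stable
```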

Trusted Keyword Searches in Outsourced Document Collections

  • Speaker: Roberto Tamassia, Brown University
  • When: March 30 – 2:30 pm
  • Where: Meeting room of the Department 1st floor
  • Abstract: We study the problem of authenticating outsourced set operations performed by an untrusted server over a dynamic collection of sets that are owned by a trusted source. We present efficient methods for authenticating fundamental set operations, such as union and intersection, so that the client can verify the correctness of the received answer. Based on a novel extension of the security properties of bilinear-map accumulators, our authentication scheme is the first to achieve optimality in several critical performance measures:

    1. The verification overhead at the client is optimal, that is, the client can verify an answer in time proportional to the size of the query parameters and answer.
    2. The update overhead at the source is constant.
    3. The bandwidth consumption is optimal, namely constant between the source and the server and operation-sensitive between the client and the server (i.e., proportional only to the size of the query parameters and the answer).
    4. The storage usage is optimal, namely constant at the client and linear at the source and the server.

    Updates and queries are also efficient at the server. In contrast, existing schemes entail high bandwidth and verification costs or high storage usage since they recompute the query over authentic data or precompute answers to all possible queries.

    We show applications of our techniques to the authentication of keyword searches on outsourced document collections (e.g., inverted-index queries) and of queries in outsourced databases (e.g., equi-join queries). Since set intersection is heavily used in these applications, we obtain new authentication schemes that compare favorably to existing approaches.

 

Labeling: Refinements on boundary labeling

  • Speaker: Prof. Michael Kaufmann – Universität Tübingen
  • When: 28/03/2011 11:00-13:00
  • Where:
  • Abstract:

 

Hierarchical graph drawing: Basics and advanced techniques

  • Speaker: Prof. Michael Kaufmann – Universität Tübingen
  • When: 24/03/2011 11:00-13:00
  • Where:
  • Abstract:

 

 F# Language

  • Speaker: Giuseppe Maggiore
  • When: January 18
  • Where: N1
  • Abstract: This seminar gives an introduction to the F# language. We
    start from the essential keywords (let, fun), then move on to
    higher-order functions (the List module) and the pipeline operator
    (|>). We then look at some notions about the F# type system (tuples,
    discriminated unions, records) and at how it interacts with the .NET
    type system (classes, interfaces, inheritance). Finally, time
    permitting, we will explore an advanced meta-programming topic chosen
    between monads and quotations.

 

On the Area Requirements of Euclidean Minimum Spanning Trees

  • Speaker: Fabrizio Frati
  • When: January 4 – 16:30
  • Where: Meeting room of the Department 1st floor
  • Abstract: In their seminal paper on Euclidean minimum spanning trees
    [Discrete & Computational Geometry, 1992], Monma and Suri proved that any tree of maximum degree 5 admits a planar embedding as a Euclidean minimum spanning tree. The algorithm they presented constructs embeddings with exponential area; however, the authors conjectured that (c^n) x (c^n) area is sometimes required to embed an n-vertex tree of maximum degree 5 as a Euclidean minimum spanning tree, for some constant c>1. In this paper, we prove the first exponential lower bound on the area requirements for embedding trees as Euclidean minimum spanning trees.
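For context, computing an EMST itself is straightforward; the hard direction addressed above is the inverse problem of realizing a given tree as an EMST. A minimal Prim-style construction over a small point set:

```python
import math

# A minimal Prim-style construction of the Euclidean minimum spanning
# tree of a small point set (quadratic time; illustrative only).

def emst(points):
    n = len(points)
    in_tree = [False] * n
    dist = [math.inf] * n      # distance to the growing tree
    parent = [-1] * n
    dist[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: dist[i])
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u))
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < dist[v]:
                    dist[v], parent[v] = d, u
    return edges

pts = [(0, 0), (1, 0), (2, 0), (1, 1)]
print(emst(pts))  # [(0, 1), (1, 2), (1, 3)]
```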

Seminars in 2009/2010

 

  • Minimizing total weighted earliness-tardiness on a single machine around a small common due date: an FPTAS using quadratic knapsack

    • Speaker: Hans Kellerer
    • When: Monday, December 20, 2010, 2:30 pm
    • Where: DIA meeting room, 1st floor
    • Abstract: We design a fully polynomial-time approximation scheme (FPTAS) for a single machine scheduling problem to minimize the total weighted earliness and tardiness with respect to a common restrictive due date. Note that no constant-ratio approximation algorithm was previously known for this problem. Our approach is based on adapting an FPTAS for a special version of the knapsack problem to minimize a convex quadratic non-separable function. For the continuous relaxation of such a knapsack problem we give an algorithm of quadratic time complexity. The running time of each presented FPTAS is strongly polynomial.
      For further information contact G. Nicosia (nicosia@dia.uniroma3.it, tel. 06 57333455).
    • Material: –
    • Web page: –
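The objective being approximated can be stated concretely: jobs run consecutively on one machine, and each job pays its weight times its distance from the common due date. The instance below is a toy example, and symmetric per-job weights are an assumption of this sketch.

```python
# Toy version of the objective: jobs run consecutively on one machine
# and each job j pays w_j per unit of earliness or tardiness relative
# to a common due date d (symmetric weights assumed for simplicity).

def total_weighted_et(jobs, order, d):
    """jobs: {name: (processing_time, weight)}; order: job sequence."""
    t, cost = 0, 0
    for j in order:
        p, w = jobs[j]
        t += p                      # completion time of job j
        cost += w * abs(t - d)      # earliness or tardiness penalty
    return cost

jobs = {"a": (2, 1), "b": (3, 2), "c": (1, 5)}
due = 4                             # common (restrictive) due date

print(total_weighted_et(jobs, "abc", due))  # 14
print(total_weighted_et(jobs, "acb", due))  # 11: heavy job "c" near d
```

Comparing the two sequences shows why scheduling the heavy job close to the due date pays off, which is the combinatorial choice the FPTAS has to approximate.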

 

Computing with Uncertainty

  • Speaker: Thomas Erlebach, University of Leicester
  • When: November 25 – 14:30
  • Where: Meeting room of the Department 1st floor
  • Abstract: We consider problems where the input data is initially uncertain
    but the exact value of an input item can be obtained at a certain
    cost. For example, a typical setting is that instead of an exact
    value, only an interval containing the exact value is given. An
    update of an input item then reveals its exact value. An algorithm
    performs a number of updates until it has gathered sufficient
    information to output a correct solution to the problem. The goal
    is to minimise the number of updates.

    We discuss several problems in the setting of computing with
    uncertainty, including the minimum spanning tree problem and
    the minimum multicut problem in trees. Using competitive
    analysis, we compare the number of updates that an algorithm
    makes on an instance of the problem with the best possible
    number of updates that suffices to solve that instance.
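A concrete instance of this setting: identify the minimum among values known only as intervals while performing as few updates as possible. The simple strategy sketched below, repeatedly updating the item with the smallest lower bound, is illustrative and not the talk's competitively analyzed algorithms.

```python
# Each item is known only as an interval (lo, hi); an update reveals
# its exact value. To output the minimum value with certainty, keep
# updating the item with the smallest lower bound until a revealed
# value is <= every other lower bound. Illustrative strategy and data.

def find_minimum(intervals, exact):
    """intervals: {i: (lo, hi)}; exact: hidden true values.
    Returns (minimum value, number of updates performed)."""
    lo = {i: l for i, (l, h) in intervals.items()}
    hi = {i: h for i, (l, h) in intervals.items()}
    updates = 0
    while True:
        cand = min(lo, key=lo.get)             # smallest lower bound
        others = min((lo[i] for i in lo if i != cand), default=float("inf"))
        if lo[cand] == hi[cand] and lo[cand] <= others:
            return lo[cand], updates           # certainly the minimum
        lo[cand] = hi[cand] = exact[cand]      # update reveals the value
        updates += 1

intervals = {"a": (1, 4), "b": (2, 6), "c": (5, 9)}
exact = {"a": 3, "b": 2.5, "c": 7}

print(find_minimum(intervals, exact))  # (2.5, 2): item "c" never updated
```

Note that item "c" is never updated: its interval alone proves it cannot be the minimum, which is exactly the kind of saving competitive analysis measures.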

 

On the Queue Number of Planar Graphs

  • Speaker: Fabrizio Frati
  • When: October 21st 17:30
  • Where: Meeting room of the Department 1st floor
  • Abstract: We prove that planar graphs have $O(\log^4 n)$ queue number, thus improving upon the previous $O(\sqrt n)$ upper bound. Consequently, planar graphs admit 3D straight-line crossing-free grid drawings in $O(n \log^c n)$ volume, for some constant $c$, thus improving upon the previous $O(n^{3/2})$ upper bound.
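The underlying definition is easy to state and check: in a queue layout, the vertices get a linear order and each queue is a set of edges no two of which nest. A small validity check for a single queue (illustrative encoding):

```python
# In a queue layout, vertices get a linear order and each queue is a
# set of edges no two of which nest: edge {a, d} nests {b, c} when
# a < b <= c < d in the order. Crossing edges are allowed in a queue.

def is_valid_queue(order, edges):
    pos = {v: i for i, v in enumerate(order)}
    spans = [tuple(sorted((pos[u], pos[v]))) for u, v in edges]
    for a, d in spans:
        for b, c in spans:
            if a < b and c < d:        # (b, c) strictly inside (a, d)
                return False
    return True

order = [1, 2, 3, 4]
print(is_valid_queue(order, [(1, 2), (2, 3), (1, 3), (2, 4)]))  # True
print(is_valid_queue(order, [(1, 4), (2, 3)]))                  # False
```

The queue number of a graph is the minimum number of such queues needed over all vertex orders; the result above bounds it by a polylogarithmic function for planar graphs.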

 

    • Assigning AS Relationships to Satisfy the Gao-Rexford Conditions

      • Speaker: Massimo Rimondini (DIA)
      • When: Thursday, September 16, 2010, 12:00 pm
      • Where: DIA meeting room, 1st floor
      • Abstract: Compliance with the Gao-Rexford conditions is perhaps the most realistic explanation of Internet routing stability, although BGP is renowned to be prone to oscillations. Informally, the Gao-Rexford conditions assume that (i) the business relationships between Internet Service Providers (ISPs) yield a hierarchy, (ii) each ISP behaves in a rational way, i.e., it does not offer transit to other ISPs for free, and (iii) each ISP ranks routes through customers better than routes through providers and peers. We show an efficient algorithm that, given a BGP configuration, checks whether there exists an assignment of peer-peer and customer-provider relationships that complies with the Gao-Rexford conditions. Also, we show that preferring routes through peers to those through providers, although more suitable than the original formulation of Condition (iii) to describe the business relationships between ISPs, makes the problem NP-hard. The above results hold both in the well known theoretical framework used to model BGP and in a more realistic setting where (i) local preferences are assigned on a per-neighbor basis and (ii) transit is allowed from/to specific neighbor pairs. Observe that the latter setting, where policy complexity only depends on the number of neighbors, is very close to the way in which operators typically configure routers.
        This is a joint work with Luca Cittadini (Roma Tre University), Giuseppe Di Battista (Roma Tre University), Thomas Erlebach (University of Leicester), and Maurizio Patrignani (Roma Tre University)
        The same presentation will be given at the 18th IEEE International Conference on Network Protocols, Kyoto, Japan, Oct. 5 – 8, 2010
      • Material: –
      • Web page: –

 

    • Top-k Approximate Subtree Matching

      • Speaker: Denilson Barbosa (University of Alberta, Edmonton)
      • When: Monday, July 5, 2010, 3:00 pm
      • Where: Room N7, DIA
      • Abstract: We consider the top-k Approximate Subtree Matching (TASM) problem: finding the k best matches of a small query tree, e.g., a DBLP article with 15 nodes, in a large document tree, e.g., DBLP with 26M nodes, using the canonical tree edit distance as a similarity measure between subtrees. Evaluating the tree edit distance for large XML trees is difficult: the best known algorithms have cubic runtime and quadratic space complexity, and, thus, do not scale. Our solution is TASM-postorder, a memory-efficient and scalable TASM algorithm. We prove an upper-bound for the maximum subtree size for which the tree edit distance needs to be evaluated. The upper bound depends on the query and is independent of the document size and structure. A core problem is to efficiently prune subtrees that are above this size threshold. We develop an algorithm based on the prefix ring buffer that allows us to prune all subtrees above the threshold in a single postorder scan of the document. The size of the prefix ring buffer is linear in the threshold. As a result, the space complexity of TASM-postorder depends only on k and the query size, and the runtime of TASM-postorder is linear in the size of the document. Our experimental evaluation on large synthetic and real XML documents confirms our analytic results.
      • Material: –
      • Web page: –
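The pruning bound can be illustrated directly: one postorder pass computes every subtree's size, and any subtree exceeding the threshold need never be compared against the query. The tuple encoding below is hypothetical, and the actual algorithm streams the document through a prefix ring buffer rather than recursing.

```python
# One postorder pass computes every subtree's size; subtrees larger
# than the threshold can be pruned without ever computing the tree
# edit distance. Node encoding (hypothetical): (label, [children]).

def candidate_subtrees(tree, threshold):
    """Return (label, size) for subtree roots with size <= threshold."""
    candidates = []

    def size(node):
        label, children = node
        s = 1 + sum(size(c) for c in children)   # children first: postorder
        if s <= threshold:
            candidates.append((label, s))
        return s

    size(tree)
    return candidates

doc = ("article",
       [("title", []),
        ("authors", [("author", []), ("author", [])]),
        ("year", [])])

print(candidate_subtrees(doc, 3))
```

The 6-node root is skipped while all subtrees of size at most 3 survive, so the expensive edit-distance computation is restricted to query-sized candidates.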

 

    • A Framework for Automatic Schema Mapping Verification Through Reasoning

      • Speaker: Denilson Barbosa (University of Alberta, Edmonton)
      • When: Thursday, July 1, 2010, 3:00 pm
      • Where: Room N7, DIA
      • Abstract: We advocate an automated approach for verifying mappings between source and target databases in which semantics are taken into account, and that avoids two serious limitations of current verification approaches: reliance on availability of sample source and target instances, and reliance on strong statistical assumptions. We discuss how our approach can be integrated into the workflow of state-of-the-art mapping design systems, and all its necessary inputs. Our approach relies on checking the entailment of verification statements derived directly from the schema mappings and from semantic annotations to the variables used in such mappings. We discuss how such verification statements can be produced and how such annotations can be extracted from different kinds of alignments of schemas into domain ontologies. Such alignments can be derived semi-automatically; thus, our framework might prove useful in also greatly reducing the amount of input from domain experts in the development of mappings.
      • Material: –
      • Web page: –

 

    • An Environment for Building, Exploring and Querying Social Networks

      • Speaker: Denilson Barbosa (University of Alberta, Edmonton)
      • When: Wednesday, June 30, 2010, 3:00 pm
      • Where: Room N3, DIA
      • Abstract: Social network analysis aims at uncovering and understanding the structures and patterns resulting from social interactions among individuals and organizations engaged in a common activity. Since the early days of the field, networks have been modeled as graphs representing social actors and the relations between them. The field has become very active with the maturity of computational machinery to handle large-scale graphs and, more recently, the automated gathering of social data. This talk will describe ReaSoN: an ongoing work towards building a system for extracting, visualizing and exploring social networks. The talk will focus on the underlying infrastructure behind ReaSoN, the extraction of social networks from structured citation databases as well as unstructured social media, some data management issues that arise in building such systems, notions of network visibility and the processing of user-defined queries. As an illustration, the talk covers the first incarnation of ReaSoN: a social network resulting from academic research built from ACM DL, DBLP and Google Scholar data. In doing so, ReaSoN contributes to the understanding as well as fostering of the social networks underlying research.
      • Material: –
      • Web page: –

 

    • Query Optimization in the Deep Web

      • Speaker: Andrea Calì (Brunel University), Davide Martinenghi (Politecnico di Milano)
      • When: Thursday, June 10, 2010, 11:30 am
      • Where: Room N7, DIA
      • Abstract: The term Deep Web refers to the data content that is created dynamically as the result of a specific search on the Web. In this respect, such content resides outside web pages, and is only accessible through interaction with the web site – typically via HTML forms. It is believed that the size of the Deep Web is several orders of magnitude larger than that of the so-called Surface Web, i.e., the web that is accessible and indexable by search engines. Usually, data sources accessible through web forms are modeled by relations that require certain fields to be selected – i.e., some fields in the form need to be filled in. These requirements are commonly referred to as access limitations in that access to data can only take place according to given patterns. Besides data accessible through web forms, access limitations may also occur i) in legacy systems where data scattered over several files are wrapped as relational tables, and ii) in the context of Web services, where similar restrictions arise from the distinction between input parameters and output parameters. In such contexts, computing the answer to a user query cannot be done as in a traditional database; instead, a query plan is needed that provides the best answer possible while complying with the access limitations. In these talks, we illustrate the semantics of answers to queries over data sources under access limitations and present techniques for query answering in this context. We show different techniques to optimize query answering both at the time of the query plan generation and at the time of the execution of the query plan. We analyze the influence of integrity constraints on the sources, of the kind that is usually found in database schemata, on query answering. We present prototype systems that are aimed at querying the deep web, and show their achievements.
      • Material: pdf (~4.93 MB)
      • Web page: –
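The notion of access limitations can be made concrete with a toy plan: each source only answers when its input fields are bound, so a query plan must feed values extracted from one source into the next, iterating to a fixpoint. The two-source instance below is hypothetical.

```python
# Each source can only be accessed by binding its input attributes, so
# answering a query may require feeding values extracted from one
# source into another, until a fixpoint. Hypothetical two-source instance.

# author(name_in, paper_out): bind an author name, get paper ids.
AUTHOR = {"ann": ["p1", "p2"], "bob": ["p3"]}
# cites(paper_in, cited_out): bind a paper id, get the papers it cites.
CITES = {"p1": ["p3"], "p2": [], "p3": ["p4"]}

def reachable_papers(seed_authors):
    """All paper ids obtainable starting from the given author names."""
    papers, frontier = set(), []
    for a in seed_authors:
        frontier.extend(AUTHOR.get(a, []))
    while frontier:
        p = frontier.pop()
        if p not in papers:
            papers.add(p)
            frontier.extend(CITES.get(p, []))
    return papers

print(sorted(reachable_papers(["ann"])))  # ['p1', 'p2', 'p3', 'p4']
```

Note that "p4" is only obtainable by chaining two accesses, which is why the best possible answer under access limitations is computed by a recursive plan rather than a single lookup.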

 

 

 

Seminars in the 2008/2009 Academic Year

 

    • Leveraging Data and Structure in Ontology Integration

      • Speaker: Prof. Renee Miller (University of Toronto)
      • When: Monday, November 9, 2009, 2:30 pm
      • Where: Room N1, ground floor, DIA
      • Abstract: There is a great deal of research on schema and ontology integration which makes use of rich logical constraints to reason about the structural and logical alignment of schema and ontologies. There is also considerable work on matching data instances from heterogeneous schema or ontologies. However, little work exploits the fact that ontologies include both data and structure. We provide a first step in closing this gap with a new algorithm (Iliads) that integrates both data matching and logical reasoning to achieve better matching of ontologies. We evaluate our algorithm on a set of pairs of OWL Lite ontologies with the schema and data matchings found by human reviewers. We compare against two systems – the ontology matching tool FCA-merge and the schema matching tool COMA++. This is preliminary work in this area and the talk will highlight further opportunities for integrating data matching (including entity resolution) with schema matching and mapping. This is joint work with Octavian Udrea and Lise Getoor of the University of Maryland which appeared in SIGMOD 2007.
      • Material: –
      • Web page: –

 

 

    • Learning-from-Observation: from assembly plan through dancing humanoid

      • Speaker: Prof. Katsushi Ikeuchi (University of Tokyo)
      • When: Thursday, October 8, 2009, 3:00 pm
      • Where: DIA meeting room
      • Abstract: We have been developing the paradigm referred to as programming-by-demonstration. The method involves simple observation of what a human is doing and generation of robot programs to mimic the same operations. The first half of this talk presents the history of what we have done so far under this paradigm. Here, we emphasize the top-down approach of utilizing pre-defined, mathematically derived task-and-skill models for observing and mimicking human operations. We will show several examples of task-and-skill models applicable in different domains. The second half then focuses on our newest effort to make a humanoid robot dance Japanese folk dances using the same paradigm. Human dance motions are recorded using optical or magnetic motion-capture systems. These captured motions are segmented into tasks using motion analysis, music information, and task-and-skill models. We can characterize personal differences of dance using task-and-skill models. Then, we can map these motion models onto robot motions by considering dynamic and structural differences between human and robot bodies. As a demonstration of our system, I will show a video in which a humanoid robot performs a Japanese folk dance.
      • Material: –
      • Web page: –

 

    • Extending Convex Drawings of Graphs

      • Speaker: Prof. Seok-Hee Hong (University of Sydney)
      • When: Thursday, October 1, 2009, 4:00 pm
      • Where: DIA meeting room
      • Abstract: Graph Drawing has attracted much attention over the last twenty years due to its wide range of applications, such as VLSI design, social networks, software engineering and bioinformatics. A straight-line drawing is called a “convex drawing” if every facial cycle is drawn as a convex polygon. Convex representation of graphs is a well-established aesthetic in Graph Drawing; however, not all planar graphs admit a convex drawing, as observed earlier by Steinitz, Tutte and Thomassen. In this talk, we introduce two new notions of drawings, “inner-convex drawings” and “star-shaped drawings”, as natural extensions of convex drawings. We present various results including characterisation, testing, embedding and drawing algorithms. Our results extend the classical results by Tutte, and include results by Thomassen and Steinitz as a special case.
      • Material: –
      • Web page: –

 

    • Information Discovery on Vertical Domains

      • Speaker: Vagelis Hristidis (Florida International University)
      • When: Tuesday, March 14, 2009, 11:30 am
      • Where: DIA meeting room
      • Abstract: As the amount of available data increases, the problem of information discovery, often referred to as the needle-in-the-haystack problem, becomes more pressing. The most successful search applications today are general-purpose Web search engines and well-structured database querying (e.g., SQL). Directly applying these two search models to specific domains is ineffective since they ignore the domain semantics, e.g., the meaning of object associations: a biologist wants to see different results from a physician for the same query on PubMed. We present challenges and techniques to achieve effective information discovery on vertical domains by modeling the domain semantics and its users, and by exploiting the knowledge of domain experts. Our focal domains are products marketplaces, biological data, clinical data, and bibliographic data. This project is being funded by NSF.
      • Material: –
      • Web page: –

 

    • Ecology of Robots

      • Chair: Federica Pascucci (DIA). Speakers: Alessandro Saffiotti (Örebro University, Sweden), Maurizio Di Rocco (DIA), Attilio Priolo (DIA), Andrea Gasparri (DIA), Paolo Stegagno (La Sapienza), Fabrizio Flacco (La Sapienza), Marilena Vendittelli (La Sapienza)
      • When: May 6, 2009, 9:30 am
      • Where: Room N3, DIA
      • Web page: –

 

 

  • Advances in Security and Privacy

  • Speaker: Roberto Tamassia (Brown University, USA)
  • When: March 24 and 26, and June 22 and 24, 2009
  • Where: DIA meeting room
  • Abstract:
  • Material:
  • Web page:
  • Biomechanics & allied disciplines: a casual bird’s-eye view

    • Speakers:
      • Roberto Contro (LaBS-Laboratory of Biological Structural Mechanics, Politecnico di Torino)
      • Marcelo Epstein (Faculty of Mechanical and Manufacturing Engineering & Adjunct Professor, Faculty of Kinesiology and Humanities, University of Calgary)
      • Antonio Di Carlo (LaMS-Modelling & Simulation Lab, Università Roma Tre)
    • When: Thursday, February 19, 10:00 am
    • Where: Seminar Room (3rd floor), Dipartimento di Strutture – via Corrado Segre 6
    • Abstract: –
    • Material: –
    • Web page: –