All tutorials will take place on July 6th, 2009 in three parallel tracks.
There has been much research and commercial work in recent years on applications built with complex event processing systems. However, although dozens of event processing use cases have been described, those applications target very different domains and have been described only superficially or at incomparable levels of detail. We believe that the lack of a clearly described set of event processing use cases reduces clarity in the area and hampers research and product acceptance. To counter this trend, the EPTS started a Use Cases WorkGroup (UCWG) in November 2007 with the goal of collecting, classifying, understanding, detailing, and disclosing event processing use cases. This tutorial will present the templates completed so far, and will provide very detailed descriptions of four event processing use cases, as well as common trends and variations seen in other use cases. Special emphasis is placed on real-life applications and on challenging issues that push the state of the art.
Pedro G. Bizarro is currently an Assistant Professor at the University of Coimbra (2006-present), where he leads BiCEP, a five-student research project addressing event processing benchmarking, performance, scalability, and Internet integration. Pedro has worked on, and remains interested in, adaptive query processing, data stream systems, and virtualization. A Fulbright Fellow (2001-2006), Pedro is also an Associate Visiting Teaching Professor at Carnegie Mellon University, a consultant for Evolve Space Solutions, and a Marie Curie EC Fellow.
Dieter Gawlick is an architect at Oracle. He architected the first messaging system fully integrated into a database and was a key contributor to Oracle's integration and sensor technologies. Dieter's current focus is leveraging and evolving database technologies to accelerate the evolution of event processing. Additionally, Dieter works on database support for long-running transactions. Dieter was a key contributor to the development of high-end database and transaction systems.
Thomas Paulus is a PhD student at the University of Regensburg and works as an IT consultant for the Center for Information Technology Transfer (CITT) on various BPM projects, currently for the BPM software development company jCOM1 AG and for the global investor services provider CACEIS. In addition, over the past years he has been deeply involved in various projects for a wide variety of customers, especially in retail trade. Thomas has contributed to the development of the new field of Event-Driven Business Process Management (ED-BPM) at several conferences (e.g., DEBS 08 in Rome, NESSI ServiceWave 08 in Madrid, and SSOKU09 in Brussels). Thomas is currently supporting the preparation of a major project for the European Commission in the field of ED-BPM and is developing ED-BPM-oriented use cases, especially for fraud management in retail trade. Furthermore, Thomas worked onsite at the IBM Haifa Research Labs on a publication about existing and future standards for ED-BPM-based applications.
Harvey Reed joined MITRE in 2004 and began supporting GCSS-AF (Global Combat Support System-Air Force), a large shared security and infrastructure effort. He was Chief Engineer for GCSS-AF from 2005-2008. He led the delivery of a number of firsts in the Air Force: the first Enterprise Service Bus (ESB) (2005), the first general Metadata Environment (MDE) (2007), and the first security federation with the DoD Portal (DKO) (2008-9). Currently he is leading an effort to analyze DoD and IC efforts for enterprise security patterns, and working in the DHS space on very large Event Processing Networks (EPNs). Prior to joining MITRE, Harvey was a Product Manager at Sonic Software for business process products, as well as a voting member of the OASIS WS-BPEL TC (Business Process Execution Language). Harvey also worked for a venture capital firm evaluating software infrastructure startups. Before that, Harvey worked in a variety of product, architectural, and enterprise consulting roles. Harvey Reed has a B.S. from Purdue in Pure Math and Computer Science, and an M.S. from Georgia Tech in Computer and Information Science. He is also the Marketing Director for the MetroWest Chess Club, the largest chess club in New England.
Matthew Cooper is VP of Engineering at Event Zero, a Brisbane-based ISV. Matthew runs the team responsible for all aspects of the development of Event Zero's unique Event Processing Network product suite, and is also involved in pre- and post-sales support and training. He has a varied commercial background in IT, ranging from product and application delivery to systems support.
Peer-to-peer overlay networks are an important new platform for designing large-scale event-based applications, one with a sizable and rapidly growing body of research. This tutorial surveys the key results and main trends in overlay networks, introduces terminology, and explains the organization of the research literature. We also describe existing research on using overlay networks for event-based processing, and discuss open issues and research directions.
The focus of this tutorial is to present the key elements of using peer-to-peer overlay networks to enable global-scale event processing systems. There are many important topics, including: design, performance, security, and management of large-scale overlays; integration of sensor networks and information-fusion mechanisms with overlay networks; and load balancing, semantic routing, and service discovery to support advanced applications.
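As a small illustration of the key-based routing that structured peer-to-peer overlays (such as Chord-style DHTs) use to assign keys and event topics to nodes, here is a minimal, hypothetical Python sketch of a consistent-hashing ring. The node names, hash space size, and key format are invented for illustration; real overlays add successor lists, finger tables, and churn handling.

```python
import hashlib
from bisect import bisect_right

# Hash a string onto a small identifier ring (illustrative hash space).
def ring_hash(value, space=2**16):
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % space

class Ring:
    """Toy consistent-hashing ring: each key is owned by its successor node."""

    def __init__(self, nodes):
        # Sort nodes by their position on the identifier ring.
        self.ring = sorted((ring_hash(n), n) for n in nodes)

    def lookup(self, key):
        """Route a key to the first node at or after its hash, wrapping around."""
        h = ring_hash(key)
        ids = [node_id for node_id, _ in self.ring]
        i = bisect_right(ids, h) % len(self.ring)
        return self.ring[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
owner = ring.lookup("event:topic/42")  # deterministic for a fixed node set
```

The same lookup always returns the same owner for a given key and node set, which is what lets overlay nodes route events without global coordination.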
John Buford is a Research Scientist with Avaya Labs Research. Previously he was a Lead Scientist at the Panasonic Princeton Laboratory, VP of Software Development at Kada Systems, Director of Internet Technologies at Verizon, and Chief Architect-OSS at GTE Laboratories. Earlier he was a tenured Associate Professor of Computer Science at the University of Massachusetts Lowell, where he also directed the Distributed Multimedia Systems Laboratory. He has authored or coauthored 120 refereed publications and 2 books, and is co-editing a forthcoming handbook. He is an IEEE Senior Member and co-chair of the IRTF Scalable Adaptive Multicast Research Group. He holds a PhD from Graz University of Technology, Austria, and MS and BS degrees from MIT.
Last-minute cancellation (due to unavailability of the presenter caused by visa issues).
Antonio A. F. Loureiro - Dept. of Computer Science, Federal University of Minas Gerais, Brazil (presenter)
The fast growth in wireless sensors and actuators has the potential to create a global computing infrastructure that profoundly changes the way people live and work. People may interact with each other, the physical world, and information services using a wide range of sensor devices connected together, enabling computing and communication at an unprecedented scale and density. This new wireless sensor infrastructure presents a number of challenges, especially for data-intensive applications: enormous scale, different types of data, varying and intermittent connectivity, location dependence and context awareness, limited bandwidth and power capacity, small device size, and multimedia delivery across different networks.
Wireless sensor networks are now evolving from passive observation and reporting systems to active and reactive systems that dynamically evolve in response to complex and rapid spatio-temporal events. Upon the occurrence of events of interest, different network activities and functions start executing, transforming those simple events into meaningful, sophisticated events for an application. This processing chain includes localization, synchronization, information fusion, self-organization, power management, routing, filtering and correlation, query processing, privacy and security, data mining and knowledge discovery, and more. Furthermore, this processing chain should be based on event propagation models to accommodate the requirements of sensor applications.
Compared to event processing in distributed systems already available on the Internet, event processing designed for wireless sensor networks poses new challenges due to limitations in sensor storage, processing, and communication capacities. Adding to the aforementioned issues is the curse of dimensionality: in practice, due to their sophistication, sensor events are usually identified by more than one attribute. Managing multi-dimensional data is already a difficult problem in information systems; doing so under the resource constraints of sensor networks is even harder.
This tutorial aims at presenting a broad view of event processing in wireless sensor networks in the light of different contexts and backgrounds. The goal is to discuss the different network activities and functions that are related to event processing in wireless sensor networks.
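To make the filtering-and-correlation step of the processing chain concrete, here is a minimal, hypothetical Python sketch that turns simple sensor readings into a composite event. The threshold, window length, event name, and sensor IDs are all invented for illustration; real sensor networks would perform this in-network, under tight memory and energy budgets.

```python
from collections import deque

WINDOW_SECONDS = 10   # sliding-window length (illustrative)
THRESHOLD = 70.0      # temperature threshold (illustrative)

class OverheatDetector:
    """Correlate simple readings into a composite 'overheat' event when
    three or more distinct sensors exceed the threshold in one window."""

    def __init__(self):
        self.window = deque()  # (timestamp, sensor_id, temperature)

    def on_reading(self, timestamp, sensor_id, temperature):
        """Consume one simple event; return a composite event or None."""
        # Expire readings that fell out of the sliding time window.
        while self.window and timestamp - self.window[0][0] > WINDOW_SECONDS:
            self.window.popleft()
        # Filter: only above-threshold readings are kept for correlation.
        if temperature > THRESHOLD:
            self.window.append((timestamp, sensor_id, temperature))
        hot_sensors = {sid for _, sid, _ in self.window}
        if len(hot_sensors) >= 3:
            return {"event": "overheat",
                    "sensors": sorted(hot_sensors),
                    "at": timestamp}
        return None

detector = OverheatDetector()
readings = [(1, "s1", 75.0), (2, "s2", 80.0), (3, "s1", 76.0), (4, "s3", 72.0)]
composites = [e for e in (detector.on_reading(*r) for r in readings) if e]
# composites now holds one "overheat" event raised at the fourth reading
```

The design choice worth noting is that filtering happens before correlation, so the window only ever stores events that could contribute to a composite, keeping state small on constrained devices.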
Antonio Loureiro is a Professor of Computer Science at the Federal University of Minas Gerais (UFMG), Brazil. He holds a PhD in Computer Science from the University of British Columbia, Canada, 1995. His main research areas are wireless sensor networks, computer networks, distributed systems, and distributed algorithms. In the last 10 years he has published over 80 papers in international conferences and journals. Most of those papers were presented by Professor Loureiro who has also been the instructor of six tutorials in Brazilian conferences in the last five years. Since 1996, when he became a faculty member at UFMG, Professor Loureiro has received seven times the Undergraduate Teaching Excellence Award in Computer Science from the students at the Department of Computer Science. He was the TPC Chair for LANOMS 2001 (Latin American Network Operations and Management Symposium, sponsored by IEEE Communications Society) and for the 2005 ACM Workshop on Wireless Multimedia Networking and Performance Modeling.
Event processing in support of data-driven computational science research and education takes multiple forms. In our experience over the past 6 years developing and supporting the National Science Foundation funded Linked Environments for Atmospheric Discovery (LEAD), we have reasoned about the benefits and use of events and event processing systems in multiple contexts. In this tutorial we share the motivations, architectural considerations, research outcomes, and tools used in event processing in the LEAD science-supporting cyberinfrastructure. We provide hands-on experience with the production LEAD portal. We additionally discuss a number of individual tools that have emerged from the project and that we support for use in other applications.
Beth Plale is Director of the Center for Data and Search Informatics, within the newly endowed Pervasive Technologies Institute at Indiana University. Her research interests are in data provenance, automated metadata collection and data curation, workflow systems in e-Science, and complex event processing.
Jeff Cox, Chathura Herath, Scott Jensen and Yiming Sun are PhD students in the Center for Data and Search Informatics, whose research closely aligns with the material presented in this tutorial.
In the last decade there has been much activity around languages that support event-driven applications. A variety of languages have been developed within academic projects; while this continues to be an active area of research, in the last few years commercial products have emerged that also feature a variety of languages and language styles. This tutorial follows the work done in the EPTS Event Processing Language Analysis Working Group and is intended to provide insight into the various programming styles and languages; it will also discuss some possibilities for language standardization.
Jon Riecke is a Lead Platform Architect at Aleri Inc. for the Aleri Streaming Platform. He received his PhD from MIT in 1991 in theoretical computer science, and held a postdoctoral fellowship at the University of Pennsylvania. Prior to joining Aleri in 2001, he was a Member of Technical Staff at AT&T/Lucent Bell Laboratories. Dr. Riecke has authored or co-authored more than 30 peer-reviewed papers in the areas of semantics of programs, functional and object-oriented programming languages, computer security, and networking, and has presented work at international conferences and served on numerous program committees. At Aleri, he has been the primary designer and implementer of the stream primitives and the SPLASH embedded language, and serves as co-chair (with Opher Etzion) of the EPTS Language Analysis Workgroup.
Opher Etzion is an IBM Senior Technical Staff Member and Event Processing Scientific Leader at the IBM Haifa Research Lab. Previously he was lead architect of event processing technology in IBM WebSphere and a Senior Manager in the IBM Research division, where he managed a department that performed one of the pioneering projects that shaped the area of complex event processing. He is also the chair of the EPTS (Event Processing Technical Society) and has been blogging about event processing since August 2007. In parallel he is an adjunct professor at the Technion - Israel Institute of Technology. He has authored or co-authored more than 70 papers in refereed journals and conferences on topics related to active databases, temporal databases, rule-based systems, complex event processing, and autonomic computing, and co-authored the book "Temporal Databases: Research and Practice" (Springer-Verlag, 1998). Prior to joining IBM in 1997, he was a faculty member and Founding Head of the Information Systems Engineering department at the Technion, and held professional and managerial positions in industry and in the Israeli Air Force.
François Bry is a full professor at the Institute for Informatics at Ludwig-Maximilians-Universität München, heading the research group for programming and modeling languages. He is currently investigating methods and applications related to query answering and reasoning on the Web. In particular he focuses on query and rule languages for complex events, Web data formats such as XML and RDF, and social media. François Bry has a research record of over 130 peer-reviewed scientific publications and regularly contributes to scientific conferences and journals, especially in the areas of the Web and Semantic Web, as a reviewer or program committee chair. Before joining the University of Munich in 1994, he worked at the industry research center ECRC in Munich.
Michael Eckert is a researcher at the Institute for Informatics at Ludwig-Maximilians-Universität München in the programming and modeling languages group. His research interests are complex event processing and reactive languages for the Web. So far, he has focused mainly on query languages for complex events, covering the spectrum from language design through formal semantics to efficient incremental query evaluation. He has (co-)authored more than 15 peer-reviewed papers. Before obtaining his PhD from the University of Munich in 2008, Michael studied computer science at the University of Munich (1999-2005) and at the University of Washington (2002-2003).
Adrian Paschke is research director at the Centre for Information Technology Transfer (CITT) GmbH, director of RuleML Inc., vice director of the Semantics Technologies Institute Berlin (STI Berlin), organizer of the Berlin Semantic Web Meetup Group, and professor at the Freie Universitaet Berlin (FUB), holding a chair on Corporate Semantic Web. He is steering-committee chair of the RuleML Web Rule Standardization Initiative (RuleML), co-chair of the Reaction RuleML technical group, founding member of the Event Processing Technical Society (EPTS), co-chair of the EPTS Reference Architecture working group (EPTS RA), voting member of OMG, and an active member of several W3C groups, such as the W3C Health Care and Life Sciences group (W3C HCLS) and the W3C Rule Interchange Format working group (W3C RIF), where he is an editor of the W3C RIF standard and hosts the W3C HCLS KB in Berlin. Adrian is or has been involved in several national and international projects, such as the EU Network of Excellence REWERSE and the EU STREP Sealife, and is currently leading the InnoProfile project Corporate Semantic Web (BMBF, 2008-2012, EUR 2.3 million).
Capital markets control the international flow of money and securities. These systems operate around the clock, processing huge volumes of transactions and information. Most of this volume can be modeled as events, and as a result capital markets are the focus of much innovation in event processing. Financial services also receive a lot of attention in the mainstream media, especially lately.
Despite these facts, the use of event processing in the capital markets is largely invisible to the academic community. Very little has been published by financial organizations, intent on preserving their competitive advantage. Information is generally passed by word of mouth, with systems changing faster than they can be documented. Recently though, researchers in event processing along with creators of commercial event processing platforms have begun to standardize the way in which systems are built. This knowledge transfer not only improves the software being used in capital markets; it is an opportunity for event processing researchers to learn from the practical experience of industry engineers. The focus of this tutorial will be on areas where practices are standardized, rather than on the details that confer competitive advantage. We will start by reviewing capital markets' structure, understanding the participants and products. Then we will discuss information flows, both transactional and non-transactional, along with data formats and rates.
We will review messaging protocols and fault tolerance, particularly the FIX protocol and various market data transmission protocols. Decades of experience with multi-party distributed systems have built up a lot of interesting tools and systems, and we will identify opportunities for learning. Time permitting, we will also discuss the history and future of some of these protocols.
The bulk of the tutorial will be spent discussing applications for event processing within capital markets. These include market data analysis, order flow management, pricing, and monitoring. Particular attention will be paid to aspects of these systems that challenge the capabilities of current systems, or where approaches unfamiliar to the academic community are common. We will discuss not only Complex Event Processing/Event Processing Platform (CEP/EPP) implementations, but also other implementation techniques including distributed transactions, distributed memory fabrics, and publish/subscribe systems.
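As a toy illustration of the publish/subscribe style mentioned above (not any specific market data protocol or product), here is a minimal topic-based dispatcher in Python. The topic names and quote fields are invented; production market data systems use specialized transports, binary encodings, and fault-tolerant delivery.

```python
from collections import defaultdict

class PubSub:
    """Toy topic-based publish/subscribe bus: publishers and subscribers
    are decoupled and only share topic names."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback to receive every event on this topic.
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Deliver the event to all subscribers of the topic, if any.
        for callback in self.subscribers[topic]:
            callback(event)

bus = PubSub()
ticks = []
bus.subscribe("quotes/IBM", ticks.append)            # interested in one symbol
bus.publish("quotes/IBM", {"symbol": "IBM", "bid": 100.1, "ask": 100.2})
bus.publish("quotes/MSFT", {"symbol": "MSFT", "bid": 20.0, "ask": 20.1})
# ticks now contains only the IBM quote; the MSFT quote was filtered by topic
```

The point of the sketch is the decoupling: downstream analytics subscribe by topic and never need to know which upstream feed handler produced a quote.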
Researchers in event processing will learn about a new field of applications and challenges. Practitioners not familiar with capital markets will learn about implementation techniques from that field which may be applicable to their work. Attendees experienced in capital markets may also learn additional techniques or a new perspective.
Richard Tibbetts is co-founder and Chief Technology Officer at StreamBase Systems. Tibbetts is directly involved in the design and implementation of the StreamBase Event Processing Platform, including the StreamSQL programming language and deployment environment. He is also responsible for the architecture of StreamBase Frameworks for Capital Markets. Prior to StreamBase, Tibbetts contributed to the Aurora project at Brown University and the Medusa project at MIT, proposing language extensions to support real world applications of event processing. He also created the Linear Road Benchmark for measuring performance of event processing platforms. Tibbetts was a teaching assistant in Software Engineering at MIT and has given a variety of lectures on software engineering and compilers as part of the MIT IAP program. At StreamBase, he is responsible for briefing customers and analysts on Event Processing, and has also guest lectured on the subject at Harvard University. Tibbetts earned both his Bachelor of Science in computer science and engineering and his Master of Engineering degrees at MIT.