HICSS TUTORIAL DAY

WIRELESS SENSOR NETWORKS
NETWORK SECURITY
ADVANCES IN TECHNOLOGY-SUPPORTED LEARNING
PERSISTENT CONVERSATION
SUCCESSFUL SCIENCE FROM THE LOGICAL POSITIVIST PERSPECTIVE
ENTERPRISE METADATA
COLLABORATION ENGINEERING
IT RESPONSES TO CHANGING CORPORATE GOVERNANCE
LITERACY IN THE INTERNET AGE
MODELING BUSINESS AND REQUIREMENTS USING UML
MULTI-DISCIPLINARY SYSTEM DESIGN KNOWLEDGE
AN INTELLIGENT UBIQUITOUS INFORMATION SYSTEM
Ivana Vujovic, Erich Neuhold, and Veljko Milutinovic
The Semantic Web is a concept that enables better machine processing of information on the Web by structuring Web documents in such a way that they become understandable by machines. This can be used to create more complex applications (intelligent browsers, more advanced Web agents), global databases built from Web data, reuse of information, and more. This tutorial explains all of the above, using both basic theory and appropriate examples.
Semantic modeling languages like the Resource Description Framework (RDF) and topic maps employ XML syntax to achieve this objective. New tools exploit cross-domain vocabularies to automatically extract and relate meta-information in a new context. Web ontology languages such as DAML+OIL extend RDF with richer modeling primitives and provide a technological basis for the Semantic Web. These concepts are explained through examples and case studies.
Finally, logic languages for the Semantic Web, which build on top of RDF and ontology languages, are described. Together with digital signatures, they enable a web of trust that assigns levels of trust to its resources and access rights, and that enables the generation of proofs for actions and resources on the Web.
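The triple model underlying RDF can be sketched in a few lines of code. The fragment below is illustrative only and not part of the tutorial materials: it parses a simplified RDF/XML snippet (the example.org vocabulary and resource names are invented) into subject-predicate-object triples using just the Python standard library; real applications would use a dedicated RDF library.

```python
# Sketch: extracting (subject, predicate, object) triples from a
# simplified RDF/XML fragment. The example.org URIs are invented
# for illustration; real RDF processing would use a full RDF parser.
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

doc = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                  xmlns:ex="http://example.org/terms/">
  <rdf:Description rdf:about="http://example.org/tutorial">
    <ex:title>Semantic Web Tutorial</ex:title>
    <ex:presenter>Ivana Vujovic</ex:presenter>
  </rdf:Description>
</rdf:RDF>"""

def triples(rdf_xml):
    """Yield (subject, predicate, object) triples from simple RDF/XML."""
    root = ET.fromstring(rdf_xml)
    for desc in root.findall(f"{{{RDF_NS}}}Description"):
        # ElementTree qualifies namespaced attributes as '{namespace}name'
        subject = desc.get(f"{{{RDF_NS}}}about")
        for prop in desc:
            # prop.tag is '{namespace}localname'; keep it as the predicate
            yield (subject, prop.tag, prop.text)
```

Calling `list(triples(doc))` yields two triples whose predicates are the namespace-qualified property names; it is this uniform triple structure that makes RDF documents machine-processable.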
Erich Neuhold received his M. S. in Electronics and his Ph.D. degree in Mathematics and Computer Science at the Technical University of Vienna, Austria, in 1963 and 1967, respectively. Since 1986 he has been Director of the Institute for Integrated Publication and Information Systems (IPSI) in Darmstadt, Germany (a former Institute of the German National Research Center for Information Technology - GMD, since July 2001 a Fraunhofer Institute). He is a member of many professional societies, an IEEE senior member, and currently holds the chairs of the IEEE-CS Technical Committee on Digital Libraries and the Technical Committee on Data Engineering.
Dr. Neuhold is also Professor of Computer Science, Integrated Publication and Information Systems, at the Darmstadt University of Technology, Germany. His primary research and development interests are in heterogeneous multimedia database systems, Web technologies, persistent information and knowledge repositories (XML, RDF, etc.), and content engineering. In content engineering, special emphasis is given to all technological aspects of the publishing value chain that arise for digital products in the Web environment; search, access, and delivery include semantics-based retrieval of multimedia documents. He also guides research and development in user interfaces (including virtual-reality concepts for information visualization), computer-supported cooperative work, ambient intelligence, mobile and wireless technology, Web security, and applications such as e-learning, e-commerce, e-culture, and e-government.
Veljko Milutinovic is on the faculty of the University of Belgrade and has also taught at Purdue University and delivered numerous invited lectures at many other U.S. universities. Dr. Milutinovic is a prolific writer with over 20 books from major U.S. publishers, more than 50 papers in IEEE journals, and about 100 journal papers in total. Most recently his research has included infrastructure for e-business on the Internet, where he combines his expertise in hardware, software, and business administration. He is on the advisory boards of TechnologyConnect of Boston, Massachusetts (www.technologyconnect.com) and BioPop of Charlotte, North Carolina. As a consultant he has worked for Intel, Fairchild, Honeywell, Compaq, IBM, GE, RCA, NCR, AT&T, QSI, DEC, TDT, Aerospace Corporation, Electrospace Corporation, Zycad, Encore, Virtual, MainStreetNetworks, Siemens, Philips, CNUCE, SSGRR, and other large high-tech companies.
FUNDAMENTALS – AND BEYOND – OF NETWORK SECURITY
The exponential growth of cyber intrusions and attacks is threatening all digital systems, including those used in health-care, business, E-commerce, and workplace management. This tutorial on the fundamentals of computer and network security is broken into two parts.
Part I (morning session): Fundamentals of Computer and Network Security for Non-CS People
The first part is oriented toward researchers and practitioners with little or no training in computer science. Attendees will be introduced to the threats and risks of computer networking and to the basic technologies needed to safeguard their home computers and small-business systems.
Part II (afternoon session): Beyond the Fundamentals of Computer and Network Security
The second part is oriented toward people with a rudimentary knowledge of cyber threats and risks who want to improve the security of their systems. The afternoon session covers hacking tools and techniques, with a focus on defensive technologies including cryptography, biometrics, authentication, and email and web security. Anyone attending the morning session will be adequately prepared for the afternoon tutorial.
Paul W. Oman is a Professor of Computer Science at the University of Idaho. He is currently working on cyber security with grants from NSF and NIST. He has published over 100 papers and has received several awards for teaching and speaking at conferences.
PERSISTENT CONVERSATION: PERSPECTIVES FROM RESEARCH AND DESIGN (WORKSHOP)
Persistent conversation has to do with the transposition of ordinarily ephemeral conversation into the potentially persistent digital medium. The phenomena of interest include human-to-human interactions carried out using email, chat, IM, texting, web boards, blogs, wikis, mailing lists, 3-D VR, multimedia computer-mediated communication, etc. Computer-mediated conversations blend characteristics of oral conversation with those of written text: they may be synchronous or asynchronous; their audience may be small or vast; they may be highly structured or almost amorphous; etc. The persistence of such conversations gives them the potential to be searched, browsed, replayed, annotated, visualized, restructured, and re-contextualized, thus opening the door to a variety of new uses and practices.
In particular, we are interested in both efforts at understanding existing forms of CMC, and attempts to design new types of CMC systems. In terms of understanding practice, we are interested in questions ranging from how various features of conversations (e.g., turn-taking, topic organization, expression of paralinguistic information) have adapted in response to the digital medium, to new roles played by persistent conversation in domains such as education, business, and entertainment. And, in terms of design, we are interested in analyses of existing systems as well as designs for new systems that better support conversation.
The workshop will provide background for the Minitrack and set the stage for a dialogue between researchers and designers that will continue during the sessions. The minitrack co-chairs will select a publicly accessible CMC site in advance; before the workshop convenes, each author will be asked to analyze, critique, redesign, or otherwise examine the site using their disciplinary tools and techniques. The workshop will include presentations and discussions of the participants' examinations of the site and its content.
Thomas Erickson is a Research Staff Member and an interaction designer and researcher at IBM's T. J. Watson Research Center in New York. He is interested in understanding how large groups of people interact via networks, and in designing systems that support deep, productive, coherent, network-mediated conversation.
Susan Herring is a Professor of Information Science and Linguistics at Indiana University, Bloomington. Her research applies language-focused methods of analysis to digital conversations in order to identify their recurrent properties and social effects. She is also the editor-in-chief of the Journal of Computer-Mediated Communication.
ENTERPRISE METADATA
The tutorial is intended for academics, researchers, students, and professionals who want to understand the dynamics of implementing enterprise metadata. It may also interest individuals working in enterprise application integration, data architecture, and technical asset management. Metadata has moved beyond the “data about data” definition and into the forefront of organizational architecture.
Enterprise metadata has received a great deal of attention and press over the past few years, as many organizations have attempted to extend their data warehouse successes to the enterprise level. Unfortunately, the integrated tools, controlled environment, and high degree of quality assurance are much harder to achieve at the enterprise level, so enterprise data architects are looking for alternative methods of data integration. Perhaps enterprise metadata holds the key. Historically, definitions of metadata have described structural aspects of the mechanisms that house data, such as the table, column, and attribute characteristics of a relational database management system (RDBMS). At times, metadata as a concept has also included descriptions of the state of data and enumerations and summaries of various views of data. In this tutorial, we review a new definition of metadata as well as the different perspectives on the role and value of metadata within the organization. From theory to implementation, metadata plays an integral role in developing organizational knowledge.
Today, with the advent of technologies such as hypermedia and heuristically-based searching and indexing, a new, broader, more generic definition of metadata is needed. This definition should include the traditional concepts, but it should add the concepts of existence, perspective, modeling, and topicality. A new definition should recognize that much, if not most, of enterprise data is not found in traditional RDBMSs, but rather, it is found in the myriad technological assets and views of those assets that exist throughout the organization.
This course will enable you to:
Todd Stephens is the Director of the Metadata Services Group for the BellSouth Corporation and one of the foremost authorities on metadata management, with over 70 academic and professional publications. He holds degrees in Mathematics and Computer Science (B.S.) and Information Systems (MBA, Ph.D.). Dr. Stephens is a member of IEEE, ACM, DC-Corporate, Upsilon Pi Epsilon, and DAMA International.
IT RESPONSES TO CHANGING CORPORATE GOVERNANCE
Note: Hands-On Demonstration/Use of Strategic Planning Tools.
Participants use their own laptops. Enrollment limited: 50
Recent developments in corporate America have caused Congress to pass laws and issue guidelines to assure the public that bankruptcies such as those of WorldCom and Enron cannot happen in the future. These laws have personal implications not only for the CEOs of companies but for IT managers as well. Because the changes in corporate governance require better, verifiable financial data, companies have a renewed interest in ERP systems as a way to address the legal issues involved. In addition, new tools added to the ERP infrastructure can give students of management in general, and of IT in particular, needed information about the system and its data. In this workshop we review the legal implications for companies in general, and CIOs in particular, of recent legislation such as Sarbanes-Oxley for U.S. companies and Basel II for European-based companies. We end with working examples of data that can be used in classes to show how strategic information can support the management of a company.
We first explain the legal environment in which CEOs and CIOs must now operate. The implications of this environment are then explored in the context of both basic and enhanced ERP systems. We will give hands-on, interactive presentations using Chico's ERP, Business Warehouse (BW), and SEM (Strategic Enterprise Management) systems.
Gail Corbitt is a Professor of Management Information Systems at California State University, Chico and chair of the Department of Accounting and MIS. She is a past Director of the CSU Chico SAP program, has worked on SAP implementation projects for HP and Chevron, and has taught SAP Systems Administration, ABAP and ERP Configuration and Use.
Heather Czech is the U.S. University Alliances Program Manager at SAP, and is responsible for helping member schools use SAP solutions in the classroom. She has 15 years experience in SAP software sales, curriculum product management, teaching SAP functionality, and software implementation and process improvement consulting.
Lorraine Gardiner received her Ph.D. from the University of Georgia in 1989, with a major in management science and a minor in management information systems. She taught from Fall 1988 through Summer 2002 in the Department of Management at Auburn University and joined the MIS faculty at CSU, Chico in Fall 2002. She has developed and teaches a graduate course in Decision Support Systems that uses SEM and BW in the classroom.
Peter Jones joined SAP America as an Educational Consultant in the Training Group. He is a Platinum Educational Consultant for BW and SEM, the author of SAP's Sarbanes-Oxley Act curriculum, and an Educational Material Expert (EME) for CO, BW, SEM, and Auditing. He has carried out numerous BW and SEM implementations and has written articles in FI/CO Expert and BW Expert (where he also advises on articles) on BW configuration, SEM functionality, and SOX requirements for BW and SEM.
Amelia A. Maurizio joined SAP America as an Associate Manager of the University Alliance Program in 1998. In October of 2000, Dr. Maurizio was named Director of SAP’s Education Alliance Program. Prior to joining SAP, Dr. Maurizio spent several years in higher education as Assistant Vice President for Academic Administration and Adjunct Professor of Finance.
MODELING BUSINESS AND REQUIREMENTS USING UML
This intermediate-level tutorial is targeted towards people who are supposed to work on business modeling and requirements in the context of object-oriented software development. The desired attendee background is some familiarity with object-oriented concepts as well as interest in business modeling, requirements or analysis.
This tutorial addresses several important issues in object-oriented approaches that are relevant for business modeling and industrial software development. Applying object-oriented ideas and UML from business models and early requirements onward, it explains their relationship with use cases and requirements modeling.
The Unified Modeling Language (UML) can be used seamlessly during business modeling and requirements engineering, both for creating and representing object-oriented models. The Unified Process (UP) indicates how that can be done, and this will be explained in the proposed tutorial. In addition, we put special focus on related improvements from our own work by addressing the following questions:
For the purpose of illustration, a running example will be used throughout. In addition, the participants will have the opportunity to work on exercises themselves in groups.
Hermann Kaindl joined the Institute of Computer Technology at the Vienna University of Technology in Vienna, Austria, about a year ago. Prior to moving to academia as a full professor, he was a senior consultant with the division of program and systems engineering at Siemens AG Austria, where he gained more than 24 years of industrial experience in software development. He has published three books and more than eighty papers in refereed journals, books, and conference proceedings. He is a senior member of the IEEE, a member of the ACM and INCOSE, and is on the executive board of the Austrian Society for Artificial Intelligence.
MULTI-DISCIPLINARY SYSTEM DESIGN KNOWLEDGE (WORKSHOP)
John M. Carroll, Mary Beth Rosson, Steven R. Haynes
In this workshop we will explore issues related to the integration, management, and use of multi-disciplinary system design knowledge. We need theory, techniques, and tools to guide design knowledge management on complex system development projects, but the challenges are significant and span the engineering, psychological, and social sciences. Participants will be researchers interested in more effective capture, management, and use of the broad range of knowledge collected and generated in complex systems design projects. Systems design and systems design knowledge management are both multi-disciplinary activities, spanning research in human-computer interaction, computer-supported cooperative work, software engineering, intelligent systems, and organizational science, among others.
Potential topics include but are not limited to:
John M. Carroll is Director of the Center for Human-Computer Interaction and the Edward M. Frymoyer Professor of Information Sciences and Technology at Penn State. His books on scenario-based design include Scenario-Based Design: Envisioning Work and Technology in System Design (John Wiley, 1995), Making Use: Scenario-Based Design of Human-Computer Interactions (MIT Press, 2000), and Usability Engineering: Scenario-Based Development of Human-Computer Interaction (Morgan Kaufmann, 2001).
Mary Beth Rosson is Professor of Information Sciences & Technology at Penn State. She is author of numerous articles, book chapters, and tutorials, including Usability Engineering: Scenario-Based Development of Human-Computer Interaction (Morgan Kaufmann, 2001).
Steven R. Haynes is an Assistant Professor of Information Sciences & Technology at Penn State. He researches the relationship between systems design and usability, and especially between design rationale and explanation.
AN INTELLIGENT UBIQUITOUS INFORMATION SYSTEM
The tutorial introduces the concepts and tools necessary for participants to develop intelligent agent-based information systems. A complete introduction to the agent-learning aspect is provided; the potent and flexible support vector machine learning algorithm, upon which the agents base their intelligence, is therefore an integral part of the session. Most importantly, we demonstrate and explain how to use support vector machines to implement a ubiquitous distributed information system in Java. We include all the development tools and open-source software needed for participants to get a head start in this subject area.
The intelligence algorithm, the support vector machine (SVM), is solidly founded in statistical learning theory as established by Vladimir N. Vapnik. Exploiting Vapnik's structures, the tutorial will show how statistical learning theory can be applied in distributed learning environments such as intelligent mobile agents. This knowledge can be used to build distributed regression, classification, and clustering learning machines. As a result, support vector machines address important requirements in terms of scalability, generalization performance, security, and storage. New distributed agent systems that exploit the interconnected web structure provided by the Internet continue to surface, and some of these systems are based on the Java programming language, primarily because Java can run in environments spanning from embedded devices to enterprise servers.
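To illustrate the kind of learning machine discussed above (this is a sketch for readers, not the tutorial's own code), the following pure-Python fragment trains a linear SVM classifier by subgradient descent on the regularized hinge loss. The function names, step size, regularization constant, and toy data are invented for the example; a production agent would use an optimized SVM library.

```python
# Sketch: a linear SVM trained by deterministic subgradient descent on
# the regularized hinge loss  lam/2*|w|^2 + max(0, 1 - y*(w.x + b)).
# All names and constants here are illustrative assumptions.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def train_svm(points, labels, lam=0.01, eta=0.1, epochs=200):
    """Return (w, b) for the decision rule sign(dot(w, x) + b).

    Labels must be +1 or -1. Cycles through the training points,
    shrinking w (the regularizer's subgradient) at every step and
    pushing the boundary away from any point that violates the margin.
    """
    dim = len(points[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            margin = y * (dot(w, x) + b)
            w = [wj * (1.0 - eta * lam) for wj in w]  # weight decay from |w|^2
            if margin < 1:  # point inside the margin: hinge-loss update
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    return 1 if dot(w, x) + b >= 0 else -1
```

For instance, on a linearly separable toy set such as (0,0), (1,0), (0,1) labeled -1 and (3,3), (4,3), (3,4) labeled +1, the learned classifier separates the two groups; the same margin-maximizing update is what scales up, via kernels and decomposition methods, to the full SVM algorithms the tutorial covers.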
The target audience for this interdisciplinary tutorial comes from two communities. The first is the data mining community; the second comprises distributed systems and intelligent agents researchers. To this second group, the introduction of a possible successor to neural networks should be most interesting, as the tutorial will supply the tools needed to use support vector machines in intelligent agents.
We will make no assumptions about previous knowledge by the attendees. However, it would be beneficial to have some knowledge of multi-layer neural networks and support vector machines beforehand.
Doctoral student Rasmus Pedersen (expected graduation September 2004) presents this tutorial under the supervision of Professor Eric Jul. Mr. Pedersen has accepted an offer to continue as Assistant Professor at the Dept. of Informatics, Copenhagen Business School upon graduation.
Eric Jul is associated with the Distributed Systems Lab, Dept. of Computer Science, University of Copenhagen. Professor Jul pioneered the distributed object-oriented system Emerald.
WIRELESS SENSOR NETWORKS
The advent of nano-technology has made it technologically feasible and economically viable to develop low-power devices that integrate general-purpose computing with multi-purpose sensing and wireless communications capabilities. It is expected that these small devices, referred to as sensor nodes, will be mass-produced and deployed, making their production costs negligible. Individual sensor nodes have a small, non-renewable power supply and, once deployed, must work unattended. For most applications we envision a massive deployment of sensor nodes, perhaps in the hundreds or even thousands. Aggregating sensor nodes into sophisticated computational and communication infrastructures, called sensor networks, will have a significant impact on a wide array of applications ranging from military to scientific, industrial, healthcare, and domestic, establishing ubiquitous wireless sensor networks that will pervade society and redefine the way in which we live and work.
Wireless sensor networks (WSNs) differ in many fundamental ways from Mobile Ad-hoc Networks (MANETs). Among the differences that may impact network and protocol design are the following:
• In most applications, WSNs must work unattended;
• Much of the processing in WSNs is data-centric;
• Many WSNs are mission-oriented and therefore may need to be programmed or reprogrammed to achieve mission success; and
• Some monitoring applications require distributed calibration across the network.
Early recognition of these and other differences between WSNs and other wireless networks, including MANETs, has motivated the research community to develop solutions specifically designed for WSNs. Indeed, recent years have seen a flurry of activity in the arena of WSNs and their applications. The results are scattered across a surprisingly large array of conference proceedings and workshops, so, not surprisingly, many practitioners and researchers find it hard to synthesize existing results in this area and to identify the critical open issues that need to be addressed. Furthermore, the experience gained from real-life experiments that have validated different architectural models and protocols for WSNs, or brought out their limitations, has not been well reported or is not widely available.
The proposed tutorial will provide participants with an up-to-date survey of sensor networks and their various applications. It is intended for a broad audience: both those who are interested in some aspect of wireless networking complementary to their own activity, and people who want to approach and get a general view of this new and booming area. The tutorial will consist of five foci:
Overview – This section will provide a high-level overview of wireless sensor networks, covering key concepts and some simple applications. This will give participants a broad overview of the material and how it may be relevant to them, along with pointers to the aspects of wireless sensor network research that may be relevant in a particular context.
Basic Techniques - This section will discuss the basic issues and techniques that form the core of wireless sensor network research. Each topic will be covered at a level compatible with the assumed level of the audience. Of course, a more sophisticated audience can be served with a more in-depth set of topics. This will be decided dynamically.
Advanced Techniques - This section will discuss the advanced techniques and algorithms that represent the state-of-the-art in wireless sensor network research and applications. How far this will go depends to a large extent on the level and interests of the audience.
System and Software Issues – Understanding and determining the problem to solve is only half the problem. There are significant system and software challenges that need to be met for solving a real world problem. This section will describe existing architectures, operating systems and integration issues required for developing wireless sensor-based applications.
Applications/Case Studies – No tutorial is complete without a section on real-world applications of wireless sensor networks. This section will cover applications of wireless sensor networks to practical scenarios in many disciplines. An interactive approach will be adopted.
Stephan Olariu is a full professor of Computer Science at Old Dominion University, Norfolk, Virginia, and a world-renowned technologist in parallel and distributed systems, parallel and distributed architectures, and networks. He has visited more than 80 universities and research institutes around the world, lecturing on topics ranging from wireless networks and mobile computing to biology-inspired algorithms and applications, telemedicine, wireless location systems, and demining. Professor Olariu is the Director of the Sensor Networks Research Group at Old Dominion University. He earned his Ph.D. in Computer Science in three years at McGill University, Montreal. He has co-authored the books Solutions to Parallel and Distributed Computing Problems: Lessons from Biological Sciences (with A. Zomaya and F. Ercal; Wiley and Sons, New York, 2000, ISBN 0471353523), Parallel Computation in Image Processing (with S. Tanimoto; Cambridge University Press, to appear 2004), and Wireless Sensor Networks and Applications (Wiley and Sons, New York, 2004), with four more books in preparation. He has also published 100+ journal articles and 100+ conference articles. He is an Associate Editor of Networks and the International Journal of Foundations of Computer Science, serves on the editorial board of the Journal of Parallel and Distributed Computing, and served (until January 2003) as Associate Editor of IEEE Transactions on Parallel and Distributed Systems and VLSI Design.
ADVANCES IN TECHNOLOGY-SUPPORTED LEARNING
Jay F. Nunamaker, Jr. and Eric Santanen
Some things just come as naturally as breathing. For everything else there is teaching…and…learning. For centuries, technology for learning did not advance beyond slates and chalk, pencils, paper, and the printing press. Formal learning processes evolved to maximize the benefits those technologies could provide.
Electronic and computer technologies gave rise to a renaissance of approaches to learning. Technical researchers are exploring everything from hyperlinked multimedia presentations to fully immersive virtual worlds, and in many cases the findings are quite promising. Social and cognitive researchers report that technological innovations appear to be accompanied by substantial interpersonal, social, and institutional changes, ranging from fully distributed student bodies to fully asynchronous online university degree programs.
In this tutorial five action researchers from around the globe discuss five very different approaches to using technology to support learning. Their presentations will deal with the pragmatic technical, organizational, and social issues they find in the field. They will demonstrate new technologies, and offer tips from the trenches about implementing and studying technology-supported learning.
Jay Nunamaker is Regents and Soldwedel Professor of MIS, Computer Science and Communication, and Director of the Center for the Management of Information at the University of Arizona, Tucson. His research on group support systems addresses behavioral as well as engineering issues and focuses on theory as well as implementation. Dr. Nunamaker founded the MIS department (3rd and 4th nationally ranked MIS department) at The University of Arizona and established campus-wide instructional computer labs that have attracted academic leaders in the MIS field to the university faculty. He received his Ph.D. in systems engineering and operations research from Case Institute of Technology, an MS and BS in engineering from the University of Pittsburgh, and a BS from Carnegie Mellon University. He has been a registered professional engineer since 1965.
Eric Santanen is an Assistant Professor of Management Information Systems in the Department of Management at Bucknell University, where he teaches introduction to information systems, management information systems, and capstone courses. He earned his Ph.D. from the University of Arizona in June 2000, where he was involved in several research projects, including Brainstorming and Creativity, Colors and Interface Design, and a Scenario Modeling Tool. Dr. Santanen's BS and MS are from the New Jersey Institute of Technology. His primary research interests are Group Support Systems (using GSS for collaborative requirements elicitation, collaborative process modeling, and increasing creative ideation); Systems Analysis and Design (designing software for distributed collaboration); and Human Factors (using color in interface design, issues in the distributed, collaborative use of software, and human-computer interaction).
SUCCESSFUL SCIENCE FROM THE LOGICAL POSITIVIST PERSPECTIVE: A WAY OF THINKING – A WAY OF WRITING
Robert O. Briggs and Douglas L. Dean
Positivism and interpretivism are mental tool sets for different purposes, but with at least one common goal: to help keep us academics from concluding (and then publishing) things that turn out to be embarrassingly wrong-headed. If we wish to study “the inferences that people draw and the meanings people ascribe to the things other people say and do,” then the mental disciplines of interpretivism can help, if we wield them with intelligence and insight. Likewise, if we form questions of cause and effect, then logical positivist thought, when wielded with intelligence and insight, can help us rule out misguided or self-interested explanations.
This tutorial offers a useful way to think about scientific enquiry into the design and use of systems. It focuses on the simple structure of the logical positivist approach, and the way that structure plays out in a paper. Papers based on the logical positivist structure are straightforward to write, and they can be nearly bulletproof to reviewers. They can be exciting, sometimes even inspiring to read, and they can increase the likelihood that people, organizations, and societies will survive and thrive. Papers that adhere to the forms of logical positivism without embracing its underlying structure are typically difficult to write, agonizing to get past the reviewers, dull to read, and ultimately, may make no difference to the world.
We will connect the dots from a simple statement of the positivist philosophy to the nature and structure of a theory, to the derivation of a hypothesis, to the nature of an experiment, to the meaning of data, to worthwhile and very publishable papers. Along the way, we will also attempt to dispel some of the prevailing myths about logical positivism that ascribe it both too much and too little merit.
Robert O. Briggs is Research Coordinator for the Center for the Management of Information at the University of Arizona, is on the Faculty of Technology, Policy, and Management at Delft University of Technology in The Netherlands, and is Director of R&D for GroupSystems.com. Since 1990 he has investigated the theoretical and technological foundations of collaboration, and has applied his findings to the design and deployment of new technologies, workspaces, and processes for high-performance teams. He and his colleagues are responsible for numerous recognized theoretical breakthroughs and technological milestones. In his field research he has created team processes for the highest levels of government, and has published more than 60 scholarly works on the theory and practice of collaborative technology. He earned his Ph.D. in MIS at the University of Arizona and holds a BS and an MBA from San Diego State University.
Douglas L. Dean is an Associate Professor and David and Knight Fellow at the Marriott School of Management at Brigham Young University. He is also research coordinator for the Rollins Center for E-business at Brigham Young University. Dr. Dean’s research interests include electronic commerce technology and strategy, software project management, requirements analysis, and collaborative tools and methods. He has published over 30 scholarly works and is a leading scholar in collaborative tools to support modeling, business analysis, and systems analysis and design. He received his Ph.D. in MIS from the University of Arizona and holds a Masters of Accountancy from Brigham Young University.
REPEATABLE SUCCESSES WITH TEAM TOOLS AND PROCESSES
Robert O. Briggs, Gert-Jan de Vreede, Jay F. Nunamaker, and Bob Harder
Organizations are limited in their creation of goods and services by the competence and capacity of their members: each individual can assimilate and understand only so much, reason only so much, and take only so many actions in a day. By collaborating, people can accomplish more than they could as separate individuals. Yet many organizations struggle to make collaboration work. They often resort to implementing technologies, while experience shows that technology alone is seldom the answer. What is needed is the conscious design of effective collaboration processes, followed by the design of collaboration technologies to support those processes. This is the realm of an exciting new research area, Collaboration Engineering, which sits at the crossroads of several disciplines: information systems, computer science, systems engineering, organizational behavior, and cognitive psychology.
Collaboration Engineering is a design approach for recurring collaboration processes that can be transferred to groups, enabling those groups to sustain the processes themselves using collaboration techniques and technology. The unique challenge for a collaboration engineer is to design a recurring collaboration process once and then transfer it so that it can be facilitated by the practitioners themselves, without the ongoing intervention of professional facilitators.
Collaboration Engineering is in its infancy, but early results are promising and exciting. This tutorial will bring a number of researchers from around the world to the stage to discuss various aspects of the Collaboration Engineering approach, including:
· The ‘Way of Thinking’ – the theoretical and philosophical foundations of the design approach.
· The ‘Way of Working’ – the tasks (steps, phases, and activities) that a collaboration engineer has to carry out to design repeatable collaboration processes.
· The ‘Way of Modeling’ – the visual and verbal formalisms for representing subtle and sophisticated collaboration processes quickly and simply.
· The ‘Way of Controlling’ – the managerial aspects of the design approach, including the evaluation of its effectiveness and efficiency.
The presenters in this tutorial will present the latest thinking about topics such as state-of-the-art collaboration process building blocks (thinkLets), the latest technology for creating purpose-built collaborative applications on the fly, and examples of repeatable collaboration processes that have been adopted by a number of organizations for mission critical tasks such as risk self-assessments, requirements negotiation, and military command-and-control.
Robert O. Briggs is Associate Professor at Delft University of Technology in the Netherlands, is Research Coordinator for the Center for the Management of Information at the University of Arizona, and is Director of R&D for GroupSystems.com. Since 1990 he has investigated the theoretical and technological foundations of collaboration, and has applied his findings to the design and deployment of new technologies, workspaces, and processes for high-performance teams. He and his colleagues are responsible for numerous recognized theoretical breakthroughs and technological milestones. In his field research he has created team processes for the highest levels of government, and has published more than 60 scholarly works on the theory and practice of collaborative technology. He earned his Ph.D. in MIS at the University of Arizona, and holds a BS and an MBA from San Diego State University.
Gert-Jan de Vreede is a professor at the University of Nebraska at Omaha. Previously, he was head of the Department of Systems Engineering at Delft University of Technology in the Netherlands, where he earned his PhD and remains an affiliated fellow. Both in Delft and Omaha, he established a successful program of Group Support Systems research. His research interests include the design of collaboration processes for mission-critical tasks, the application of collaboration technologies to facilitate organizational re-engineering, and the adoption and diffusion of GSS in different socio-cultural environments. His articles have appeared in journals such as Journal of MIS, Journal of Decision Systems, Holland Management Review, DataBase, Group Decision & Negotiation, Communications of the ACM, Simulation, and Information Technology for Development.
Jay Nunamaker is Regents and Soldwedel Professor of MIS, Computer Science and Communication, and Director of the Center for the Management of Information at the University of Arizona, Tucson. His research on group support systems addresses behavioral as well as engineering issues and focuses on theory as well as implementation. Dr. Nunamaker founded the MIS department at The University of Arizona (ranked 3rd and 4th nationally) and established campus-wide instructional computer labs that have attracted academic leaders in the MIS field to the university faculty. He received his Ph.D. in systems engineering and operations research from Case Institute of Technology, an MS and BS in engineering from the University of Pittsburgh, and a BS from Carnegie Mellon University. He has been a registered professional engineer since 1965.
Robert Harder is a Computer Scientist for the US Army Research Laboratory. He serves as a researcher at the US Army Battle Command Battle Laboratory at Fort Leavenworth, Kansas. He is currently applying Group Support Systems to military decision-making processes envisioned for the future Army. He earned his MS in Industrial Engineering at North Carolina Agricultural and Technical State University and holds a BA in Mathematics from the University of Florida.
LITERACY IN THE INTERNET AGE: WHAT IT MEANS TO READ AND WRITE IN MEDIA
What does it mean to be literate in today's world? Reading and writing prose used to be enough. But the world is fast going post-textual. Some claim that you can't be literate in the internetworked world if you can't deeply read many media genres... and create them as easily as jotting a note.
The traditional notions of a document as an "office memo" or "textbook chapter" have changed: a document is now likely to be an email message that includes pointers to web sites, references to IRC channels, images in the body, interactive content (e.g., Flash or Java-based media), audio annotations, and video. What's more, these definitions are constantly in flux. Media stability and long-term writing skills in a particular media type and genre are luxuries. We have to reconsider what it means to have "documents" in the workplace as media types multiply and increasingly become real-time and connected in nature. And as importantly, how is it that we come to create and use such diverse kinds of media?
It has become increasingly apparent that corporate and educational content is distributed via focused narrow-casting or via special web documents. Meetings and presentations are being captured on digital video for later reference, with indexing and search tools becoming as common as the jog shuttle, fast forward, and rewind. Rather than offering training classes, companies are recording training materials and providing them on demand via a broadband network service. In the classroom, lectures are now commonly recorded for later study or use at another time and place. Going beyond stored content, teleconferencing and live connections are clearly seen as part of the media mix: a video connection between meeting participants is just another kind of media.
In all arenas we are seeing entirely new practices emerge in literate media creation. The introduction of rich media types, in an intrinsically networked media world, with both synchronous and asynchronous aspects, promises to fundamentally reconfigure what it means to be a writer and a reader.
While there is a rich set of tools and idioms for creating different kinds of media, many issues remain in defining new media types, creating the content, and developing methods of using the media content. This symposium will bring together leading figures in the world of media creation for an engaging discussion of what all this fusion and fission mean, and how the sudden shifts and changes will affect our collective ideas of literacy for the next decade.
PROBLEM-BASED LEARNING SYSTEMS AND TECHNOLOGIES (WORKSHOP)
Ben Martz and Morgan Shepherd
In this workshop we will demonstrate problem-based learning (PBL) techniques and processes on different technology platforms. Attendees will learn how to implement their particular research ideas in technological environments. For example, a participant may want to use technology to implement a structured debate technique. Through discussion and demonstration, a prototype of each technology-based technique will be developed. We envision the research topics centering on using technology to support existing PBL techniques and on new PBL techniques enabled by technology.
Ben Martz is an Associate Professor of Information Systems at the University of Colorado at Colorado Springs with interests in groupware, entrepreneurship and distance education. Ben has published his research in MIS Quarterly, Decision Support Systems, the Journal of Management Information Systems, Decision Sciences Journal of Innovative Education, Journal of Cooperative Education and Journal of Computer Information Systems.
Morgan Shepherd is an Associate Professor of Information Systems at the University of Colorado at Colorado Springs with interests in telecommunication and computer literacy with a primary research emphasis on making distributed groups productive. His research has appeared in the Journal of Management Information Systems, Informatica, and Journal of Computer Information Systems.
STRATEGIC AND ECONOMIC METHODS FOR ASSESSMENT OF IV&V ACTIVITIES
Harako Nakao, Christina D. Moats, Masa Katahira, Daniel N. Port
Independent verification and validation (IV&V) is a growing and evolving discipline that must constantly re-assess itself if it is to continue and improve. However, some of the most basic questions in this pursuit have proven formidable, as they are economic in nature.
This workshop seeks to establish and clarify the issues in developing effective assessment for the evolution of IV&V practice. Attendees will prepare a position paper on one or more of the topics listed below, and selected papers will be presented to stimulate discussion. The workshop is designed to provide a forum for dialogue about the challenges and concerns within the IV&V community, while also providing opportunities to learn from the experiences of others. It is anticipated that this workshop will also present opportunities for future collaboration.
For further information about the workshop, please go to http://www.jamss.co.jp/docs/HICSSprogram.htm
Japan Manned Space Systems Corporation
NASA IV&V Facility
Japan Aerospace Exploration Agency
University of Hawai'i