Power, D. and S. Kaparthi, "Building Web-based Decision Support
Systems", Studies in Informatics and Control, Vol. 11, Number 4, Dec.
2002, pp. 291-302, URL: http://www.ici.ro/ici/revista/sic2002_4/art1.pdf
Technology Hits a Midlife Bump, an article from the New York Times
From ACM News, May 2, 2003
"Making Intelligence a Bit Less Artificial"
New York Times (05/01/03) P. E1; Guernsey, Lisa
- Amazon, NetFlix, and other online retail services rely on automated recommender systems to anticipate customer purchases based on past choices; however, a February report from Forrester Research found that just 7.4 percent of online consumers often bought products recommended by such systems, roughly 22 percent ascribed value to those recommendations, and about 42 percent were not interested in the recommended products. Improving these results requires enhancing recommendation engines with human intervention, according to TripleHop Technologies President Matt Turck. One of the key ingredients of today's recommendation technology is collaborative filtering, in which a buyer is matched to others who have bought or highly rated similar items. Commonplace problems with this methodology include cold starts, in which predicting purchases is difficult because the system lacks a large database of people with similar tastes, and the popularity effect, whereby the computer delivers recommendations that are pedestrian and prosaic. Some companies try to avoid such problems by adding a human element: Barnesandnoble.com, for instance, employs an editorial staff to tweak recommendations. "If it is not vetted and monitored by humans and not complemented by actual hand-selling, as we say in the book industry, it doesn't feel like there is anybody there," notes Barnesandnoble.com's Daniel Blackman. Some recommendation engines, such as Amazon's, can improve results with customer input via continuous editing of consumer profiles and special features--alerting the e-tailer not to make recommendations based on a purchase that is a gift for someone else, for example. Some companies also want recommender systems to prioritize surplus items in order to better manage inventory, a strategy that could engender consumer distrust, say software developers.
http://www.nytimes.com/2003/05/01/technology/circuits/01reco.html
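The collaborative filtering the article describes can be sketched in a few lines of Python: match a buyer to other users via cosine similarity over shared ratings, then score unseen items by similarity-weighted ratings. The users, items, and ratings below are invented for illustration; production recommenders work at vastly larger scale and with more robust models.

```python
import math

# Toy user-item rating matrix (hypothetical data): user -> {item: rating}
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "book_c": 4},
    "bob":   {"book_a": 4, "book_b": 3, "book_d": 5},
    "carol": {"book_b": 2, "book_c": 5, "book_d": 4},
}

def cosine_similarity(u, v):
    """Cosine similarity between two users' rating vectors."""
    common = set(u) & set(v)
    if not common:
        return 0.0  # the "cold start" problem: no overlap, no signal
    dot = sum(u[i] * v[i] for i in common)
    norm_u = math.sqrt(sum(r * r for r in u.values()))
    norm_v = math.sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def recommend(target, ratings, top_n=2):
    """Score items the target has not rated by the similarity-weighted
    ratings of other users, and return the top-scoring items."""
    scores, weights = {}, {}
    for other, other_ratings in ratings.items():
        if other == target:
            continue
        sim = cosine_similarity(ratings[target], other_ratings)
        for item, r in other_ratings.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * r
                weights[item] = weights.get(item, 0.0) + sim
    ranked = sorted(
        ((s / weights[i], i) for i, s in scores.items() if weights[i] > 0),
        reverse=True,
    )
    return [item for _, item in ranked[:top_n]]
```

Note how the "popularity effect" follows naturally from this scheme: items rated highly by many similar users dominate the weighted scores, so bland best-sellers tend to surface.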
"Artificial Intellect Really Thinking?"
Washington Times (05/01/03) P. C9; Reed, Fred
- A computer can be labeled as artificially intelligent, but the intelligence--if indeed that is what it is--actually resides in the program, writes Fred Reed. However, he points out that such programs, once deconstructed, consist of incremental steps that by themselves do not indicate true intelligence. A program such as the one IBM's Deep Blue used to beat chess champion Garry Kasparov in 1997 works out all potential moves from a given board position simply and mechanically via a "move generator." Just as mechanical are the rules that the program uses to choose the optimal maneuver. However, Deep Blue could be rated as intelligent by mathematician Alan Turing's supposition that an intelligent computer can interact with a person so well that that person cannot distinguish it from a human being. Reed writes that most people can identify intelligence without clearly defining it, so the term itself is subject to interpretation. "For practical purposes, and certainly in the business world, the answer seems to be that if it seems to be intelligent, it doesn't matter whether it really is," he notes. The convergence of speech recognition, robotic vision, and other technologies is paving the way for practical machines that at least appear to be intelligent, such as robots designed to care for the elderly in Japan.
http://www.washtimes.com/business/20030501-20933620.htm
"The Great IT Complexity Challenge"
NewsFactor Network (04/30/03); Brockmeier, Joe
- Autonomic computing promises to clear up complexity in company IT operations, freeing people from mundane maintenance tasks and handing those functions over to computers themselves. Major IT vendors are latching on to autonomic computing not only as a way to reduce complexity and save money, but also to make the IT infrastructure more adaptive to business demands. IBM autonomic computing director Miles Barel says there are four required elements of autonomic computing: Self-configuration, self-healing, self-optimization, and self-protection. He also gives a maturation schedule for autonomic computing and says most enterprises have deployed managed services, but do not use prediction or adaptive systems. Gartner analyst Tom Bittman says that standards are a looming issue, and notes that customers are wary of buying all of their products from one vendor in order to get autonomic capability; he explains that autonomic computing eventually will mean less room for actual staff in the IT department since systems will largely run themselves, but at the same time the purpose is primarily increased flexibility and performance, not saved costs. Sun Microsystems' Yael Zheng says autonomic computing will move IT employees up the ladder in terms of what they contribute to the business. Instead of maintaining hardware, they can focus on more critical functions such as writing business applications. Autonomic computing involves virtualizing the IT infrastructure, which also boosts utilization rates since resources can be provisioned across the enterprise unhindered.
http://www.newsfactor.com/perl/story/21393.html
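The "self-healing" element Barel lists can be illustrated with a minimal monitor-and-repair loop; the function names and structure below are a hypothetical sketch, not IBM's actual architecture.

```python
import time

def autonomic_loop(check_health, heal, poll_interval=1.0, rounds=3):
    """Minimal 'self-healing' control loop: poll a health check and invoke
    a repair action when it fails. Returns the number of heal actions taken.
    (Illustrative only; real autonomic managers add prediction, policy,
    and self-optimization on top of this monitor-act cycle.)"""
    healed = 0
    for _ in range(rounds):
        if not check_health():
            heal()  # e.g. restart a service or reprovision a resource
            healed += 1
        if poll_interval:
            time.sleep(poll_interval)
    return healed
```

A caller would supply `check_health` and `heal` hooks for a specific component, which is exactly the kind of mundane maintenance task the article says autonomic systems take over from people.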
"Personalizing Web Sites With Mixed-Initiative Interaction"
IT Professional (04/03) Vol. 5, No. 2, P. 9; Perugini, Saverio; Ramakrishnan, Naren
- Saverio Perugini and Naren Ramakrishnan of Virginia Polytechnic Institute believe that a truly personalized Web site will personalize the user's interaction, and a mixed-initiative architecture where the user can control the interaction is the best option. They characterize browsing as an example of directed dialogue because the Web site takes the initiative by providing an array of hyperlinked choices that the user must respond to; the disadvantages of such a model, which usually consists of a Web site with multiple browsers, include supporting an exhaustive number of potential browsing scenarios and over-specification of the personalization goal. Perugini and Ramakrishnan believe plugging an out-of-turn interaction toolbar into the browser will support mixed-initiative interactions and enable the user to take charge within the Web site: This will eliminate the site's need to directly uphold all potential interfaces within the hyperlink framework, reduce the interface's clutter, and make the interaction more akin to a natural dialogue. Web site personalization is streamlined to a partial evaluation of a representation of interaction, and the authors write that an Extensible Stylesheet Language Transformations (XSLT) engine is a good choice for easy deployment. Programs are first modeled in XML, and then an XSLT style sheet is outlined for each user input and applied on the XML source; dead ends are shaved off via additional post-processing transformation, while high-level XSLT functions can expedite link label ordering on recreated Web pages, among other processes. Perugini and Ramakrishnan claim that this approach can unify other types of Web site personalization and model dynamic content. Other areas they are investigating include inverse personalization, multimodal Web interface design, and mixed-initiative functionality based on VoiceXML. Transformation-based personalization strategies will become more important as wireless devices and the display of only the most relevant data on handheld computers become prevalent, the authors contend.
http://www.computer.org/itpro/it2003/f2009.pdf
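The pipeline the authors describe, an XML site model progressively pruned by XSLT style sheets as the user supplies input, might look roughly like the identity-transform sketch below. The `section` element, `topic` attribute, and the value "gardening" are hypothetical stand-ins, not taken from the authors' implementation.

```xml
<?xml version="1.0"?>
<!-- Sketch: copy the XML site model, but drop any branch that is
     inconsistent with the user's out-of-turn input ("gardening"). -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Identity transform: copy every node and attribute by default -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- Partial evaluation: an empty template removes non-matching branches,
       which is how "dead ends are shaved off" from the hyperlink structure -->
  <xsl:template match="section[@topic != 'gardening']"/>

</xsl:stylesheet>
```

Applying one such style sheet per user input, in sequence, leaves only the pages reachable under everything the user has said so far.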
From ACM News, April 30, 2003
"Who Loves Ya, Baby?"
Discover (04/03) Vol. 24, No. 4; Johnson, Steven
- Social-network software that visualizes the interactions and relationships within groups of people promises to radically transform large organizations. Mapping social interactions has become easier thanks to the advent of email, chat rooms, Web personals, and bulletin boards. Software designer Valdis Krebs' InFlow--the end result of 15 years' development--provides organizational maps that resemble molecular configurations, with each individual molecule representing an employee; these maps are derived from employee surveys used to determine their collaborative relationships and work patterns. MIT grad student Danah Boyd and programmer Jeff Potter have developed Social Network Fragments, a software program that studies emails sent and received, and from them constructs a map of social networks. The program can outline not only the size of different social groups but the bonds between them. "If we're going to spend more of our social life online, how can we improve what that experience feels like?" asks Judith Donath of MIT Media Lab's Sociable Media Group. "You have this enormous archive of your social interactions, but you need tools for visualizing that history, for feeling like you're actually inhabiting it." Social-network software can also be a useful tool for political analysis.
http://www.discover.com/apr_03/feattech.html
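A toy version of what Social Network Fragments does, deriving tie strengths and social clusters from a message log, can be sketched as follows; the people and messages are invented, and the real system is far more sophisticated.

```python
from collections import Counter

# Hypothetical message log: (sender, recipients) pairs standing in
# for a mailbox of sent and received email.
messages = [
    ("ann", ["bob", "cara"]),
    ("bob", ["ann"]),
    ("cara", ["ann", "bob"]),
    ("dave", ["erin"]),
    ("erin", ["dave"]),
]

def tie_strengths(messages):
    """Count messages exchanged between each unordered pair of people;
    higher counts indicate stronger bonds."""
    ties = Counter()
    for sender, recipients in messages:
        for r in recipients:
            ties[frozenset((sender, r))] += 1
    return ties

def clusters(ties):
    """Union-find grouping: people joined by any tie share a cluster,
    giving the distinct social groups in the log."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for pair in ties:
        a, b = tuple(pair)
        parent[find(a)] = find(b)
    groups = {}
    for person in parent:
        groups.setdefault(find(person), set()).add(person)
    return list(groups.values())
```

Plotting each cluster with edge thickness proportional to tie strength yields exactly the kind of molecule-like map the article describes.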
From ACM News, April 25, 2003
"TeleLiving: When Virtual Meets Reality"
Futurist (04/03) Vol. 37, No. 2, P. 44; Halal, William E.
- New technological trends are bringing TeleLiving--conversational human-machine interaction that facilitates a smoother, more comfortable way to educate, shop, do one's job, and even socialize--closer to reality. High-speed broadband communications will supply TeleLiving's backbone, while increasing computational power through the advent of two-dimensional and later three-dimensional chips will also be beneficial. However, there is little consumer demand for broadband or additional computing power because of limited functionality and overcomplicated technology. The key to boosting demand lies in the development of a killer app, and George Washington University's William E. Halal believes TeleLiving's killer app will be the conversational interface. Such a tool will combine animated digital characters or avatars with speech-recognition technology, while the interface itself will be accessible wherever a liquid crystal display (LCD) screen can be installed. Roles the conversational interface is expected to play include a virtual assistant that can help a person set up doctor appointments and consultations remotely, for instance; an enhancement to audiovisual communications that supports high-fidelity real-time images; and a bridge across the digital divide between technological haves and have-nots upheld by the elimination of the need for computer literacy. For TeleLiving to come to fruition, artificial intelligence must advance to the point where a computer can model the cognitive capacity of the human brain, a goal that conservative estimates expect to be reached by 2020.
An example of DSS for Handheld Devices.
From ACM News, April 14, 2003
"GUIs Face Up to the Future"
VNUNet (04/03/03); Sharpe, Richard
- A number of U.K.-based companies are working
to radically enhance the function and usability of graphical computer interfaces
(GUIs). Visual Information (VI) is a family-owned affair whose flagship product
is Vi Business Analyst (ViBA), a database front end that presents a geographically based
representation of data, which has the potential to save users vast sums of
money. Edinburgh University spinoff Rhetorical Systems is developing a text-to-speech
computer tool based on language processing research spearheaded by CEO Marc
Moens and CTO Paul Taylor; with such a tool, a user can ask the computer
a question, thus prompting a spoken answer. Rhetorical Systems' goal is the
rapid and cost-effective generation of voices, which can be delivered in
a variety of accents. Meanwhile, Lexicle is building animated "agents" that
can respond to inquiries in real time and in a user-friendly format. A key
component of Lexicle's tool is Rhetorical Systems' voice-generation software.
Lexicle's offering is based on artificial intelligence research carried out
by Patrick Olivier and Suresh Manandhar, with the former's focus being the
generation of mouth movements by an animated figure in response to typed-in
queries. Moens notes that GUI developers should launch products based on
their readiness, rather than the economic climate.
http://www.vnunet.com/Features/1139942
"Smart Tools"
BusinessWeek 50 (04/03) No. 3826, P. 154; Port, Otis; Arndt, Michael; Carey, John
- Artificial intelligence is being
employed in many sectors, both public and private, and has led to significant
productivity and efficiency gains across the board. Financial institutions
have reduced incidents of credit-card fraud through the application of neural
networks, which feature circuits arranged in a brain-like configuration that
can infer patterns from data. Several AI technologies--neural nets, expert
systems, and statistical analysis--are implemented in data-mining software
that retailers such as Wal-Mart use to sift through raw data in order to
forecast sales and plan appropriate inventory and promotional strategies.
The medical sector is also taking advantage of data-mining: One application
involves a collaboration between IBM and the Mayo Clinic to detect patterns
in medical records, while another project uses natural-language processing
to map out the "grammar" of amino acid sequences and match them to specific
protein shapes and functions. Government organizations such as the Defense
Department and the National Security Agency are using AI technology for several
efforts related to national security, such as the Echelon telecom monitoring
system. The Defense Advanced Research Projects Agency (DARPA) is a leading
AI research investor, and the breakthroughs that come out of DARPA-funded
projects are more often than not put to civilian rather than military use.
IBM, Hewlett-Packard, Sun Microsystems, and Unisys are also intensely focused
on AI, especially in their pursuit of self-healing, autonomous computer systems
that can automatically adjust their operations in response to fluctuating
demands.
From ACM News, April 11, 2003
"Designing New Handhelds to Improve Human-Computer Interaction"
SiliconValley.com (04/09/03); Gillmor, Dan
- Professionals
in the field of human-computer interaction gathered in Ft. Lauderdale, Fla.,
this week to discuss the latest research projects involving handheld devices.
The annual ACM conference on human-computer interaction, CHI 2003, gave the
world a glimpse of futuristic handhelds. University of California-Berkeley
researcher Ka-Ping Yee used the conference to unveil his "peephole display,"
which virtually enlarged the display of a Palm device. The display acts as
a small window hovering over a larger display, allowing the handheld user
to view a portion of an image. Users move the handheld up-and-down or side-to-side
to see other areas of an image. Researchers at the University of Maryland
and Microsoft have created "DateLens," a smart calendar that offers handheld
users complex scheduling features, such as zooming in so they can highlight
competing events on their schedule. However, there are questions about whether
there is consumer demand for such scheduling features. A research team
at Carnegie Mellon University and Maya Design are experimenting with devices
based on the PocketPC and mobile phones that would add remote control capabilities
for lights and other household appliances to handhelds. Meanwhile, shorthand
for handhelds, the work of researchers at IBM and in Sweden, appeared to
be one of the more challenging research projects because it would require
handheld users to learn a new way of writing.
http://www.siliconvalley.com/mld/siliconvalley/5593541.htm
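The peephole idea, a small physical window moved over a larger virtual canvas, reduces to simple viewport arithmetic; the following sketch uses hypothetical pixel dimensions.

```python
def peephole_view(canvas_w, canvas_h, win_w, win_h, x, y):
    """Clamp the device's physical position to a valid viewport over the
    larger virtual canvas, returning the visible rectangle (x1, y1, x2, y2).
    Moving the handheld changes (x, y); the canvas itself stays fixed."""
    x = max(0, min(x, canvas_w - win_w))
    y = max(0, min(y, canvas_h - win_h))
    return (x, y, x + win_w, y + win_h)
```

For example, a 160x160 Palm-sized window over a 1000x800 canvas shows only 3 percent of the image at a time, which is why physical movement becomes the scrolling mechanism.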
From Edupage, April 9, 2003
COMPUTERS THAT MONITOR USERS
- Researchers at the Human Media Lab at Queen's University in Ontario
have designed a computer that monitors its user to help with time
management. They have designed devices that determine how much
attention a person is paying to his or her PC and the relative
importance of each message received. One device is an eye contact
sensor the computer employs to determine if the user is present and
looking at the screen to decide if and when to make contact with the
user. The lab's director, Dr. Vertegaal, said, "We now need computers
that sense when we are busy, when we are available for interruption and
know when to wait their turn--just as we do in human-to-human
interaction." BBC, 8 April 2003 http://news.bbc.co.uk/2/hi/technology/2925403.stm
From Edupage, April 4, 2003
"Software Uses Pictures to Represent Info People Monitor"
EurekAlert (04/04/03)
- Research
at the Georgia Institute of Technology puts personal information updates
on a separate networked display in a way that does not distract the user,
but provides a comprehensive and eye-pleasing ambiance. Associate professor
of computing John Stasko and his students are testing the application, called
InfoCanvas, and plan to present their work at the ACM's CHI 2003 meeting.
Instead of cluttering a person's main monitor with information or news they
are tracking, the idea is to consolidate that changing information in a theme-based
picture on another display. Prototype themes include an aquarium setting,
mountain camp site, or desert. Users designate which visual elements will
represent certain information, such as stock market indexes, new emails,
temperature, or traffic congestion. Stasko's own InfoCanvas, for example,
has a seagull that flies lower or higher according to fluctuations in the
Dow Jones index. Stasko suggests InfoCanvas could be used on a wall-mounted
display, where it would serve as an informational painting. Users could touch
elements in order to read more information in a pop-up box, or follow a Web
link to the original source of news. The research team is already testing
InfoCanvas with three users and plans more studies to determine if InfoCanvas
provides easier access to information than a traditional text-based Web portal,
for instance.
http://www.eurekalert.org/pub_releases/2003-04/giot-sup040303.php
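The kind of mapping InfoCanvas performs, translating a data value such as the Dow Jones index into a visual property such as a seagull's altitude, is a simple linear scaling; the numeric ranges below are invented for illustration.

```python
def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map a data value from [lo, hi] into an output range,
    clamping values that fall outside the input range."""
    value = max(lo, min(hi, value))
    return out_lo + (value - lo) * (out_hi - out_lo) / (hi - lo)

# Hypothetical mapping: Dow level 7000-11000 -> seagull altitude in pixels,
# where a smaller y coordinate draws the bird higher on the screen.
def seagull_y(dow):
    return scale(dow, 7000, 11000, 300, 20)
```

Each visual element in a user's canvas would pair one such mapping with a data feed (stock index, inbox count, temperature) and redraw when the value changes.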
"Interview With the KDE and Gnome UI/Usability Developers"
OSNews.com (03/10/03); Loli-Queru, Eugenia
- It is hoped that the Unix desktop will
be revolutionized when the Gnome and KDE user interfaces (UIs) become interoperable;
the Gnome project's Havoc Pennington discussed usability issues with Waldo
Bastian and Aaron J. Seigo of the KDE project. Seigo commented that KDE users
are highly desirous of configurability, and Bastian explained that improved
UI systems are supposed to retain all functions while making the most of
usability; Pennington said that a balance must be achieved between UI simplicity,
stability, and the rate of development. Seigo says an important trend that
is likely to continue is desktop "componentization," as demonstrated by the
Konqueror interface, which offers context-based functionality. "Already we
have reached the point where people, even those who are quite aware of interface
issues and design, stop distinguishing between individual applications and
instead simply experience the desktop as a coherent entity doing a lot of
great and cool things," he declared. All three participants understood the
value of having applications comply with some form of certification to support
consistency, although the general feeling was that it should be unofficial
rather than official. Two distinct approaches to improving file storage and
organization have recently come into vogue: The setup of a general architecture
and the provision of filetype-specific file management programs; Seigo saw
disadvantages in both techniques, and favored a hybrid program that resides
above the filesystem and below the user application level, while Pennington
said that Gnome is currently using the filetype-specific method. Looking
out over the next five years, Bastian thought that desktop UI technology
will undergo little change and Seigo was confident that leading-edge desktop
software will be defined by KDE, while Pennington predicted that there will
be an emphasis on streamlining.
http://www.osnews.com/story.php?news_id=2997
From DSS News, April 13, 2003
Ask Dan: Can using a DSS have unintended negative consequences?
- YES! Researchers and managers often focus too much on the anticipated
positive consequences of using a specific Decision Support System.
Nadeeka Silva emailed me recently (3/14/2003) asking for some help in
answering some provocative questions about unintended negative
consequences of DSS. I'm assuming Nadeeka is taking a DSS class so I'm
broadening the assignment questions and responding publicly in this Ask
Dan! column. My answer is a starting point that leaves many
opportunities for Nadeeka and Ask Dan! readers to extend the analysis.
- The first "assignment" question states "The best DSS cannot overcome a
faulty decision maker. It cannot force a decision maker to make a
request of it, pay attention to its responses, or properly factor its
responses into the decision that is made. Do you agree with this
statement? Justify your answer."
- I generally agree with the statement. It points out that a DSS cannot
completely overcome the ability and attitude limitations of the person
who is using it. We are all "faulty decision makers". Each of us makes
some bad, wrong or incorrect decisions even when supported by a DSS. A
task specific Decision Support System is intended to increase the
quality and effectiveness of a specific decision or decisions. A
well-designed DSS has the potential to assist those decision makers who
can and do use it. A DSS can improve a decision maker's "batting
average". In some situations a decision maker learns from using a DSS
about criteria, facts or process issues that need to be considered in a
specific decision situation. DSS encourage and promote "rationality" in
decision making. The goals of a DSS are not however always achieved! So
what is the correct conclusion? Companies and individuals that don't
recognize the limitations of DSS and of decision makers will be
surprised when a DSS doesn't improve decision making for some users.
Even though it is an unintended negative consequence, some decision
makers may actually be hindered by a DSS and a poorly designed DSS can
negatively impact even the "best" decision maker.
- Another "assignment question" also raises the issue of unintended
consequences of using a DSS. The question states "There is a DSS danger:
the danger of overdependence on a DSS, of blindly following the DSS, or
of interacting with it in an entirely mechanical and unimaginative
fashion. Do you agree with this statement? Justify your answer."
- Many people believe the above statement is true and it seems reasonable
that these "dangers" can and do happen. I am not however aware of
empirical research that confirms these "dangers". We don't know how
likely "overdependence" is, or if some users will "blindly follow" or
mechanically interact with some or all types of DSS. I'm assuming
"overdependence" means a person can not make a specific decision without
using a DSS. For many DSS, the intent is that users will become
"dependent" on using it. If decisions are improved, then the goal of
training, reinforcements and rewards should be to promote regular and
habitual use of the DSS. Managers and DSS users who recognize the
"dangers" are sensitized to them and that makes the "dangers" less
likely to occur or less likely to cause harm. DSS are intended to
support not replace decision makers so users need to consciously
interact with a DSS to use it effectively. The expectation needs to be
created that the human decision maker is the ultimate authority and that
the user can "over rule" or choose to ignore analyses and
recommendations of the DSS. The "dangers" raised in this question
warrant our attention and certainly they should be studied, but they do
not justify avoiding the use of a DSS or rejecting a proposed DSS
project.
From ACM News, April 7, 2003
"Semantic Applications, or Revenge of the Librarians"
Darwin (03/03); Moschella, David
- The
supplier-centric IT industry will become customer-centric when Web services
shift to semantic applications that enable interoperability between computer
systems, thus systematizing data searches and transaction processing, writes
David Moschella, author of "Customer-Driven IT: How Users are Shaping Technology
Industry Growth." Web pioneer and World Wide Web Consortium (W3C) director
Tim Berners-Lee has long championed the concept of a semantic Web that can
interpret the context of content with greater precision. Semantic applications
are often lumped into one of two categories: Web content management and intelligent
applications. Elements common to both categories include metadata, taxonomies,
ontologies, and directly addressable, self-contained information objects,
and initiatives are underway to standardize these various terminologies in
nearly every major industry. A good portion of these projects involve business
exchanges and cross-industry entities. For example, the Defense Advanced
Research Projects Agency's DARPA Agent Markup Language (DAML) initiative
is designed to extend HTML and XML to accommodate ontologies. Improved business
interoperability can only be leveraged if customers can settle on and adopt
common ontologies and taxonomies, and then deploy them consistently and cautiously,
both for structured and unstructured data. New business skills will be needed
to take advantage of semantic systems, and the propagation of such skills
must proceed on an industry-by-industry basis.
http://www.darwinmag.com/read/030103/semantic.html
From ACM News, March 31, 2003
"HP Thinks in 3D for Web Browsing"
InternetNews.com (03/25/03); Singer, Michael
- Hewlett-Packard
has introduced a new tool for creating three-dimensional views of online
stores, similar to Doom and other video games. Called the VEDA (virtual environment
design automation) project, the application is used as a visualization database,
and allows users to stroll through virtual rooms and corridors using their
mouse. Items are arranged according to the user's chosen categories. The
application was developed by HP Labs researchers Amir Said and Nelson Chang,
who say the tool provides more interaction than a normal Web page, is visually
appealing, and provides a little bit of the feel of a brick-and-mortar store.
The back-end VEDA application is based on OpenGL and XML technologies, and
supports robust audio, video, and 3D models that can be modified, says Chang.
The researchers admit that the software's biggest hurdle is Internet bandwidth
limitations. Said says lower speeds degrade the graphics and high resolution
photographs. But the tool could run on HDTV as well as the Web because of
its advanced technologies, which are more developed than 2ce's CubicEye
and other 3D Web browsers. HP Labs is now planning to test the system with
such retailers as Wal-Mart.
http://siliconvalley.internet.com/news/article.php/2169931
From Knowledge @ Emory, March 26, 2003
Is The Time Right For Web Services?
- Microsoft’s
Bill Gates has called web services “the holy grail,” but some industry insiders
claim there’s more hype than substance in a system that would allow computers
to talk to each other regardless of software or system. Who’s right? Experts
at Emory University’s Goizueta Business School view the current status of
the system and explain why they are cautiously optimistic.
http://knowledge.emory.edu/articles.cfm?catid=7&articleid=658
Why the Large Players Stand to Win at Web Services
- In the brave new world of web services,
emerging technologies and new applications are reshaping the way that companies
do business. Not surprisingly, the large companies in the IT realm appear
to be dominating the web services technological wave. Faculty at Emory University’s
Goizueta Business School and industry experts discuss the web services opportunity
and the strategic plans of some of the largest players.
http://knowledge.emory.edu/articles.cfm?catid=14&articleid=657
From DSS News, March 30, 2003
Ask Dan: What are the characteristics of a Decision Support System?
- Many faculty who teach DSS courses intend that their students will
master the skill of determining if a specific information system is a
"DSS". Gaining this skill is complicated because the concept "Decision
Support System" is used in various ways by authors, researchers and
practitioners. On March 7, 2003, Chan Chun Kit emailed asking "what are
the characteristics of a Decision Support System?" Also, on March 13,
Juliet Stephen emailed asking about the characteristics of DSS. She
noted "I'm really interested in DSS". This Ask Dan! tackles this
difficult and potentially controversial question.
- DSSResources.COM, my book (Power, 2002) and this column advocate the
"big tent" or umbrella approach to defining DSS. Following the lead of
Alter (1980) and Sprague and Carlson (1982), I have concluded that
"Decision Support Systems (DSS) are a specific class of computerized
information system that support decision-making activities. DSS are
interactive computer-based systems and subsystems intended to help
decision makers use communications technologies, data, documents,
knowledge and/or models to identify and solve problems and make
decisions. Five more specific DSS types include: Communications-driven
DSS, Data-driven DSS, Document-driven DSS, Knowledge-driven DSS, and
Model-driven DSS."
- Turban and Aronson (1995) and others try to narrow the "population of
systems" called DSS. Turban and Aronson define DSS as "an interactive,
flexible, and adaptable CBIS specially developed for supporting the
solution of a nonstructured management problem for improved decision
making (p. 77)". A few paragraphs later, they broaden the definition and
define 13 characteristics and capabilities of DSS. Their first
characteristic is "DSS provide support for decision makers mainly in
semistructured and unstructured situations by bringing together human
judgment and computerized information. Such problems cannot be solved
(or cannot be solved conveniently) by other computerized systems or by
standard quantitative methods or tools". Their list is a useful starting
point.
Alter (1980) identified three major characteristics of DSS:
- 1. DSS are designed specifically to facilitate decision processes,
- 2. DSS should support rather than automate decision making, and
- 3. DSS should be able to respond quickly to the changing needs of
decision makers.
- Clyde Holsapple and Andrew Whinston (1996) identified four
characteristics one should expect to observe in a DSS (see pages
144-145). Their list is very general and provides an even broader
perspective on the DSS concept. Holsapple and Whinston specify that a
DSS must have a body of knowledge, a record-keeping capability that can
present knowledge on an ad hoc basis in various customized ways as well
as in standardized reports, a capability for selecting a desired subset
of stored knowledge for either presentation or for deriving new
knowledge, and must be designed to interact directly with a decision
maker in such a way that the user has a flexible choice and sequence of
knowledge-management activities.
- Turban and Aronson note their list is an ideal set. They state "Because
there is no consensus on exactly what a DSS is, there is obviously no
agreement on standard characteristics and capabilities of DSS". This
conceptual confusion and lack of consensus on a well defined DSS concept
originally prompted me in 1995 to try to more systematically define and
categorize DSS. It seems impossible to conduct meaningful scientific
research about systems that can't be consistently identified and
categorized. A more consistent definition of DSS and set of
"characteristics" should also improve communications about these
important computerized systems with students and DSS practioners.
- So what is a characteristic of a DSS? In this context, it is an
observable feature, peculiarity, property, or attribute of ANY type of
Decision Support System that differentiates a DSS from another type of
computerized system. Why do we develop such lists of characteristics
and attributes? In general, such lists can identify an object as part
of a class or group of similar objects; they help us with recognition
and identification!
- The following is my list of the characteristics of a DSS; please
comment!
CHARACTERISTICS OF A DECISION SUPPORT SYSTEM
- 1. DSS facilitate and support specific decision-making activities and/or
decision processes.
- 2. DSS are computer-based systems designed for interactive use by
decision makers or staff users who control the sequence of interaction
and the operations performed.
- 3. DSS can support decision makers at any level in an organization. They
are NOT intended to replace decision makers.
- 4. DSS are intended for repeated use. A specific DSS may be used
routinely or used as needed for ad hoc decision support tasks.
- 5. DSS provide specific capabilities that support one or more tasks
related to decision-making, including: intelligence and data analysis;
identification and design of alternatives; choice among alternatives;
and decision implementation.
- 6. DSS may be independent systems that collect or replicate data from
other information systems OR subsystems of a larger, more integrated
information system.
- 7. DSS are intended to improve the accuracy, timeliness, quality and
overall effectiveness of a specific decision or a set of related
decisions.
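Characteristics 2 and 5 can be made concrete with a small sketch: an interactive "what-if" profit model that ranks decision alternatives for a user-controlled choice step. This is a minimal, hypothetical illustration; the model, parameter names, and numbers are all invented and stand in for whatever a specific DSS would actually contain.

```python
# Hypothetical model-driven DSS sketch: the decision maker supplies
# alternatives, probes a simple profit model, and gets choice support
# via a ranking. All names and figures are invented for illustration.

def project_profit(units_sold, unit_price, unit_cost, fixed_cost):
    """Simple profit model the decision maker can probe interactively."""
    return units_sold * (unit_price - unit_cost) - fixed_cost

def compare_alternatives(alternatives):
    """Rank decision alternatives by projected profit (choice support)."""
    scored = [(name, project_profit(**params)) for name, params in alternatives]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

alternatives = [
    ("status quo",   dict(units_sold=1000, unit_price=50, unit_cost=30, fixed_cost=12000)),
    ("cut price",    dict(units_sold=1400, unit_price=45, unit_cost=30, fixed_cost=12000)),
    ("premium line", dict(units_sold=700,  unit_price=70, unit_cost=40, fixed_cost=15000)),
]

for name, profit in compare_alternatives(alternatives):
    print(f"{name}: {profit}")
```

The same model could be built in a spreadsheet; the point is that the user, not the system, controls which alternatives to define and compare.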
References
- Alter, S. Decision Support Systems: Current Practice and Continuing
Challenges. Reading, Mass. : Addison-Wesley, Inc., 1980.
- Holsapple, C. W. and A. B. Whinston. Decision Support Systems: A
Knowledge Based Approach. Minneapolis, MN.: West Publishing, Inc., 1996.
- Power, D. J., Decision Support Systems: Concepts and Resources for
Managers, Westport, CT: Greenwood/Quorum Books, 2002.
- Sprague, R. H. and E. D. Carlson. Building Effective Decision Support
Systems. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1982.
- Turban, E. and J. E. Aronson. Decision Support and Intelligent Systems
(5th edition). Upper Saddle River, N.J.: Prentice-Hall, Inc., 1995.
From ACM News, March 28, 2003
"Your Brake Pads May Have Something to Say (By E-Mail)"
New York Times (03/27/03) P. F6; Austen, Ian
- Car owners, fleet operators, automotive
manufacturers, and dealers could be alerted to potential vehicle problems
early thanks to in-vehicle computers that collect data from sensor arrays
and transmit them to a central database. Such systems normally jettison such
data except in the event of collision, but researchers postulate that the
information could be reused to support vehicle maintenance. Computers in
racecars routinely track vital readings of vehicle systems and relay them
to pit crews so that problems can be detected and repaired quickly; in
theory, such systems could be retooled for consumer vehicles and equipped
to notify drivers of incipient problems by cell phone or email.
DaimlerChrysler Research and Technology North America CEO Akhar Jameel says
the real challenge is seamless coordination of the various computers in a
car, which are often hobbled by different software and communication barriers.
To solve this problem, the Society of Automotive Engineers is striving to
standardize car computer systems. Another challenge is determining what kind
of operational readings are important and how often such data should be recorded:
For instance, Jameel's research team thinks the activation of antilock brake
systems (ABS) should be tracked, since brake pads could be worn out faster
as a result. If a system records unusually high ABS usage, it could suggest
to the owner that the vehicle be looked over by a mechanic, or call a dealership's
service department to set up an appointment and order replacement parts.
Sally Greenberg of the Consumers Union is concerned that the data gathered
by such telematics systems could be misused and violate the owner's privacy;
to prevent this, she argues that consumers should have final say over what
kind of information should be transmitted. http://www.nytimes.com/2003/03/27/technology/circuits/27next.html
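The ABS example above amounts to a simple threshold rule over logged sensor data. The sketch below illustrates that idea only; the window size, threshold, and class name are invented, and a real telematics system would calibrate such rules per vehicle model.

```python
# Hypothetical maintenance rule: if logged ABS activations over a
# rolling window of recent trips exceed a threshold, flag the vehicle
# for a brake inspection. Names and numbers are invented.

from collections import deque

class AbsMonitor:
    def __init__(self, window=100, threshold=10):
        self.events = deque(maxlen=window)   # activations for the last `window` trips
        self.threshold = threshold           # total activations that trigger an alert

    def record_trip(self, abs_activations):
        self.events.append(abs_activations)

    def needs_inspection(self):
        return sum(self.events) > self.threshold

monitor = AbsMonitor(window=20, threshold=5)
for trip in [0, 1, 0, 3, 2]:          # simulated trip log
    monitor.record_trip(trip)
print(monitor.needs_inspection())     # 6 activations > 5, so True
```

On an alert, the system could notify the owner or, as the article suggests, contact a dealership's service department automatically.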
"Making Machines See"
Photonics Spectra (03/03) Vol. 37, No. 3, P. 80; Hogan, Hank
- The
long-term target of machine vision vendors and developers is to enable machines
to see in three dimensions, while near-term goals include seeing around corners
using mirrors and prisms (a key component of semiconductor part inspection)
and seeing in color through pixel integration and other methods. As machine
vision systems shrink and become less complex, they are generally transitioning
to a CMOS-based architecture and boasting larger sensors due to increasing
pixel density, a trend that not all vendors have decided to exploit. American
vendors such as Electro Scientific Industries and European companies such
as Machine Vision Systems Consultancy are embracing CMOS technology. Machine
Vision Systems founder Don Braggins says interest in CMOS is driven by the
technology's potential research applications and the abundance of small-scale
manufacturing. On the other hand, Redlake MASD is one company that is sticking
with CCD-based sensors, although higher bandwidth and storage demands will
be a problem. Machine vision systems, despite their advancements, still have
drawbacks, such as motion-induced blurring, which could be resolved in a
number of ways, including sensor fusion, the addition of gyroscopes, or embedding
fiducial marks in the environment captured by the camera. Investing machine
vision systems with the ability to process 3D information is the goal of
many commercial and academic initiatives, which are researching such possibilities
as stereoscopic vision and laser probes. However, one of the most daunting
challenges will be enabling machine vision systems to mimic a person's ability
to extract 3D data when one eye is closed.
From ACM News, January 15, 2003
"Business Apps Get Bad Marks in Usability"
CNet (01/14/03); Gilbert, Alorie
- Difficult-to-use business applications impede many software projects and cost companies
millions of dollars, according to Forrester Research. Forrester says many
enterprise resource planning (ERP) applications are too difficult for ordinary
users, and that many straightforward tasks are hard to accomplish using the
programs. ERP helps automate everyday work duties, such as aspects of human
resource management, order taking, and accounting. The study included ERP
applications from 11 leading vendors. The Forrester analysts involved did
not receive any special training in the programs before testing, but attempted
only what they considered normal, upfront tasks. Those included downloading
updates and software patches, changing security profiles, and adjusting the
program to reflect changes in company organization. Analyst Laurie Orlov
said ERP customers should demand better usability from their software vendors,
especially in the current constrained budget environment. Large firms often
spend millions of dollars implementing these systems, and anywhere from 10
percent to 15 percent of that amount usually goes toward software training
for personnel. Poor usability hinders the effectiveness of ERP projects because
workers spend more time completing routine tasks, require more training, and in some
cases abandon the electronic process altogether. http://news.com.com/2100-1017-980648.html
"Games of Infinite Possibilities"
Raleigh News & Observer Online (01/15/03); Cox, Jonathan B.
- North Carolina State University assistant
professor of computer science R. Michael Young is researching artificial
intelligence that allows evolving storylines in computer games. Young says
that instead of following programmers' expectations of how a game player
reacts, these new games study users and change plots to fit their style.
Still, he says there must be some tension between what the player expects
and what makes for a better story, so not everything is entirely predictable.
Such computer games would build upon a model of storytelling Young is refining.
His group recently added artificial intelligence capabilities to the first-person
shooter game Unreal Tournament by linking it to servers running their artificial
intelligence programs. The technology has applications beyond gaming as well,
since it would enable interactive learning methods. Young says that academic
gaming development sometimes crosses the commercial sector, but that the
pressures to meet a product deadline do not accommodate academia's need to
study larger problems. Young became interested in his current area of study
when considering how to have computers solve complex problems, then explain
the solution to humans with understandable instructions. He says the mechanical
aspects of robotics are advancing quickly so that humanoid machines can navigate
themselves and ascend stairs. He expects that artificial intelligence will
slowly pervade people's everyday lives, beginning with traditional computer
devices that can communicate ideas that were not preprogrammed.
From ACM News, January 10, 2003
"Computer Linguists Mix Language, Science"
Dallas Morning News Online (01/05/03); Rivera, Patricia V.
- The
job of computer linguists involves teaching computers to comprehend spoken
language, speak, and translate text, according to Dr. Gary F. Simons of SIL
International, formerly the Summer Institute of Linguistics. The Internet
has spurred interest in text and speech processing, which has created opportunities
for professionals with computer linguistics expertise. Demand for computer
linguists is especially high in companies that publish multiple-language
catalogs and are seeking ways to reduce spending while still maintaining
quality. Mary Pope of inlingua Dallas explains that more and more customers
want their material to be available in multiple formats and software packages, not
just multiple languages. In addition to text-to-speech, speech recognition,
and machine translation programs, computer linguists also focus on developing
applications that enable computers to answer questions in natural language,
conduct Web-based information searches, and help people learn foreign languages.
Another potentially lucrative application Simons mentions is a program that
automatically screens email. The holy grail for many computer linguists is
to teach computers to understand natural language, but scientists are worried
that such a development could lead to the replacement of telephone operators,
airline reservation agents, and other service professionals. The ideal computer
linguist would be familiar with linguistic theory and artificial intelligence,
proficient in a procedural programming language, and skilled in the areas
of natural language processing or neural networks.
From DSS News, January 5, 2003
Ask Dan:
Where can I find or download a DSS demo?
What vendors have decision
support demonstration software?
- Requests for information about DSS demos are a frequent query to Ask Dan.
The answer changes, however, depending on what vendors are doing at
their web sites and with their products. Recently, Peter Keenan posted a
request on ISWorld for information on DSS demos, Hector Diaz wrote
"Where can I download a DSS demo?" and Bassem Farouk wrote "where can I
download a free trial for GDSS software?" So I'll try to provide a
starting point in this response. Some vendors will be missed in this
response and they should contact me with details of their free demos
and/or downloads. Readers may also want to check the DSS Vendor Pages at
http://dssresources.com/vendorlist.
- Buyer Beware! If you are considering downloading demonstration software,
keep in mind that the software may use significant resources on your
computer. Also, the size of some demos often results in slow downloads.
Demonstration software downloads have limitations and restrictions, so
read the materials at the web site providing the software carefully. You
will usually need to complete an on-line request form and you may need
to speak with a sales representative.
- A biased sampling. It is hard to find good demonstration downloads
related to building data-driven DSS. A number of vendors do however
showcase web-based demonstrations of DSS built using their software. The
following list includes downloads for some development tasks like ETL
and some web-based demonstrations of data-driven DSS. Downloads of
single user development tools for building knowledge-driven and
model-driven DSS are much easier to find. Finding demos for
communications-driven and Group DSS is also a problem because, by design,
such software is a multi-user, server-based application. The only GDSS
demo download that I've found is at Meetingworks.com. I haven't tried
downloading the 8-participant license of Meetingworks for Windows, but
I'm sure the company hopes you'll try it and then upgrade to
Meetingworks Connect and Meetingworks InternetEdition. Also, please note
I have not tried most of the demonstration downloads listed below ... so
send me feedback if you try any of them ... In general, I think
Microsoft Excel is the best introductory development environment for
learning about building both data-driven and model-driven DSS.
Check the following vendors for decision support related demos:
- 1. Actuate Software Corporation - Actuate Reporting System and
e.Spreadsheet Designer Demo,
http://www.actuate.com/productdemo/index.asp
- 2. Advanced Software Applications - ModelMAX Plus is a modeling,
scoring, and profiling analytical tool,
http://www.asacorp.com/HTML/Products/products.htm
- 3. arcplan, Inc. - dynaSight, there are several web demos available
showing the utilization of dynaSight in different computing
environments,
http://www.dynasight.com/demos_live.html
- 4. ArcView MapObjects 2.1, download a 90-day evaluation version at
http://www.esri.com/company/free.html.
- 5. Attar Software - XpertRule Knowledge Builder, http://www.attar.com/samples/kb_setup.htm, and XpertRule Miner, http://www.attar.com/samples/xm_setup.htm
- 6. Bayesia S.A. - BayesiaLab,
http://www.bayesia.com/GB/produits/bLab/BlabPresentation.php
- 7. Business Objects - WebIntelligence, a query, reporting, and online
analytical processing tool, an online demo, http://www.businessobjects.com/products/webi/
- 8. Brio.Enterprise and Brio.Report self-running demonstrations at
http://www.brio.com/products/demo/index.html
- 9. Business Forecasting Systems, Inc. - Forecast Pro,
http://www.forecastpro.com/downloads_&_demos.htm
- 10. Cygron Pte. Ltd. - DataScope, a data mining and decision support
tool, http://www.cygron.com/TrialDownload.html
- 11. Databeacon Inc. - Databeacon 5.3, online demo and download an
evaluation version, http://www.databeacon.com/
- 12. Decisioneering - Crystal Ball 2000 simulation software demo, an
Excel add-in, http://www.decisioneering.com/dss/
- 13. Decision Systems, Inc. - decisionport,
http://www.decisionsystems.com/dsiport.htm
- 14. DIMENSION 5 - miner3D.excel,
http://www.miner3d.com/m3Dxl/download.html
- 15. Enterprise Solutions, Inc. - MeetingWorks at http://www.Meetingworks.com
- 16. Entopia, Inc. - Quantum Suite,
http://www.entopia.com/products_pg3.3.htm
- 17. Expert Choice at http://www.expertchoice.com (free EC2000 trial version).
- 18. Insightful Corporation - Demos include Insightful Miner, a data
mining workbench, and example web applications,
http://www.insightful.com/products/demos.asp
- 19. Infoharvest - Criterium Decision Plus (an AHP implementation) at
http://www.infoharvest.com (free student version of CDP 3.0).
- 20. KMtechnologies - work2gether, electronic repository for company
documents and ideas, web-based flash demo only works in Internet
Explorer, http://www.kmtechnologies.com/en/Products/flashdemo.asp
- 21. Logic Technologies - BestChoice3 at http://www.logic-gem.com/bc.htm.
- 22. Lumina - ANALYTICA 2.0 from http://www.lumina.com, try it free for 30 days.
- 23. Megaputer Intelligence, Inc - PolyAnalyst Pro, suite of data mining
algorithms, and TextAnalyst, http://www.megaputer.com/php/eval.php3
- 24. Palisade Corp. - DecisionTools Suite, @RISK, PrecisionTree, TopRank,
BestFit and RISKview, http://www.palisade.com/html/trial_versions.html
- 25. Pilot Software at http://www.pilotsw.com has a walkthrough online of its
Balanced Scorecard system.
- 26. Treeage - DATA 4.0 trial download at www.treeage.com.
From ACM News, January 3, 2003
"Getting Smart About Predictive Intelligence"
Boston Globe (12/30/02) P. C1; Kirsner, Scott
- Boston
Globe columnist Scott Kirsner expects the major technology debate of 2003
to revolve around the use of predictive intelligence, which is being employed
in the private sector for marketing purposes, but, more importantly, lies
at the heart of the Total Information Awareness system being developed by
the Defense Advanced Research Projects Agency (DARPA). The purpose of the
Total Information Awareness system is to root out terrorists and prevent
terrorist acts by focusing on suspicious online transactions, but organizations
such as the Electronic Privacy Information Center and the Electronic Frontier
Foundation argue that it is in fact little more than an unconstitutional
public surveillance system that supersedes citizens' privacy rights. There
are also objections over the choice of retired Navy Admiral and Iran-Contra
scandal figure John Poindexter to lead the Total Information Awareness project.
Meanwhile, companies that deal in predictive intelligence software and services
see a beneficial side in the business world: The technology helps them collate
profiles of customers in order to more effectively market products. Still,
Genalytics founder Doug Newell acknowledges that projects such as Total Information
Awareness should set limits on what kinds of data can be collected, how long
that data should be retained, and what should be done with it. He adds that
Poindexter's project is unworkable, because there have been so few U.S.-based
terrorist incidents on which to build reliable terrorist profiles. If the
system fails to achieve its primary goal, then it might be used by local
law enforcement to single out people who commit minor crimes such as parking
violations. http://digitalmass.boston.com/news/globe_tech/at_large/2002/1230.html
From ACM News, January 3, 2003
"Realer Than Real"
Nikkei Weekly (12/23/02) Vol. 40, No. 2061, P. 3; Naito, Minoru
- Head-mounted display (HMD) technologies
are an example of "mixed reality" systems that promise to seamlessly integrate
computer-generated data and imagery with the real world, and Japan is leading
the charge in this area: Japanese hospitals have made the technology an essential
tool in surgical procedures, while other uses for HMDs are being found in
the automotive design, entertainment, and disaster preparation sectors. Toppan
Printing, in conjunction with Takashi Kawai of Waseda University, is working
on a mixed reality system that combines HMDs with the Global Positioning
System to offer 3D graphical overlays that show visitors to museums and archeological
digs what ruins must have looked like in ancient times. Meanwhile, Canon
has partnered with SGI Japan to develop a mixed reality product for automotive
manufacturers and parts suppliers that allows potential customers to see
detailed, 3D images of cars as well as experience simulated travel. Communications
Research Laboratory has built a facility designed to simulate natural disasters--earthquakes,
volcanic eruptions, etc.--so that local and national government officials
can better prepare for such catastrophes and implement measures that will
significantly reduce the loss of life and property. Experts such as Shunichi
Kita of the Nomura Research Institute forecast that over the next 10 years
the HMD market will surge in much the same way the cell phone market did.
Mainstream adoption of HMDs and mixed reality systems will depend on researchers
developing lighter, more comfortable products. Optical companies are supplying
hardware for such efforts. Minolta, for example, has developed a 25-gram
holographic HMD that can be attached to the frame of the user's glasses.
From ACM News, January 6, 2003
"Interface Gets the Point"
Technology Research News (01/08/03); Patch, Kimberly
- Scientists
at Pennsylvania State University and Advanced Interface Technologies are
developing a computer interface that can recognize the relationship between
prosody and gestures in an attempt to make human-computer interaction more
natural. Penn State researcher and Advanced Interface Technologies President
Rajeev Sharma says the project is a formidable challenge, and notes that
"The same gesture...can exhibit different meanings when associated with a
different spoken context; at the same time, a number of gesture forms can
be used to express the same meaning." He says that computers can recognize
isolated gestures with as much as 95 percent accuracy, and adds that the
precision of gesture recognition systems was raised from 72 percent to approximately
84 percent when the system took prosody into consideration. It is no mean
feat to detect correspondence between visual and audio signals, while speech's
phonological information and intonational characteristics increase the difficulty,
according to Sharma. He notes that a more natural human-computer interface
such as the one his team is working on could be very useful for applications
such as video games, crisis management, and surgery. An important step in
the system's development was the inclusion of speech pitch and hand velocity
into the Hidden Markov Model, which boosted the scientists' understanding
of the connection between prosody and gestures. The researchers are currently
employing their technique in a geographic information system prototype that
incorporates ceiling-attached microphones, cameras that track user gestures,
and a large screen display. Sharma says testing how well the system can interact
with people is the next step.
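The accuracy gain described above came from fusing a prosodic cue (speech pitch) with a visual cue (hand velocity). The toy sketch below shows the fusion idea only: the features, values, and nearest-centroid classifier are invented for illustration, whereas the researchers used Hidden Markov Models.

```python
# Hypothetical feature-fusion sketch: combine a visual feature (hand
# velocity) with a prosodic feature (speech pitch) and classify the
# fused vector by its nearest class centroid. All values are invented.

def fuse(hand_velocity, speech_pitch):
    """Combine visual and prosodic cues into one feature vector."""
    return (hand_velocity, speech_pitch)

def nearest_centroid(sample, centroids):
    """Return the label of the centroid closest to the fused sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

centroids = {
    "pointing": fuse(0.9, 180.0),   # fast hand, raised pitch
    "resting":  fuse(0.1, 120.0),   # slow hand, neutral pitch
}

print(nearest_centroid(fuse(0.8, 170.0), centroids))  # prints "pointing"
```

The point of fusion is that an ambiguous gesture can be disambiguated by the accompanying speech signal, which is exactly the 72-to-84-percent improvement the article describes.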
"Feeling Blue? This Robot Knows It"
Wired News (01/01/03); Knapp, Louise
- A research team at Vanderbilt University's
Department of Mechanical Engineering is developing a robot equipped with
sensors that are used to determine people's emotions by picking up physiological
cues. The machine is designed to approach a person and offer assistance when
it discerns that the person is in distress. The scientists believe the robot
will be well-suited to perform as an assistant for military personnel under
battlefield conditions, although getting people to accept it will be a major
challenge. The robot can record a person's heartbeat with an electrocardiogram,
notice fluctuations in perspiration via a skin sensor, measure blood pressure,
identify muscular stress in the brow and jaw with an electromyography detector,
and read temperature. Algorithms are used to translate these readings into
a format that the robot can comprehend, explains Vanderbilt researcher Nilanjan
Sarkar. He adds that this data can be processed in real time. Office of Naval
Research (ONR) corporate communications officer John Petrik says that his
organization, which co-sponsors the Vanderbilt project, thinks military robot
aides could become smarter thanks to the researchers' work. However, Carnegie
Mellon University's Takeo Kanade cautions that "we are at a very primitive
stage of understanding the relation between the internal states--what is
observable--and human emotion." http://www.wired.com/news/technology/0,1282,56921,00.html
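The "translation" step Sarkar describes can be pictured as normalizing each physiological reading and combining them into one distress score the robot can act on. The sketch below is a hypothetical illustration only: the sensor ranges and weights are invented and are not the Vanderbilt team's algorithms.

```python
# Hypothetical distress-score sketch: normalize raw physiological
# readings onto 0..1 and combine them with invented weights.

def normalize(value, low, high):
    """Map a raw reading onto 0..1, clamped to the expected range."""
    return max(0.0, min(1.0, (value - low) / (high - low)))

def distress_score(readings):
    """Weighted combination of normalized physiological cues."""
    ranges = {"heart_rate": (60, 180), "skin_conductance": (1, 20),
              "muscle_tension": (0, 100)}
    weights = {"heart_rate": 0.5, "skin_conductance": 0.3,
               "muscle_tension": 0.2}
    return sum(weights[k] * normalize(readings[k], *ranges[k])
               for k in weights)

calm     = {"heart_rate": 70, "skin_conductance": 2, "muscle_tension": 10}
stressed = {"heart_rate": 150, "skin_conductance": 15, "muscle_tension": 80}

print(distress_score(calm) < distress_score(stressed))  # prints True
```

A threshold on such a score could trigger the approach-and-offer-assistance behavior the article describes, though mapping scores to genuine emotional states is the hard, open problem Kanade points out.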
From ACM News, January 13, 2003
"Immobots Take Control"
Technology Review (01/03) Vol. 105, No. 10, P. 36; Roush, Wade
- Immobile
robots, or "immobots," use model-based reasoning to obtain a clear picture
of their internal operations and the interaction of their myriad components
so that they can reconfigure themselves for optimal performance and avoidance
of unexpected difficulties. Immobot applications range from office equipment
to vehicle diagnostics to spacecraft, while even more complex systems are
on the drawing board or undergoing testing. Several Xerox printer-copiers
feature model-based scheduling programs designed to optimize moment-to-moment
paper flow and boost productivity, while IBM is developing reconfigurable,
autonomic storage networks and Web servers. Meanwhile, several efforts are
underway in Europe to outfit passenger vehicles with diagnostic immobots
that use model-based programming to detect problems that expert mechanics
may not have considered. Such projects indicate that the initial commercial
application of immobot software will likely be in the automotive sector,
according to Louise Trave-Massuyes of France's Centre National de la Recherche
Scientifique. Model-based software is seen as far more efficient than the
hand-coded heuristics software most engineers rely on; with the latter,
reliability is often sacrificed for affordability. The model-based
approach obviates the need for engineers to anticipate every possible contingency,
and leaves that deduction to the immobot software, which builds a step-by-step
plan for solving problems or fulfilling operational parameters based on its
knowledge of its inner workings. Currently being tested in Brazil is one
of the larger immobot projects, an advisor that helps manage five metropolitan
water treatment facilities by monitoring water quality and suggesting solutions
to problems; the project is highly complicated, since it requires the immobot
to model not only system components, but the physical and biological processes
that impact water quality. http://www.technologyreview.com/articles/roush1202.asp
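The core of model-based reasoning, as described above, is comparing a model's predictions with observations and flagging the component whose prediction disagrees. The following is a toy sketch of that idea; the component names, models, and readings are invented and bear no relation to any real immobot product.

```python
# Hypothetical model-based diagnosis sketch: each component has a model
# predicting its output from the inputs; components whose observed
# output deviates from the prediction are reported as faulty.

def diagnose(model, observations, tolerance=0.05):
    """Return components whose observed output deviates from the model."""
    faulty = []
    for component, predict in model.items():
        expected = predict(observations["inputs"])
        actual = observations["outputs"][component]
        if abs(expected - actual) > tolerance:
            faulty.append(component)
    return faulty

model = {
    "pump":   lambda inp: inp["valve_opening"] * 10.0,   # expected flow rate
    "heater": lambda inp: inp["power"] * 0.8,            # expected temp rise
}

observations = {
    "inputs":  {"valve_opening": 0.5, "power": 2.0},
    "outputs": {"pump": 5.0, "heater": 0.9},             # heater reads low
}

print(diagnose(model, observations))  # prints ['heater']
```

Because the diagnosis falls out of the model rather than enumerated fault rules, engineers need not anticipate every contingency in advance, which is the advantage the article attributes to the approach.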
From ACM News, March 24, 2003
"Why We Should Lose the Back Button"
Computerworld New Zealand (03/18/03); Bell, Stephen
- Zanzara owner Richard Mander counsels
vendors and user groups on user interface design. He says Web designers should
stop being dependent on the Web browser's back button and instead incorporate
more obvious navigation links and indicators on Web sites. He says the action
of the back button is too vague, since it can go to a page on the site or
to another site altogether, and that the user should have an option of where
he or she wants to go specifically. In addition, he says, systems should
inform users upfront about what their different parts are and how they can
be used instead of leaving them to find out on their own. Icons should provide
a hint as to what they give access to or what function they perform, similar
to how buttons on a device are labeled, says Mander. As it is, many Web pages
rarely define page links, application links, and static areas clearly. For
intricate processes, Mander believes users should be guided step-by-step
much like software program "wizards." And if something fails to work properly,
the user should be able to halt the job in such a way that minimally impacts
other phases of the process. http://www.pcworld.com/news/article/0,aid,109856,00.asp
"Knowledge Managing"
InfoWorld (03/17/03) Vol. 25, No. 11, P. 1; Angus, Jeff
- The economic downturn has cleared
a path for the rethinking of knowledge management (KM) and how it can be
incorporated into the enterprise. Now companies face the daunting challenge
of investing in KM and reorganizing their workflow and processes in order
to accommodate it. Past KM projects were characterized by numerous failures
attributed to companies' inability to integrate stored knowledge and human
expertise, concludes Xerox Global Services CTO Bob Bauer. "They got lost
because they got focused on trying to create transactions of data with data
when the fact is that any decision process or action based on valuable transactions
involves people," he explains. More recent KM projects are being driven by
KM's maturity as well as the entrenchment and proliferation of technologies
such as XML and better recognition technology. Open Text's Anik Ganguly believes
that more robust integration is an important component of KM, in that it
deconstructs input, which is perhaps the single biggest hindrance to KM implementations.
Forthcoming technologies that promise great advancements in KM systems' primary
functions--gathering, organizing, refining, and distributing--should spur
KM adoption. Voice-mining technology will be especially crucial for the knowledge
gathering component, since experts reckon that 75 percent of all essential
corporate knowledge is relayed verbally; improved pattern-recognition initiatives
from ClearVision, Autonomy, and others will benefit organization; and Bauer
says better pattern recognition and further XML adoption are helping the
refinement function. http://www.infoworld.com/article/03/03/14/11km_1.html?s=feature
"The Meaning of Computers and Chess"
IEEE Spectrum (03/03); Ross, Philip
- The
last three major human vs. computer chess matches ended in a draw, thus demonstrating
the continued refinement of software and human players' inability to modify
their strategies against such programs; it also signifies that either computer
intelligence is improving, or that playing chess may not necessarily be a
sign of true intelligence. It is a significant development in the field of
artificial intelligence research, which was the reason why a chess-playing
computer program was conceived in the first place. Chess master Garry Kasparov
accused IBM, the creator of his 1997 computerized opponent Deep Blue, of
cheating, arguing that only a person could have prepared to exchange pawns
the way Deep Blue did. Deep Blue designer Feng-hsiung Hsu, in his book "Behind
Deep Blue: Building the Computer that Defeated the World Chess Champion,"
counters that the software was programmed, in consultation with Grandmaster
Joel Benjamin, to consider files that were not just open, but potentially
open, and let its if-this-then-that algorithm dictate the move based on those
variables. Electrical engineer Claude Shannon, who proposed the chess-playing
algorithm over half a century ago, conceived a search function as the first
step, one that generates all possible move sequences to a certain depth,
as determined by the computer's speed and memory. The rub is that a program
with unrestricted search powers could play flawlessly with an evaluation
function that only discerns between checkmate and draw, while a program with
a perfect evaluation function would need to look no more than one move
ahead. The Israeli
chess program Kasparov battled in February, Deep Junior, was not as powerful
as Deep Blue, but it had the advantage of greater knowledge of the game,
and thus understood chess better, according to Israeli grandmaster Boris
Alterman; in addition, Deep Junior distinguished itself by being willing
to sacrifice materials in order to reach intangible goals, such as freedom
of movement or making its opponent's king more vulnerable to attack.
http://www.spectrum.ieee.org/WEBONLY/wonews/mar03/chesscom.html
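Shannon's scheme, as summarized above, generates move sequences to a fixed depth, scores the leaves with an evaluation function, and backs the scores up by minimax. The sketch below shows that backbone on an abstract game tree rather than chess; the tree, depth, and leaf values are invented for illustration.

```python
# Minimal minimax sketch of Shannon's fixed-depth search: recurse to a
# depth limit, apply the evaluation function at the leaves, and back
# scores up (max for our moves, min for the opponent's).

def minimax(node, depth, maximizing, children, evaluate):
    """Search `depth` plies ahead and return the backed-up score."""
    moves = children(node)
    if depth == 0 or not moves:
        return evaluate(node)           # leaf: apply the evaluation function
    scores = [minimax(m, depth - 1, not maximizing, children, evaluate)
              for m in moves]
    return max(scores) if maximizing else min(scores)

# Toy game tree: each interior node is a tuple of children, leaves are
# integer scores from the evaluation function's point of view.
tree = (((3, 5), (2,)), ((9,), (0, 7)))
children = lambda n: n if isinstance(n, tuple) else ()
evaluate = lambda n: n if isinstance(n, int) else 0

print(minimax(tree, depth=3, maximizing=True,
              children=children, evaluate=evaluate))  # prints 7
```

The tradeoff the article describes falls directly out of this structure: with unlimited depth the evaluation function can be trivial, while a perfect evaluation function would make deep search unnecessary.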
From ACM News, March 21, 2003
"Recent Advances in Computer Vision"
Industrial Physicist (03/03) Vol. 9, No. 1, P. 18; Piccardi, Massimo; Jan, Tony
- Computer vision technology is being
developed to usher in sophisticated, human-centered applications for human-computer
interfaces (HCIs), augmented perception, automatic media interpretation,
and video surveillance. Computer vision is incorporated into HCIs on the
premise that computers can respond more naturally to human gestures via camera;
notable achievements in this sector include a computer that makes its screen
scroll up or down by following users' eye movements, and a downloadable application
that tracks the movements of the user's nose. Cameras could also act as peripherals
in smart houses, triggering various functions--lighting, temperature control,
and so on--in response to a human presence. Augmented perception tools are
designed to enhance the normal sensory faculties of people, and one interesting
development in this field is The vOICe from Philips Research Laboratories.
The vOICe uses a camera to accompany people and produces sounds to alert them
to the position and size of objects in their path--a very useful tool for
visually-impaired users. Computer vision is also aiding security personnel
through video surveillance systems programmed to categorize objects--cars,
people, etc.--and track their trajectories in order to determine anomalous
or suspicious behavior. One example is a system designed to single out suspicious
pedestrian behavior in parking lots, which was developed at Sydney's University
of Technology; the system first subtracts an estimated "background image"
to distinguish moving objects from static objects, identifies people based
on form factor, takes samples of each person's speed every 10 seconds to
establish a behavioral pattern, and classifies that behavior with a neural
network classifier. Computer vision utilized for automatic media interpretation
helps users quickly comb through videos for specific scenes and shots: Carnegie
Mellon University's Face Detection Project, for instance, can pinpoint images
containing faces, while the MPEG-4 standard supports consistent visual quality
in compressed digital video by assigning objects in a scene varying degrees
of quality. http://www.aip.org/tip/INPHFA/vol-9/iss-1/p18.html
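The background-subtraction step described for the Sydney parking-lot system can be sketched as follows; this is an illustrative running-average version with made-up pixel values, not the system's actual code:

```python
def update_background(background, frame, alpha=0.05):
    """Blend the new frame into the background estimate (running average)."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def moving_mask(background, frame, threshold=30):
    """1 where a pixel deviates enough from the background to count as motion."""
    return [1 if abs(f - b) > threshold else 0 for b, f in zip(background, frame)]

background = [100.0, 100.0, 100.0, 100.0]  # flattened grayscale "image"
frame      = [100.0, 101.0, 180.0, 175.0]  # a bright object enters on the right

mask = moving_mask(background, frame)       # flags the moving pixels
background = update_background(background, frame)
```

A real system would then group the flagged pixels into blobs, identify people by form factor, and hand their sampled speeds to the neural-network classifier.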
Some guidelines on web design:
What Makes a Great Web Site?
From ACM News, March 19, 2003
"Flu Shots for Computers"
Economist (03/15/03) Vol. 366, No. 8135, P. 8
- Researchers
have applied computing to biology to map the human genome, but now biology
is being applied to computing to fight electronic viruses and worms. Sana
Security has borrowed the concept of the human immune system in its effort
to create software that protects computers from security breaches. Sana's
Primary Response software, based on research done at the University of New
Mexico in Albuquerque, is designed to work the way the body's natural immune
system fights off illness: by first creating a profile of "self."
Primary Response monitors programs running on computers such as remote-login,
Web, email, and database servers, looking at patterns of system access requests
to build up the profile. This method is distinct from others that rely on
built-in assumptions of what an attack will look like. The software considers
a deviation from the profile an attack, and moves to block all file access
associated with a program under attack, protecting files from being stolen,
modified, and deleted, and stopping new programs from being launched. Primary
Response also does a forensic investigation of file-access details, log files,
and open network connections to determine what happened. Besides hacker break-ins,
Sana Security founder Steven Hofmeyr says the system also alerts administrators
to malfunctions and other cases of irregular behavior. Customers who use
Sana's solution report only a few false alarms each month.
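The profile-then-flag-deviations idea, in the spirit of the University of New Mexico research on system-call sequences that Primary Response builds on, can be sketched like this; the call names and window length are assumptions for illustration:

```python
def build_profile(trace, n=3):
    """Set of all length-n call sequences observed during normal operation."""
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

def anomaly_score(profile, trace, n=3):
    """Fraction of length-n windows in `trace` absent from the profile."""
    windows = [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]
    if not windows:
        return 0.0
    return sum(1 for w in windows if w not in profile) / len(windows)

# Training: system calls a server emits during normal operation.
normal = ["open", "read", "write", "close", "open", "read", "write", "close"]
profile = build_profile(normal)

ok_score  = anomaly_score(profile, ["open", "read", "write", "close"])
bad_score = anomaly_score(profile, ["open", "exec", "socket", "send"])
```

A deviation score above some threshold would trigger the blocking and forensic steps the article describes.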
From ACM News, March 17, 2003
"Can Sensemaking Keep Us Safe?"
Technology Review (03/03) Vol. 106, No. 2, P. 42; Waldrop, M. Mitchell
- The Sept. 11 attacks created a demand
to leverage the United States' strength in analytical technology and networking
to build what the Markle Foundation's Task Force on National Security in
the Information Age calls a virtual analytic community threaded together
by "sensemaking" technologies designed to mine vast quantities of data for
signs of possible terrorist activity. The first step in building a virtual
intelligence system is to establish an online forum where local officials
can share and analyze intelligence, such as a virtual private network secured
by standard encryption; the next major component is data-sharing between
federal, state, and local agencies. The technical limitations of such a process
could be partially alleviated through distributed computing, but information
stored in antiquated databases can be difficult to access due to incompatible
formats, while the organizations that control the databases are reluctant
to disclose so-called "sensitive" information to outsiders. Oracle thinks
all data should be changed to a standard format and stored in a common data
warehouse, but IBM thinks "federated" information will solve the sensitive
data problem as well as the compatibility problem: With such a system, a
data owner can augment his database with a "wrapper" that allows outsiders
to access portions of his data while keeping sources, methods, and other
sensitive information secret. The key difficulty is extrapolating information
important to security from the gargantuan database of unstructured information,
and Stratify is one of a number of startups funded by the CIA's In-Q-Tel
firm working on a solution. Stratify's Discovery System is programmed to
automatically create a taxonomy of the received information so that it can
be organized into categories representing specific subject matter and concepts,
notes Stratify CTO Ramana Venkata. Meanwhile, U.S. intelligence agencies
are making use of i2's Analyst's Notebook visualization toolkit, which can
map out the progression of events while connecting related transactions,
people, and entities via link analysis charts that also include supporting
evidence tagged with associated data--sources, security levels, reliability,
etc.
From Edupage, March 17, 2003
SPELLING AND GRAMMAR CHECKERS ADD ERRORS
- In a study conducted at the University of Pittsburgh, computer spelling
and grammar checkers actually increased the number of errors for most students.
The study looked at the performance of two groups of students: one with relatively
high SAT verbal scores and one with relatively lower scores. The group with
lower SAT scores made an average of 12.3 mistakes without the spelling and
grammar tools turned on and 17 mistakes with the tools. The students with
higher SAT scores made an average of 5 mistakes without the tools and an
average of 16 errors with the tools. According to Dennis Galletta, a professor
of information systems at the Katz Business School, the problem is one of
behavior rather than of technology. Some students, he said, trust the software
too much. Richard Stern, a speech-recognition technology researcher at Carnegie
Mellon University, said that when computers attempt to identify proper grammar,
the computer has to make some guesses. It becomes "a percentage game," he
said. Wired News, 14 March 2003 http://www.wired.com/news/business/0,1367,58058,00.html
DSS examples and discussions
- Cutting Through the Fog of War
- Gruden's High-Tech Advantage
An example of analysis and decision making: Trust No One at the Airport.
The Big (Data) Dig: Data mining analytics for business intelligence and decision support
From DSS News, March 16, 2003 -- Vol. 4, No. 6
Ask Dan!
by Daniel J. Power
How has Moore's Law impacted, and how will it impact, computerized decision
support?
- There is a certain comfort that comes from identifying predictive
"natural" laws. They simplify and make sense of otherwise complex
phenomena. Moore's Law has provided that type of comfort to many
technologists for almost 40 years. So what is Moore's Law?
- In 1965, Gordon Moore wrote a paper for Electronics magazine in a
feature "The experts look ahead" titled "Cramming more components onto
integrated circuits". He began "The future of integrated electronics is
the future of electronics itself. The advantages of integration will
bring about a proliferation of electronics, pushing this science into
many new areas. Integrated circuits will lead to such wonders as home
computers ..."
- According to the Intel web site, Moore observed an exponential growth in
the number of transistors per integrated circuit and predicted that the
trend would continue. The popularized statement of Moore's Law is that
the number of transistors in an integrated circuit doubles every 18 to
24 months. Intel expects Moore's Law will continue at least through the
end of this decade. The "mission of Intel's technology development team
is to continue to break down barriers to Moore's Law".
- Gordon Moore helped found Fairchild Semiconductor and then Intel. His
efforts and those of his colleagues made sure integrated circuit
technology evolved and improved at the predicted rate of progress.
- The evidence of the past 35 years supports the conclusion Moore reached
in 1965. Intel introduced the 4004 microprocessor in 1971 with 2,250
components. The 8008 chip introduced in 1972 had 2,500. By 1974, the
8080 chip had 5,000 components. The groundbreaking 8086 microprocessor
of 1978 had 29,000 components. In 1982, the 286 chip had 120,000; the
386™ processor in 1985 had 275,000; by 1989 the 486™ DX processor had
1,180,000 components on a small chip. Once the million barrier was
broken, the number and density of components expanded rapidly. In 1993,
the Pentium® processor had 3,100,000 components and the Pentium II
processor in 1997 had 7,500,000. In 1999, Intel introduced the Pentium
III processor with 24,000,000 components. Approximately 18 months later,
Intel announced the Pentium 4 processor with 42,000,000 components. On
March 12, 2003, Intel introduced its Centrino™ mobile technology
integrating wireless capability.
- The two most important integrated circuit product categories are microprocessors
and memory devices. These products provide the technology that enables computerized
decision support. As the technology has become more powerful and more cost
effective, new applications have become feasible.
- Improvements in microelectronics have stimulated and enabled the
development of decision support technologies. The earliest Integrated
Circuits provided some limited decision support capabilities for Apollo
Space missions. The chips of the late 1970s made it possible to develop
spreadsheets and PC-based decision support applications. Specialized
chips in the early 1980s stimulated Artificial Intelligence research.
The 386™ and 486™ DX processor made client-server applications and GDSS
feasible. Improvements in memory size and speed in the early 1990s made
data warehousing feasible. Putting more components on microprocessors
miniaturized our computers and supported the development of innovative input
and output technologies. Suppliers of innovative microelectronics make
innovative DSS possible.
- There seems to be a 2-3 year lag in the diffusion of improvements in
microelectronics into decision support applications. Currently, the
capability of the Pentium 4 for enhanced graphics and visualization is
reflected more in video games than in DSS. The Centrino™ mobile
innovation can potentially expand the presence of decision support in
our work and personal lives.
- Moore's Law has served as a stimulus and benchmark for developments in
microelectronics and information processing. It has become a driver of
innovation and progress in the semiconductor industry. Expectations
matter! Decision support applications need to exploit the enhanced
capabilities that result from cramming more components on integrated
circuits.
- There has been a mutually beneficial relationship between innovation in
semiconductors and end-use decision support applications. The advance of
technology lets us work to implement what we can envision to create
innovative DSS. Advanced decision support will result from technology
advances, opportunistic and fortuitous circumstances, and from the
active imaginations and dedicated actions of innovators.
References
- Moore, Gordon E., "Cramming more components onto integrated circuits",
Electronics, Vol. 38, No. 8, April 19, 1965, URL
ftp://download.intel.com/research/silicon/moorespaper.pdf
- Schaller,
Bob, "The Origin, Nature, and Implications of 'MOORE'S LAW': The Benchmark
of Progress in Semiconductor Electronics", September 26, 1996, http://mason.gmu.edu/~rschalle/moorelaw.html.
From ACM News, March 14, 2003
"Social Software and the Politics of Groups"
InternetWeek (03/10/03); Shirky, Clay
- Thanks
to the advent of the Internet and social software that facilitates group
communications, large numbers of people can now converse with each other
without being inconvenienced by conventional barriers of physical location
and time. This in turn has caused new social patterns--chatrooms, Weblogs,
mailing lists, etc.--to emerge. Social software also sets up groups as entities,
giving rise to behavior that cannot be anticipated by analyzing individuals.
In defiance of earlier projections about the Internet's social impact, many
successful online communities have limited their growth or set up size boundaries;
erected non-trivial blocks to joining or becoming a member in good standing;
and are enforcing criteria that restrict individual freedoms. The tension
between the individual and the community inherent in social interactions,
whether online or offline, must be addressed in a group-supportive system
by rules that outline the relationship between individuals and the group
and set limits on certain kinds of interactions. Designers of social software
must consider a wide range of issues, including how good group experience
can be tested, how software supports group goals, and the best barriers to
group membership. In terms of advancement, single-user software is ahead of
social software because developers are more familiar with the single-user
experience than with the group experience. Another contributing factor is greater developer emphasis
on software's technical aspects instead of its social implications.
From Edupage, March 10, 2003:
- CITY SUPPLEMENTS ALARM WITH PC NOTICES:
The city of Lincoln, Nebraska, is about to introduce a new system that
allows the public to download a new emergency alert application from
the city's Web site. When government officials have urgent warnings
for the community, such as notices about weather or about national or
local security, computer users who have downloaded the application will
hear an alarm and then will see the warning in a pop-up box.
Information about the warning, as well as URLs for further information,
will be included. The system will work in conjunction with existing
alert systems for television and radio. The system also allows targeted
alerts to particular groups of users, such as school administrators in
the event of a school shooting. An official from the city said the
system will later be available for PDAs, cell phones, and beepers.
Federal Computer Week, 10 March 2003.
http://www.fcw.com/geb/articles/2003/0310/web-lincoln-03-10-03.asp
For those of you who want to have some kind of testing in your system, you could piggyback on it. The test is MAPP and is at http://www.assessment.com/MAPPMembers/Welcome.asp?accnum=06-5570-000.00.
From ACM News, February 28, 2003
"Turning the Desktop Into a Meeting Place"
New York Times (02/27/03) P. E6; Boutin, Paul
- Software
engineer Robb Beal's Spring computer interface differs from traditional desktop
interfaces by using hypertext representations of people, places, and things
instead of icons for applications and Web sites, thus simplifying frequent
user activities such as Internet communication and e-shopping. Spring, which
runs on Apple Computer's OS X operating system, replaces Mail and Microsoft
Word icons with these representations; users, for example, could ask someone
to meet them at a specific place by placing a cursor over the person's representative
icon, clicking on it, and then drawing a line to the icon signifying the
place, which triggers a pop-up menu that offers a range of options, such
as emailing the recipient an invitation, sending directions to the destination,
etc. In addition, Spring might visit a related Web site so that invitations
and scheduling can be completed. A similar setup exists for facilitating
electronic transactions, such as dragging a credit card icon to the image
of a desired item. Spring allows users to deploy multiple displays, or canvases,
with a different set of object icons, such as a canvas for friends and another
canvas for business associates. Steven Johnson, author of "Interface Culture:
How New Technology Transforms the Way We Create and Communicate," praises
Spring for its ability to transform a computer into "a bridge to people,
to things you want to buy, to data you need." The development of Spring closely
follows similar developments at Apple and Microsoft with their respective
Windows XP and iLife products. Hillel Cooperman of Microsoft's Windows User
Experience team notes that "Having metaphors and iconography people could
relate to in the real world was a great bridge for bringing nontechnologists
into the world of the PC." Still, Beal and other experts believe that the
traditional desktop interface has a lot of life left in it. http://www.nytimes.com/2003/02/27/technology/circuits/27inte.html
(Access to this site is free; however, first-time visitors must register.)
From K@W Newsletter, February 26-March 11, 2003
The Mammogram Experiment: How Emotions Can Affect High-Stakes Decision-Making
- A
breast cancer scare that turns out to be a false alarm is cause for relief,
but may also trigger delays in future mammogram screenings. In a controlled
experiment that surveyed women waiting for mammograms, Wharton marketing
professors Barbara Kahn and Mary Frances Luce found that the emotional stress
of believing they may have breast cancer causes patients to indicate they
would be likely to delay future mammograms. The findings could have broad
implications for health-care providers and patients, especially as consumers
become more responsible for their own health care decisions. Read the article
From ACM News, February 26, 2003
"Software Uses In-Road Detectors to Alleviate Traffic Jams"
Newswise (02/25/03)
- An Ohio State University engineer has
developed software that could help alleviate traffic jams faster using loop
detectors that are currently used to control traffic lights and scan traffic.
In the March issue of Transportation Research, Benjamin Coifman of Ohio State
describes how he employed these detectors to precisely measure vehicles'
travel time and identify traffic jams. His work started at the University
of California, Berkeley, in 1999, when he equipped control boxes along a
three-mile stretch of road with computer network hardware; traffic data was
collated from loop detectors every third of a mile. Coifman then wrote computer
algorithms that measure vehicle travel time, and could determine the formation
of a traffic jam within three and a half minutes of the initial traffic slowdown.
Coifman also had to account for human factors--rubbernecking,
lane changes, etc.--within the software, which can pinpoint delays caused
by accidents long before slowed traffic backs up to a detector. The Ohio
State engineer is currently working with the Ohio Department of Transportation
to improve the travel time estimates gathered by loop detectors and displayed
to motorists along highways so they can avoid traffic. Coifman adds that
road design could also be enhanced with his software, which was produced
with the support of the U.S. Department of Transportation, the Federal Highway
Administration, the California Department of Transportation, and the University
of California's Partners for Advanced Highways and Transit Program. The Texas
Transportation Institute's 2002 Urban Mobility Study estimates that the average
American urban resident annually spends 62 hours stuck in traffic, while
the average city can lose $900 million a year due to traffic jams. http://www.newswise.com/articles/2003/2/TRAFFIC.OSU.html
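The basic travel-time calculation behind such systems can be sketched as follows; this is a hypothetical simplification with invented speeds, not Coifman's actual algorithm:

```python
SEGMENT_MILES = 1 / 3  # loop-detector spacing on the instrumented stretch

def travel_times(speeds_mph):
    """Seconds to cross each segment at its reported average speed."""
    return [SEGMENT_MILES / s * 3600 for s in speeds_mph]

def jammed_segments(speeds_mph, free_flow_mph=60, factor=2.0):
    """Indices whose travel time exceeds `factor` times the free-flow time."""
    free = SEGMENT_MILES / free_flow_mph * 3600
    return [i for i, t in enumerate(travel_times(speeds_mph)) if t > factor * free]

speeds = [62, 58, 24, 11, 55]   # mph at successive detector stations
slow = jammed_segments(speeds)  # the congested segments
```

Repeating this every polling cycle is what lets a system localize a forming jam within a few minutes of the initial slowdown.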
From ACM News, February 14, 2003
"Professor Directs Two Tech Efforts"
Chicago Sun Times (02/13/03); Lundy, Dave
- Kris
Hammond, founder and director of Northwestern University's Intelligent Information
Laboratory (InfoLab) and Information Technology Development Laboratory (DevLab),
studied and developed artificial intelligence for 12 years at the University
of Chicago, and moved to Northwestern to apply his research to real-world
problems. He says the purpose of InfoLab is "to reduce the friction that
people constantly encounter when trying to find information, both online
and offline" through a combination of artificial intelligence, information
retrieval, and cognitive science. Hammond founded DevLab as a facility to
help transfer academically-developed technologies to the commercial sector
and build thriving, Chicago-based tech businesses; both labs dovetail with
Northwestern's goal of nurturing students' programming and software engineering
skills. He says "our students would like to know how to be not just programmers,
but software engineers." Projects under development that Hammond thinks hold
great potential include Watson, an information retrieval system that employs
artificial intelligence, and a program incorporated into a TiVo box that
collates closed-caption information from TV programs and presents relevant
data in a micro-site built in real time. Hammond thinks the local business
community would benefit significantly by using the university as a resource
to solve problems, while academics would gain a better knowledge of business
problems. He says that DevLab's chief purpose right now is to build value
rather than make money, and praises Northwestern for not subscribing to the
traditional license-and-leave practices of many tech transfer programs. http://www.suntimes.com/output/hitechqa/cst-fin-lundy13web.html
"Will Computers Replace Engineers?"
Discover (02/03) Vol. 24, No. 2, P. 40; Haseltine, Eric
- A
roundtable of technology experts debated how computers are encroaching on
the engineering profession by automating engineering tasks, and what this
holds for the future. When asked what he thinks is the most significant effect
computers have had, Stevens Institute of Technology professor Lawrence Bernstein
described a revolution in structural design that has led to, among other
things, earthquake-resistant buildings in Japan; he also anticipated a bioengineering
explosion soon thanks to the use of computers in analyzing proteins, RNA,
and DNA. Meanwhile, Columbia University professor Al Aho, formerly of Bell
Laboratories, said he foresees a time when machines and human beings are
interchangeable, and believes that computers with computational power and
memory equal to that of human beings will emerge within 20 to 30 years. However,
consultant Jeff Harrow did not think computers are "anywhere close" to supplanting
engineers, because they lack creativity and are chiefly concerned with carrying
out "scut work." In his opinion, the most exciting trend in computing is
the application of computers beyond the computing field, computerized surgery
being an example. Nicholas Donofrio of IBM said computers play a key role
in designing new computers, and forecasted that they will be able to learn
more from people in the future; he was very excited that computers are replacing
physical construction and testing of products via simulation, which greatly
streamlines the development process. Harrow, Aho, Donofrio, and others maintained
that there will always be a need for engineers for a number of reasons, including
the flood of new ideas and the widening of the field's scope. Harrow said,
"[I]f we ever get to the point where there's nobody who understands what...[computers
do], we're in deep trouble, because then we'll never be able to make any
additional moves forward." He also speculated that the move to biological
computers could spawn self-replicating machines.
From ACM News, February 21, 2003
"Has Your Computer Talked Back to You Lately?"
Newswise (02/20/03)
- Israeli professor Dov Dori at the Technion-Israel
Institute of Technology has created a software translation program that allows
users to make programming changes through speech and graphic diagrams. Though
Dori, who is also a research affiliate at MIT, wants to configure his OPCAT
program for all types of computer interfaces, he says the current focus is
on its industrial application. He likens OPCAT to CAD applications that eliminated
the need for draftsmen, or word processors that made typists obsolete. By
speaking to the computer, users can pull up a graphical representation of
programming options, which they can then implement without having to know
the back-end code. Conversely, users can manipulate graphic diagrams and
listen to the computer's audio response. Dori says this versatility allows
people to interact with their programs comfortably, no matter what their
learning style. Dori pioneered a concept called Object Process Methodology,
in which he says everything is either an object or a process that changes
an object. Based on this premise, OPCAT works from a model to generate computer
code automatically, based on users' instructions. Pratt & Whitney Canada
principal engineering and applications architect Mark Richer says he used
a beta OPCAT version for analyzing aerospace concepts. He says the software
automatically generated thousands of diagrams and statements that would have
been impossible to derive using traditional methods. http://www.newswise.com/articles/2003/2/TRANSLAT.IIT.html
"Old School"
CNet (02/18/03); Kanellos, Michael
- A diversity of computer applications
and technologies, including artificial intelligence, robotics, and data searching,
are using probability theory outlined by 18th-century clergyman Thomas Bayes.
Bayesian theory dictates that the probability of future events can be determined
by calculating their frequency in the past, and its credibility has been
given a major boost in the last 10 years thanks to advances in mathematical
models and computer speed, as well as laboratory experiments. The predictions
are reinforced using real-world data, and the results are altered accordingly
when the data changes. As computers become more powerful, the number of calculations
needed to predict phenomena has been reduced thanks to the introduction of
improved Bayesian models. Microsoft's upcoming Notification Platform will
enable computers and cell phones to automatically filter messages, schedule
meetings without human intervention, and organize approaches for contacting
other people using probability modeling. For instance, the platform's Coordinate
application collates data from personal calendars, cameras, and other sources
to build a profile of a person's lifestyle, which can be applied to the delivery
of information to application users. Meanwhile, researchers at the University
of Rochester employ Bayesian models to detect anomalies in a person's walk
using data from cameras fed into a PC. Eric Horvitz of Microsoft Research's
Adaptive Systems and Interaction Group explains that Bayesian theory was
given little credence in the computing world until it became clear that logical
systems could not predict all unforeseen variables. http://news.com.com/2009-1001-984695.html
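The underlying update is just Bayes' rule: revise a prior frequency in light of evidence. A small sketch with invented message-filtering numbers, echoing the Notification Platform example:

```python
def posterior(prior, likelihood, likelihood_if_not):
    """P(H | E) from P(H), P(E | H), and P(E | not H) via Bayes' rule."""
    evidence = likelihood * prior + likelihood_if_not * (1 - prior)
    return likelihood * prior / evidence

# Prior: 20% of messages are urgent. The sender "boss" appears in 60% of
# urgent messages but only 10% of non-urgent ones (all figures invented).
p_urgent = posterior(prior=0.2, likelihood=0.6, likelihood_if_not=0.1)
```

Seeing the evidence triples the estimated probability that the message is urgent, from 0.2 to 0.6, which is exactly the kind of revision-as-data-arrives behavior the article describes.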
From ACM News, February 12, 2003
"Goodbye GUI? Ambient Orb a Computer 'Mood Ring'"
Mass High Tech (02/10/03); Miller, Jeff
- Forthcoming products developed by Ambient
Devices have the potential to dramatically change computer/person interaction,
according to Ambient executives. One of the products is a large orb that
is wirelessly linked to Internet data feeds via pager frequencies, and can
glow in virtually any color thanks to a digital LED. The device can respond
to changes in the Dow Jones average, for example, glowing green when the
market is up or red when the market is down. But users can also configure
the orb to other pre-set channels--homeland security threat levels, temperature
forecasts, etc.--using a touch-tone phone. Another product Ambient will sell
features a clock-like face with one hand and a quartet of illuminated indicators;
Ambient President David Rose says that its default channel will probably
monitor the weather, with the hand tracking temperature forecasts and the
indicators displaying weather conditions. The device will be shipped with
multiple faces so consumers can use it for different channels. Ambient uses
pager frequencies to deliver data because they can penetrate deeper into
buildings than CDMA-based networks, and because most of the United States
is covered by pager networks. Rose says that many of Ambient's products
have been inspired by the work of MIT's Tangible Media Group under the direction
of Hiroshi Ishii, who has been researching alternatives to the traditional
graphical user interface (GUI). Ishii says that using your hands is just
as important as using your mind when creating something. Ishii says, "In
the current interface, the eyes are in charge and the hands are underemployed."
http://www.masshightech.com/displayarticledetail.asp?art_id=61794
"Falling Prey to Machines?"
Newswise (02/11/03)
- John Holland, recipient of the
first computer science Ph.D. in 1959, says artificial intelligence is possible,
but will take far more work on the conceptual side. Holland is now a computer
science and psychology professor at the University of Michigan and created
genetic algorithms in the 1960s, the basis of optimization models today that
find the most efficient way to manage energy, design engines, or operate
distribution systems. Michael Crichton used Holland's work as the scientific
basis of his recent novel Prey, in which nano-scale machines threaten humanity.
Holland says computers today cannot evolve human-like thinking because programmers
do not know how to define the parameters of their goal. This, he says, reflects
the growing gap between computing power, which doubles about every two years,
and software performance, which takes at least 20 years to double. Holland
also says that computer processors do not have the sophisticated network
of connections human brains have, called fanout. While each element in today's
high-end computers connects to about 10 other elements, elements in the human
brain are networked to about 10,000 other nodes. Holland says future computers
with higher scales of fanout will not be comparable to today's machines in
terms of what they are capable of. Holland says true breakthroughs in artificial
intelligence will come when computers use the same processes as humans, not
just find the same conclusions using different processes. For this reason,
a comprehensive theory is needed to guide research, but that will probably
take decades, he says. http://www.newswise.com/articles/2003/2/PREY.MIE.html
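A genetic algorithm in the lineage of Holland's work evolves a population by selection, crossover, and mutation. A minimal sketch on a toy fitness function (maximize the number of 1 bits); the population size, rates, and target are arbitrary choices for illustration:

```python
import random

def evolve(bits=12, pop_size=20, generations=60, seed=1):
    """Evolve bit strings toward all-ones via selection, crossover, mutation."""
    rng = random.Random(seed)
    fitness = sum  # a string's fitness is simply its count of 1 bits
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection: better of two random individuals
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, bits)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:             # occasional point mutation
                child[rng.randrange(bits)] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

The same loop, with a domain-specific fitness function in place of `sum`, is the shape of the optimization models the article credits to Holland's work.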
From ACM News, February 7, 2003
"What Are the Chances?"
New York Times (02/06/03) P. E1; Schiesel, Seth
- Evaluating the risk of "low-probability,
high-consequence events"--natural disasters, nuclear accidents, and spacecraft
catastrophes, for example--lies at the core of probabilistic risk assessment,
which is used by mathematicians, engineers, insurance executives, businesses,
and federal agencies thanks to the availability of conceptual and computing
tools, while recent gains in computing power have boosted users' confidence
in these methods. Probabilistic risk assessment relies on mathematics to
measure the odds of a specific outcome using what is known or estimated about
the myriad variables that might contribute to that outcome. A NASA consultant
used probabilistic risk assessment in 1995 to determine that the odds of
a catastrophic space shuttle failure were 1 in 145; similar techniques are
used to make nuclear labs safer, gauge the health risks posed by toxic-waste
sites, determine how safe and reliable cars and aircraft are, estimate insurance
rates, and weigh the odds of terrorist attacks. For example, insurance companies
employ probabilistic modeling to simulate how a hurricane might behave based
on historical data in which a dozen variables--frequency, size, intensity,
etc.--are involved. The potential storm patterns that emerge, which can range
from 5,000 to 10,000, are tested randomly on models of the properties insured
by a specific firm, a process known as Monte Carlo analysis. Using probabilistic
risk assessment in industrial situations is even more complicated, because
the variables can number in the thousands, tens of thousands, or even hundreds
of thousands; rather than referring to a historical database, an engineer,
for example, must use a computerized model to assess the physical and electromagnetic
traits of each component in the machine he is designing prior to probabilistic
analysis. The best application for industrial probabilistic models is in
the design phase rather than after the machine or product has been put into
operation. http://www.nytimes.com/2003/02/06/technology/circuits/06risk.html
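The Monte Carlo approach the article describes can be sketched in a few lines: draw the uncertain variables at random many times and count how often the outcome crosses a threshold. The distributions, the 5 percent damage scaling, and the 10 percent loss threshold below are invented for illustration, not taken from any insurer's model.

```python
import random

def storm_damage_factor(rng):
    # Hypothetical storm model: intensity and size drawn from simple
    # stand-in distributions (a real model would fit historical data).
    intensity = rng.lognormvariate(0, 0.5)   # relative wind intensity
    size = rng.uniform(0.5, 2.0)             # relative storm footprint
    return intensity * size

def monte_carlo_loss(portfolio_value, n_trials=10_000, seed=42):
    """Estimate P(storm damage exceeds 10% of the insured portfolio)."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_trials):
        damage = storm_damage_factor(rng) * 0.05 * portfolio_value
        if damage > 0.10 * portfolio_value:
            exceed += 1
    return exceed / n_trials

print(monte_carlo_loss(1_000_000))
```

Each trial is one simulated storm; the estimate sharpens as n_trials grows, which is why the gains in computing power the article mentions matter.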
"Pervasive Computing: You Are What You Compute"
HBS Working Knowledge (02/03/03); Silverthorne, Sean
- Panelists at the recent Cyberposium
2003 focused on pervasive computing, and took the opportunity to note their
respective companies and institutions' advances in that area. Stephen Intille
of MIT commented that researchers there are investigating how minuscule sensors
distributed throughout a house can biometrically monitor the health of its
residents, which could be a very useful--and cheaper--alternative to conventional
health care. It is estimated that there are 7.5 billion microcontrollers
worldwide; these sensor and controller chips are used for a wide array of
operations, such as heating, ventilation, and air conditioning. Ember, represented
on the panel by CTO Robert Poor, is developing wireless, self-healing networks
to interconnect these myriad chips. Axeda's Richard Barnwell talked about
how his company sells real-time performance tracking devices that help anticipate
equipment failures, maintain machines remotely, and show manufacturers how
their products are used by customers. When asked about how their data-gathering
devices could potentially affect personal privacy, Intille replied that MIT's
home sensors would be used only with user permission, and cited cell phones
with GPS tracking capabilities as a more worrisome concern. Barnwell said
that the use of monitoring devices would be strictly
regulated in certain spaces, such as the health care sector. One attendee
asked who will be responsible for replacing depleted batteries on such devices,
and MIT professor and panelist Sandy Pentland suggested batteries that draw
power from radio signals and other technologies as one solution.
"Chaos, Inc."
Red Herring (01/03) No. 121; Waldrop, M. Mitchell
- Agent-based computer simulations
based on complexity science are being used by companies to improve their
bottom lines. Complexity science promotes the theory that all complex systems
have common characteristics: They are massively parallel, consisting of adaptive,
quasi-independent "agents" that interact simultaneously in a decentralized
configuration. By following this theory, agent-based simulations can map
out system behavior that spontaneously stems from many low-level interactions.
The simulations are appealing to company executives because they are easier
to understand than the highly abstract and mathematical underpinnings of
conventional modeling programs, while Fred Siebel of BiosGroup notes that
they allow "what if" scenarios to be played out over a much larger canvas.
For instance, Southwest Airlines hired BiosGroup to model its freight delivery
operations in order to make them more efficient; agents that represented
freight handlers, packages, planes, and other interactive elements were developed
and put through their paces, after which the rules of the system were changed
and tested to find the most efficient behavioral pathway. By following the
strategy outlined by the simulation, Southwest was able to shave as much
as 85 percent off the freight transfer rate at the busiest airports, and
save $10 million over five years. Other sectors that are taking an interest
in agent-based simulation include the U.S. military, which is using it to
coordinate the flights of unmanned reconnaissance aircraft, and the insurance
industry, which wants to employ better risk management strategies. However,
the agent-based simulation industry primarily consists of a handful of struggling
startups, and one of the current drawbacks of their services is that the
technology may be too state-of-the-art for most businesses, according to
Assuratech CEO Terry Dunn. http://www.redherring.com/insider/2003/01/chaos012203.html
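A toy version of such an agent-based "what if" experiment (with invented numbers and rules; nothing here reflects BiosGroup's or Southwest's actual model): each handler is an agent with a local queue, and changing the low-level assignment rule changes the system-level completion time.

```python
import random

class Handler:
    """A freight-handler agent with a local queue of packages."""
    def __init__(self):
        self.queue = 0
    def step(self):
        if self.queue:          # each tick a handler moves one package
            self.queue -= 1

def simulate(assign_rule, n_handlers=5, n_packages=200, seed=1):
    """Return ticks until all packages are moved under a given rule."""
    rng = random.Random(seed)
    handlers = [Handler() for _ in range(n_handlers)]
    for _ in range(n_packages):
        assign_rule(rng, handlers).queue += 1
    ticks = 0
    while any(h.queue for h in handlers):
        for h in handlers:
            h.step()
        ticks += 1
    return ticks

random_rule = lambda rng, hs: rng.choice(hs)                   # baseline
balanced_rule = lambda rng, hs: min(hs, key=lambda h: h.queue)

print(simulate(random_rule), simulate(balanced_rule))
```

The balanced rule finishes in exactly 200/5 = 40 ticks, while random assignment always takes at least as long; system-level behavior emerging from a low-level rule change is the effect the simulations are built to expose.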
From ACM News, February 3, 2003
"Blogs Open Doors for Developers"
CNet (01/31/03); Becker, David
- Business
software developers have started to see the value of sharing information
online through Web logs (blogs), message boards, and other forms of communication
from the outset in order to build a base of potential customers, not to mention
fellow developers. Lotus founder Mitch Kapor explains that he started a blog
to tell users about a personal information manager upgrade so as to solicit
their ideas and get feedback while the project was in a very early developmental
stage. "It's part of a long-term process of building a user community," he
notes. Kapor keeps potential users up to date on the project's progress and
new ideas he comes up with. VisiCalc co-inventor Dan Bricklin is also soliciting
user feedback on the SMBmeta specification, and comments that such public
communication channels are a tremendous advance over traditional beta testing,
in which a small group of testers are chosen by developers to try out early
versions of a program. Being open to users throughout product development
is essential for makers of online games, which rely on a user community's
interest in their products, according to Sony Online Entertainment's Scott
McDaniel. However, to take full advantage of blogging and other forms of
communication, developers must be willing to sift through a lot of e-mail,
discussion group postings, and other submissions for good suggestions. They
must also be able to set clear limits on tired or unproductive discussion
threads. http://news.com.com/2100-1001-982854.html
"IBM: Pervasive Computing Is the Future"
ZDNet Australia (01/30/03); Pearce, James
- Pervasive computing devices, which
IBM describes as any non-PC computing device, will increase to 1 billion
in 2005, compared to 325 million in 2002, the firm predicts. Pervasive computers
can take the form of smart cards, cell phones, cameras, Web-enabled refrigerators,
and even smart houses, says Michael Karasick, IBM's director of embedded
development for the pervasive computing division. On U.S. university campuses,
IBM has launched the eSuds service, which allows students to use their cell
phones to reserve washing machines, make payments, and be informed when the
laundering is complete. Honda has also used pervasive computers in the 2002
Accord, allowing users to ask questions using normal language and displaying
the information in the car's navigation system. The computer also connects
the brake and airbag systems to the navigation system, so any damage
resulting from an accident can quickly be investigated and repaired. Andrew
Dutton, vice-president of IBM software group, Asia-Pacific, says "That piece
of information changes the entire structure of the automotive industry."
He says access to such information lets car companies expand their services
to include towing, autobody repair, finance, and insurance. On the other
hand, many fear such abundant information will allow people's personal information
to be gathered and monitored. Karasick says customers should try to take
control of the situation by demanding that companies give them discretion
over what data is gathered and how it is used.
From ACM News, January 31, 2003
"Intelligent Storage"
Computerworld (01/27/03) Vol. 37, No. 4, P. 28; Mearian, Lucas
- Storage devices imbued with intelligence,
also known as object-based storage devices (OSDs), allow for limitless system
scalability since they assume the low-level storage management duties previously
completed by the storage server. Because those read/write blocks are not
passed to the file server, input-output configurations are much more efficient
and the file server is no longer a bottleneck in the system. Scott A. Brandt,
assistant professor at the University of California, Santa Cruz's Storage
Systems Research Center, says storage devices can be added to OSD systems
just like hard drives are added to a PC. He notes that streamlined communications
between file servers and storage devices results in fewer errors as well.
The Storage Networking Industry Association and the International Committee
for Information Technology Standards have joined to form the T10 Technical
Committee, which is working out specifications for object-based storage.
Storage vendor EMC has already released what experts say is the first true
object-based storage array, called Centera. Besides more efficient networking
and greater scalability, OSD systems also offer better security because it
is assigned to each individual object instead of to the whole device.
From ACM News, January 29, 2003
"Software Innovator David Gelernter Says the Desktop Is Obsolete"
Application Development Trends Online (01/28/03); Vaughan, Jack
- Yale
University computer scientist and veteran developer David Gelernter says
he is now focusing on creating tools that make it easier for users to find
"stuff" on their computers and otherwise improve the end user's computer
experience. Gelernter says the mouse, icon, and windows metaphors are no
longer able to manage the flood of information on most people's PCs. He says,
"As e-mail and the Web became a big thing, it was clear that the hierarchical
file systems and tools we've inherited from the 70s would not work." To solve
this problem, his company, Mirror Worlds Technologies, has released a beta
version of Scopeware, software that runs atop normal desktop operating systems.
Scopeware, available free via download, allows users to search for standard
documents on their PC by keyword, but presents the results as a visual, time-sequenced
narrative. Gelernter says the user should determine the presentation of information,
not the machine. "I want my information management software to have the same
shape as my life, which is a series of events in time," he says. "I want
the flow to determine the shape of the picture I see on the screen." Gelernter
says that future iterations of Scopeware could allow a community of users
to share documents pertinent to them through peer-to-peer systems. Gelernter
was instrumental in devising the parallel programming techniques that allowed
for the Linda language; his work also laid the foundation for Java and distributed
memory architectures. He says it's now time to create software "for the user
as an everyday tool," not to meet the needs of code developers. http://www.adtmag.com/article.asp?id=7187
From ACM News, January 24, 2003
"Of Pawns, Knights, Bits, Bytes"
Wired News (01/23/03); Kahney, Leander
- International
chess champion Garry Kasparov will face off against a machine in a six-game
tournament beginning Jan. 26. His opponent will be Deep Junior, an aggressive
chess-playing program considered to be the best in the world, and the computer
chess champion for three years running. Deep Junior, which was developed
by Israeli programmers and a chess grandmaster, is different from the usual
computerized players because of the human way it plays, often sacrificing
pieces instead of preserving them. It also assesses the moves that have the
most potential, unlike early programs that relied on brute force searches.
Artificial intelligence expert Jonathan Schaeffer, who will act as a judge
during the tournament, believes Deep Junior evaluates chess positions with
standard weighting algorithms, such as the mobility of pieces and the safety
of the king; the former is highly rated by aggressive programs such as Deep
Junior. More sophisticated algorithms enable programs to only consider the
most promising maneuvers. Schaeffer notes that the tournament offers Kasparov
an opportunity to get some payback after his 1997 loss to IBM's Deep Blue
program. The event is also the first human/machine chess competition to be
endorsed by the World Chess Federation, a distinction that chess experts
say is a sign of respect toward computers as worthy players. http://www.wired.com/news/culture/0,1284,57345,00.html
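The "standard weighting algorithms" Schaeffer describes amount to scoring a position as a weighted sum of features such as material, mobility, and king safety. A minimal sketch with invented weights (Deep Junior's actual evaluation is not public):

```python
# Invented feature weights; an aggressive program rates mobility higher.
WEIGHTS = {"material": 1.0, "mobility": 0.1, "king_safety": 0.5}

def evaluate(features, weights=WEIGHTS):
    """Score a position; higher favors the side being evaluated."""
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

aggressive = dict(WEIGHTS, mobility=0.3)

# A pawn down but very active: an aggressive tuning likes this position
# far more, which is how a program comes to "sacrifice" material.
pos = {"material": -1.0, "mobility": 12.0, "king_safety": 0.5}
print(evaluate(pos), evaluate(pos, aggressive))
```

Search then keeps only the moves leading to the highest-scoring positions, rather than brute-forcing every line.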
"Senate Votes to Curb Project to Search for Terrorists in Databases and Internet Mail"
New York Times (01/24/03) P. A12; Clymer, Adam
- The
Senate voted unanimously on Thursday to constrain the implementation of the
Pentagon's Total Information Awareness (TIA) Program, an initiative to conduct
searches for terrorists by mining Internet mail and online financial, health,
and travel records. The legislation gives a 60-day window for the Defense
Department to furnish a report detailing the program's costs, motives, its
prospective chances for successfully thwarting terrorists, and its impact
on civil liberties and privacy; failing to do so after the deadline would
result in the suspension of TIA research and development. Meanwhile, use
of the system would be restricted to legally sanctioned military and foreign
intelligence operations, barring congressional authorization to employ the
system within the United States. The restrictions were bundled into a series
of amendments to an omnibus spending bill, and authored by Sen. Ron Wyden
(D-Ore.), who attributed their swift passage to the dismay Republican senators
felt over the project's implications for surveillance on innocent U.S. citizens.
Included in his amendment was a statement that Congress should be consulted
in matters whereby TIA programs could be used to develop technologies to
monitor Americans. "I hope that today's action demonstrated Congress' willingness
to perform oversight of the executive branch and challenge attempts to undermine
constitutional liberties," declared People for the American Way leader Ralph
Neas following the vote. Sens. Charles E. Grassley (R-Iowa) and Dianne Feinstein
(D-Calif.), both sponsors of Wyden's bill, agreed that the legislation ensures
that the TIA program will balance civil liberties with efforts to protect
Americans from terrorism. http://www.nytimes.com/2003/01/24/politics/24PRIV.html
(Access to this site is free; however, first-time visitors must register.)
From ACM News, January 24, 2003
"THE Key to User-Friendly Computers?"
Business Week Online (01/22/03); Salkever, Alex
- Jef Raskin, who co-designed Apple's
trend-setting graphical user interface (GUI), is also one of its most outspoken
critics. In an effort to repair what he terms "fundamental flaws" that are
the result of "incompatibilities between the designs of both GUIs and command-line
interfaces and the way our brains are wired," Raskin and a team of volunteers
are developing "The Humane Environment" (THE), a command architecture that
integrates the GUI's advantages with more flexible command-line systems.
BusinessWeek's Alex Salkever, who tried out a THE-based freeware text editor
that runs on the Mac Classic operating system, notes that the tool's biggest
plus is visibility, in which the user sees information only when necessary.
A flashing blue block represents the cursor, while a single letter or text
command appears within the block; the advantages include being able to see
tabs using the mouse rather than switching back and forth between viewing
modes, and being able to realign sentences and eliminate shortened lines
easier. Typed commands flash in blue text on the screen background, which
enables the user to quickly spot and undo any command executed by an accidentally
pressed key. Navigating a page using THE is done with a methodology Raskin
calls LEAP. LEAP is enabled by punching the shift key and then hitting and
releasing the space key, which causes a "Command" prompt to appear under
the text; leaping from one part of the page to another is achieved by hitting
the ">" key, then typing in a set of characters. Salkever finds that using
a mouse or scroll arrow is still more intuitive than LEAP, while overall
he prefers the traditional GUI over THE. Raskin acknowledges that a learning
curve is necessary when users transition between computer interfaces. http://www.businessweek.com/technology/content/jan2003/tc20030122_7027.htm
"Interfaces of the Future"
Technology & Business Magazine (01/13/03); Withers, Stephen
- Futuristic computer interfaces
such as those envisioned by science fiction writers--machines that can read
a user's gestures or facial expressions, for example--are still on the drawing
board or in the lab, and there has not been a sudden appearance of a revolutionary
mass-market technology since the introduction of the graphical user interface
in the mid 1980s. However, in Australia, some sophisticated interfaces have
been making a gradual penetration into niche IT markets, virtual reality
(VR) being one of them. Alan Ryner of SGI reports that immersive VR has become
"core infrastructure" in some sectors, including the military, manufacturing,
and mining, oil, and gas exploration. VR enables military personnel to deal
with vast amounts of data in command and control systems by representing
it visually, while the automotive industry, which started using VR to simulate
car crashes, has moved on to more advanced applications such as styling and
design. Ryner also reports that VR-based design can speed up time to market
and make Australian companies more competitive by allowing far-flung teams
to collaborate on the same data set. Oil and gas companies can use VR simulations
derived from seismic data to work out the best areas to drill. Ryner lists
"hazard perception and situation awareness" as a developing market for VR,
in which people can be trained and retrained using an interactive artificial
environment that can model people's behavior. VR can be especially useful
in analytical situations as a way to better serve people who are reluctant
to handle large volumes of data.
"Remote Monitoring Aids Data Access"
Technology Research News (01/22/03); Patch, Kimberly
- Researchers at Sandia National
Laboratories have discovered a new way to access large sets of remote data
in close to real time. Often, businesses and researchers have a hard time
visualizing and manipulating complex data over distances because of increased
lag time, which hampers usability, according to principal technical staff
member John Eldridge. Instead of sending entire chunks of data back and forth,
the scientists developed a video card system that just sends the video signals.
Tweaking advanced graphics cards originally developed for computer games,
Eldridge and his colleagues compressed the signals and sent them to a Gigabit
Ethernet interface card, which then routed the signals over the Internet.
On the receiving side, special hardware recreates the video stream from the
packets, decompresses it, and displays it on a monitor. While the process
of sending video instead of data is also bandwidth-intensive, it is faster
for the huge repositories of data today's businesses and researchers accumulate.
Doctors, for example, could use it to view magnetic resonance imaging (MRI)
files when diagnosing patients. Eldridge says the system uses a few tricks
to speed the system, including sending only changes in the screen display
and using reprogrammable logic chips to encode and decode video signals,
instead of separate components. He notes that Sandia is looking for a partner
to commercialize the system, and plans to adapt it for use with multi-screen
displays.
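The trick of "sending only changes in the screen display" can be illustrated with a toy delta encoder (a sketch of the general idea only; Sandia's hardware operates on compressed video signals, not Python lists):

```python
def frame_delta(prev, curr):
    """Encode only the pixels that changed, as (index, value) pairs."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev, curr)) if p != v]

def apply_delta(frame, delta):
    """Rebuild the current frame from the previous one plus the delta."""
    frame = list(frame)
    for i, v in delta:
        frame[i] = v
    return frame

prev = [0, 0, 0, 0]
curr = [0, 9, 0, 7]
delta = frame_delta(prev, curr)   # -> [(1, 9), (3, 7)]
assert apply_delta(prev, delta) == curr
print(delta)
```

When most of the screen is static, the delta is far smaller than the full frame, which is why this cuts the bandwidth and lag the article describes.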
"Electric Paper"
New Scientist (01/18/03) Vol. 177, No. 2378, P. 34; Fildes, Jonathan
- Scientists are working to transform
regular paper into an electronic display for moving images, changing colors,
and text. After asking paper makers four years ago if they were interested
in having electronic circuits, sensors, and displays added to their newspaper
and packaging, Magnus Berggren and colleagues at Linkoping University and
the Advanced Center for Research in Electronics and Optics in Sweden last
year unveiled traditional paper that featured electronic displays. The display's
"active matrix" resembled a laptop's thin-film transistor screen, with semiconducting
polymers printed on paper to make its transistors and display cells. Berggren,
who envisions electronic paper being used for large, low-resolution displays,
is currently demonstrating a seven-segment display that resembles a digital
clock. The technology could lead to chameleon-like wallpaper, flashing cereal
boxes and toy packaging, poster-like displays in shops, and changing text
in magazines within five years. However, challenges remain, including concerns
about how to power electronic paper. Researchers also must find a way to
make the displays change faster, and improve their color quality.
The impact of using a tool to design pages
- Original Page
- Edited Page
DSS Security?
- Help Wanted: Steal This Database
- Worm exposes apathy, Microsoft flaws
- Net Worm Causes More Disruptions
- Tracking the Worm
Artificial intelligence is again in the news. Garry Kasparov will again
face a champion chess program to see who plays the better chess. Read
the NYT article.
From DSS News, March 2, 2003:
What do I need to know about Data Warehousing/OLAP?
The answer to this question depends upon who is asking. Managers need to
be familiar with some DW/OLAP terminology (the basic what questions) and
they need to have an idea of the benefits and limitations of these
decision support components (the why questions). More technical people
in Information Systems need to know how and when to develop systems
using these components. This short DW/OLAP FAQ consolidates answers from
some recent email questions and from a number of questions previously
answered at DSSResources.COM. The bias in this FAQ is definitely towards
what managers need to know. Some more technical questions related to
DW/OLAP were answered in the Ask Dan! of February 17, 2002. That Ask
Dan! answered the following questions: Is a Data Warehouse a DSS? What
is a star schema? How does a snowflake schema differ from a star schema?
Also, for people who want definitions for technical terms like derived
data, hypercube, pivot and slice and dice the OLAP Council glossary
(1995) is online at http://dssresources.com/glossary/olaptrms.html.
Q. What is a Data Warehouse?
- A. A data warehouse is a database designed to support a broad range of
decision tasks in a specific organization. It is usually batch updated
and structured for rapid online queries and managerial summaries. Data
warehouses contain large amounts of historical data. The term data
warehousing is often used to describe the process of creating, managing
and using a data warehouse.
Q. What is On-line Analytical Processing (OLAP)?
- A. OLAP is software for manipulating multidimensional data from a
variety of sources. The data is often stored in a data warehouse. OLAP
software helps a user create queries, views, representations and
reports. OLAP tools can provide a "front-end" for a data-driven DSS.
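The core OLAP operations on a multidimensional cube, slicing along a dimension and rolling an aggregation up, can be sketched with plain dictionaries (a toy illustration with made-up sales figures, not any vendor's API):

```python
# A tiny in-memory cube: (region, product, quarter) -> sales.
cube = {
    ("East", "widgets", "Q1"): 100, ("East", "gadgets", "Q1"): 80,
    ("West", "widgets", "Q1"): 120, ("West", "gadgets", "Q2"): 90,
    ("East", "widgets", "Q2"): 110,
}

def slice_cube(cube, dim, value):
    """Slice: fix one dimension, e.g. keep only the Q1 cells."""
    return {k: v for k, v in cube.items() if k[dim] == value}

def rollup(cube, dim):
    """Roll up: aggregate one dimension away, summing the measure."""
    out = {}
    for key, v in cube.items():
        reduced = key[:dim] + key[dim + 1:]
        out[reduced] = out.get(reduced, 0) + v
    return out

q1_slice = slice_cube(cube, 2, "Q1")       # three Q1 cells
by_region = rollup(rollup(cube, 2), 1)     # total sales per region
print(by_region)                           # {('East',): 290, ('West',): 210}
```

"Dicing" is just slicing on several dimensions at once; real OLAP engines add pre-aggregation and storage layouts on top of these same operations.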
Q. What is the difference between data warehousing and OLAP?
- A. The terms data warehousing and OLAP are often used interchangeably.
As the definitions suggest, warehousing refers to the organization and
storage of data from a variety of sources so that it can be analyzed and
retrieved easily. OLAP deals with the software and the process of
analyzing data, managing aggregations, and partitioning information into
cubes for in-depth analysis, retrieval and visualization. Some vendors
are replacing the term OLAP with the terms analytical software and
business intelligence.
Q. When should a company consider implementing a data warehouse?
- A. Data warehouses or a more focused database called a data mart should
be considered when a significant number of potential users are
requesting access to a large amount of related historical information
for analysis and reporting purposes. So-called active or real-time data
warehouses can provide advanced decision support capabilities.
Q. What data is stored in a data warehouse?
- A. In general, organized data about business transactions and business
operations is stored in a data warehouse. But, any data used to manage a
business or any type of data that has value to a business should be
evaluated for storage in the warehouse. Some static data may be compiled
for initial loading into the warehouse. Any data that comes from
mainframe, client/server, or web-based systems can then be periodically
loaded into the warehouse. The idea behind a data warehouse is to
capture and maintain useful data in a central location. Once data is
organized, managers and analysts can use software tools like OLAP to
link different types of data together and potentially turn that data
into valuable information that can be used for a variety of business
decision support needs, including analysis, discovery, reporting and
planning.
Q. Database administrators (DBAs) have always said that having
non-normalized or de-normalized data is bad. Why is de-normalized data
now okay when it's used for Decision Support?
- A. Normalization of a relational database for transaction processing
avoids processing anomalies and results in the most efficient use of
database storage. A data warehouse for Decision Support is not intended
to achieve these same goals. For Data-driven Decision Support, the main
concern is to provide information to the user as fast as possible.
Because of this, storing data in a de-normalized fashion, including
storing redundant data and pre-summarizing data, provides the best
retrieval results. Also, data warehouse data is usually static, so
anomalies will not occur from operations like adding, deleting, or
updating a record or field.
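The trade-off can be seen in miniature below: the normalized layout needs a join-style lookup per row, while the denormalized, pre-summarized copy answers the same question with a single read (table contents are invented):

```python
# Normalized, transaction-style layout: facts reference a lookup table.
products = {1: "widgets", 2: "gadgets"}
sales = [(1, 100), (1, 50), (2, 75)]          # (product_id, amount)

# "Sales by product name" requires a join-like lookup for every row.
by_name = {}
for pid, amount in sales:
    name = products[pid]                      # the join step
    by_name[name] = by_name.get(name, 0) + amount

# Denormalized warehouse copy: the name is stored redundantly and the
# total is pre-summarized at load time, so retrieval is a single read.
sales_denorm = [("widgets", 100), ("widgets", 50), ("gadgets", 75)]
summary = {"widgets": 150, "gadgets": 75}

assert by_name == summary                     # same answer, cheaper to read
print(summary["widgets"])
```

The redundancy is safe precisely because the warehouse copy is batch loaded and otherwise static, as noted above.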
Q. How often should data be loaded into a data warehouse from
transaction processing and other source systems?
- A. It all depends on the needs of the users, how fast data changes and
the volume of information that is to be loaded into the data warehouse.
It is common to schedule daily, weekly or monthly dumps from operational
data stores during periods of low activity (for example, at night or on
weekends). The longer the gap between loads, the longer the processing
times for the load when it does run. A technical IS/IT staffer should
make some calculations and consult with potential users to develop a
schedule to load new data.
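The calculation the staffer should make is a simple throughput estimate; the figures below are placeholders, not benchmarks:

```python
def load_window_hours(rows_per_day, gap_days, load_rows_per_hour):
    """Rough load time: a longer gap between loads means a bigger batch."""
    return rows_per_day * gap_days / load_rows_per_hour

# Nightly vs. weekly loads for a system generating 2M rows/day and
# loading at 5M rows/hour (illustrative numbers only).
print(load_window_hours(2_000_000, 1, 5_000_000))   # nightly batch
print(load_window_hours(2_000_000, 7, 5_000_000))   # weekly batch
```

If the weekly estimate exceeds the available low-activity window, load more often; that is the consultation with users the answer above recommends.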
Q. What are the benefits of data warehousing?
- A. Some of the potential benefits of putting data into a data warehouse
include: 1. improving turnaround time for data access and reporting; 2.
standardizing data across the organization so there will be one view of
the "truth"; 3. merging data from various source systems to create a
more comprehensive information source; 4. lowering costs to create and
distribute information and reports; 5. sharing data and allowing others
to access and analyze the data; 6. encouraging and improving fact-based
decision making.
Q. What are the limitations of data warehousing?
- A. The major limitations associated with data warehousing are related to
user expectations, lack of data and poor data quality. Building a data
warehouse creates some unrealistic expectations that need to be managed.
A data warehouse doesn't meet all decision support needs. If needed
data is not currently collected, transaction systems need to be altered
to collect the data. If data quality is a problem, the problem should be
corrected in the source system before the data warehouse is built.
Software can provide only limited support for cleaning and transforming
data. Missing and inaccurate data cannot be "fixed" using software.
Historical data can be collected manually, coded and "fixed", but at
some point source systems need to provide quality data that can be
loaded into the data warehouse without manual clerical intervention.
Q. How does my company get started with data warehousing?
- A. Build one! The easiest way to get started with data warehousing is
to analyze some existing transaction processing systems and see what
type of historical trends and comparisons might be interesting to
examine to support decision making. See if there is a "real" user need
for integrating the data. If there is, then IS/IT staff can develop a
data model for a new schema and load it with some current data and start
creating a decision support data store using a database management
system (DBMS). Find some software for query and reporting and build a
decision support interface that's easy to use. Although the initial data
warehouse/data-driven DSS may seem to meet only limited needs, it is a
"first step". Start small and build more sophisticated systems based
upon experience and successes.
|
From DSS News, January 20, 2002:
According to Herbert Simon in a 1986 report, "There are no more
promising or important targets for basic scientific research than
understanding how human minds, with and without the help of computers,
solve problems and make decisions effectively, and improving our
problem-solving and decision-making capabilities."
Simon, Herbert A. "Decision Making and Problem Solving." Research
Briefings 1986: Report of the Research Briefing Panel on Decision Making
and Problem Solving. Washington, DC: National Academy Press, 1986
(http://dieoff.org/page163.htm).
|
From DSS News, January 6, 2002:
According to Knight and McDaniel (1979), "Basically, there are three
occasions when organizations are faced with nonroutine decision
situations and must use collegial or political structures to make
choices. The first occasion arises when the organization is faced with
scarce resources. Then the organization must answer the question 'What
are we doing that we can stop?' The second case occurs when the
organization has excess resources. The question is 'Can we do something
that we haven't done before?' The third case develops when the
organization feels the need for systems improvement. The fundamental
question in this case is 'Can we do what we are now doing better?' (p.
142)"
Knight, K.E., and R.R. McDaniel, Jr. Organizations: An Information
Systems Perspective. Belmont, CA: Wadsworth Publishing Co., 1979.
|
From DSS News, September 9, 2001:
Aaron Wildavsky studied budgeting and the use of information in
organizations. His findings emphasize the need for Decision Support
Systems and the difficulty in constructing them. In a 1983 article,
Wildavsky concludes "The very structure of organizations -- the units,
the levels, the hierarchy -- is designed to reduce data to manageable
and manipulatable proportions. ... at each level there is not only
compression of data but absorption of uncertainty. It is not the things
in themselves but data-reduction summaries that are passed up until, at
the end, executives are left with mere chains of inferences. Whichever
way they go, error is endemic: If they seek original sources, they are
easily overwhelmed; if they rely on what they get, they are easily
misled." --
from Wildavsky, A., "Information as an Organizational Problem," Journal
of Management Studies, January, 1983, p. 29.
|
From DSS News, July 1, 2001:
In his 1971 book, C. West Churchman discussed many topics related to
supporting decision makers. Early in that book he stated "Knowledge can
be considered as a collection of information, or as an activity, or as a
potential. If we think of it as a collection of information, then the
analogy of a computer's memory is helpful, for we can say that knowledge
about something is like the storage of meaningful and true strings of
symbols in a computer. ... Put otherwise, to conceive of knowledge as a
collection of information seems to rob the concept of all its life. ...
knowledge resides in the user and not in the collection. It is how the
user reacts to a collection of information that matters. ... Thus
knowledge is a potential for a certain type of action, by which we mean
that the action would occur if certain tests were run. For example, a
library plus its user has knowledge if a certain type of response will
be evoked under a given set of stipulations ... (p. 9-11)"
Churchman, C.W. The Design of Inquiring Systems, Basic Books, New York,
NY, 1971.
|
For those wondering when artificial intelligence will truly take
root, here's a bulletin: it already has. Look at this New York Times
article.
From DSS News, June 3, 2001:
According to Gordon Davis (1974), "The value of information is the value
of the change in decision behavior because of the information (less the
cost of the information). An interesting aspect of this concept is that
information has value only to those who have the background knowledge to
use it in a decision. The most qualified person generally uses
information most effectively but may need less information since
experience (frame of reference) has already reduced uncertainty when
compared with the less-experienced decision maker." (p. 180)
Davis, Gordon B., Management Information Systems: Conceptual
Foundations, Structure, and Development. New York: McGraw-Hill, 1974.
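Davis's definition lends itself to a simple back-of-the-envelope calculation: the value of information is the value of the change in decision behavior, less the cost of the information. The sketch below is illustrative only; the function name and all figures are hypothetical:

```python
def value_of_information(payoff_with_info, payoff_without_info, info_cost):
    """Davis (1974): the value of information is the value of the change
    in decision behavior caused by the information, less its cost."""
    return (payoff_with_info - payoff_without_info) - info_cost

# Hypothetical example: better demand data improves an ordering decision
net_value = value_of_information(payoff_with_info=12000,
                                 payoff_without_info=10000,
                                 info_cost=500)
print(net_value)  # prints 1500
```

Note that the calculation also captures Davis's second point: a decision maker whose experience has already reduced uncertainty sees a smaller payoff difference, so the same information is worth less to them.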
|
From DSS News, June 3, 2001:
From "Ask Dan": Is there a Theory of Decision Support Systems?
- Yes and No ... This question has not been addressed extensively in the
academic Decision Support Systems literature. I can't discuss the
answer or answers to this question adequately in this column, but I'll
try to provide a starting point for a more complete paper.
Let me begin by briefly reviewing what I consider the broadest set of
ideas or propositions that come closest to the start of a theory of
decision support or decision support systems. The propositions all come
from the work of the late Herbert Simon.
From Simon's classic Administrative Behavior (1945) ...
- Simon's Proposition 1: Information stored in computers can increase
human rationality if it is accessible when it is needed for making
decisions.
- Simon's Proposition 2: Specialization of decision-making functions is
largely dependent upon the possibility of developing adequate channels
of communication to and from decision centers.
- Simon's Proposition 3: Where a particular item of knowledge is needed
repeatedly in decision making, the organization can anticipate this need and,
by providing the individual with this knowledge prior to decision, can
extend his area of rationality. This is particularly important when
there are time limits on decisions.
From Simon's paper on "Applying Information Technology to Organization
Design," we can identify three additional propositions in a Theory of DSS.
- Simon's Proposition 4: "In the post-industrial society, the central
problem is not how to organize to produce efficiently (although this
will always remain an important consideration), but how to organize to
make decisions--that is, to process information."
- Simon's Proposition 5: From the information processing point of view,
division of labor means factoring the total system of decisions that
need to be made into relatively independent subsystems, each one of
which can be designed with only minimal concern for its interactions
with the others.
- Simon's Proposition 6: The key to the successful design of information
systems lies in matching the technology to the limits of the attentional
resources... In general, an additional component (man or machine) for an
information-processing system will improve the system's performance only
if:
- 1. Its output is small in comparison with its input, so that it
conserves attention instead of making additional demands on attention;
- 2. It incorporates effective indexes of both passive and active kinds
(active indexes are processes that automatically select and filter
information for subsequent transmission);
- 3. It incorporates analytic and synthetic models that are capable not
merely of storing and retrieving information, but of solving problems,
evaluating solutions, and making decisions.
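Simon's first two criteria can be illustrated with a minimal sketch: a component whose output is small relative to its input, using an "active index" that automatically selects and filters items for transmission. The function name and data below are hypothetical illustrations, not from Simon:

```python
def active_index(items, is_relevant):
    """An 'active index' in Simon's sense: a process that automatically
    selects and filters information for subsequent transmission."""
    return [item for item in items if is_relevant(item)]

# Hypothetical unit reports flowing toward an executive
reports = [{"unit": "sales", "variance": 0.02},
           {"unit": "ops",   "variance": 0.41},
           {"unit": "hr",    "variance": 0.01}]

# Pass upward only large variances: the output is small relative to the
# input, so the component conserves attention rather than demanding more.
exceptions = active_index(reports, lambda r: r["variance"] > 0.10)
print(exceptions)  # only the "ops" report survives the filter
```

A component failing criterion 1 would do the opposite, forwarding every report upward and adding to the demands on executive attention.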
A number of other authors have discussed topics related to a theory of
DSS and perhaps in a later column I can examine ideas about when DSS are
and should be used and ideas related to the design and development of
DSS. Simon's propositions address the need for and effectiveness of
decision support systems.
Simon, Herbert A., Administrative Behavior, A study of decision-making
processes in administrative organization (3rd edition). New York: The
Free Press, 1945, 1965, 1976.
Simon, Herbert A., "Applying Information Technology to Organization
Design", Public Administration Review, Vol. 33, pp. 268-78, 1973.
|
From DSS News, May 6, 2001:
What companies have gained a competitive advantage by building a DSS?
The problem in answering this question is that firms want to
maintain the advantage they gain and hence they are reluctant to release
many details about strategic information systems. Also, DSS that provide
an advantage at one point in time may seem dated or ordinary after only
a few years have elapsed. The advantage can be fleeting and short-term
(cf. Feeny and Ives, 1990).
Porter and Millar (1985) provided a major theoretical perspective on how
information could provide competitive advantage. A number of other
theories related to competitive advantage suggest that deployment of
resources like innovative decision support systems can provide a
sustainable business advantage.
DSS can be important and useful and necessary and yet not provide a
competitive advantage. Many consulting firms and vendors focus on
gaining competitive advantage from a data warehouse or a business
intelligence system, and that can happen. Many DSS projects, however,
don't deliver such results, and they probably weren't intended to create
competitive advantage.
In a now classic study, Kettinger et al. (1994) identified a number of
companies that had gained an advantage from Information Systems. Some of
those systems were Decision Support Systems, but most were Transaction
Processing Systems. The following DSS examples are from their paper:
Air Products -- vehicle scheduling system
Cigna -- risk assessment system
DEC -- expert system for computer configuration
First National Bank -- asset management system
IBM -- marketing management system
McGraw Hill -- marketing system
Merrill Lynch -- cash management system
Owens-Corning -- materials selection system
Procter & Gamble -- customer response system
Time and technology have had a negative impact on how some of the above
systems are perceived. A major lesson learned is that a company needs to
continually invest in a Strategic DSS to maintain any advantage.
INTERFACES, Volume 13, No. 6, Nov.-Dec. 1983, included a number of DSS
case studies that have since become classics, for example the
distribution of industrial gases at Air Products and Chemicals, the
ASSESSOR pre-test market evaluation system, and Southern Railway's
computer-aided train dispatching system.
Chapter 2 of my Hyperbook further explores the question of gaining
competitive advantage from DSS. In that chapter, examples include
decision support systems at Frito-Lay, L.L. Bean, Lockheed-Georgia,
Wal-Mart and Mrs. Field's Cookies.
If a company is trying to develop a Decision Support System that
provides a competitive advantage, managers and analysts should ask how
the proposed DSS affects company costs, customer and supplier relations
and managerial effectiveness. Managers should also attempt to assess
how the proposed strategic system will impact the structure of the
industry and the behavior of competitors.
|
DSS Wisdom from DSS News by D. J. Power, 2(8),
April 8, 2001
Alvin Toffler (1970) argued "Information must flow faster than ever
before. At the same time, rapid change, by increasing the number of
novel, unexpected problems, increases the amount of information needed.
It takes more information to cope with a novel problem than one we have
solved a dozen or a hundred times before. It is this combined demand
for more information at faster speeds that is now undermining the great
vertical hierarchies so typical of bureaucracy."
Toffler, Alvin, Future Shock. New York: Random House, 1970, p. 121.
|
Is there an "information culture" that encourages building Decision
Support Systems? The response below is from DSS News by D. J. Power,
2(8), April 8, 2001.
Let's assume there is such a phenomenon as an "information culture".
Culture refers to shared assumptions, beliefs and ideas
of a group. Information culture would then refer to shared assumptions,
beliefs and ideas about obtaining, processing, sharing and using
information in decision-making and organizational management.
Rick Tanler, who founded Information Advantage, identified
four different information cultures. In a Decision Wire column titled
"Becoming the Competitor All Others Fear", Tanler stated "The four
information cultures are Spectator (observes changes within their
market); Competitor (initiates change within their market); Predator
(attacks market principles); and Information Anarchy (the dysfunctional
information culture)."
Tanler noted in that same column that "Almost every data warehouse is
justified to senior management in terms of the competitive advantages
that will accrue to the enterprise if better information is available
to decision-makers." Tanler argued the Competitor Culture will encourage
managers to develop better information systems and that will lead to
better decisions and better corporate performance. This conclusion is
very optimistic ... and it assumes that initiating change always leads
to positive outcomes.
Also, Tanler argued many companies have a Spectator Culture and
need to move to a Competitor Culture. Tanler believed the
"difference between the Spectator Culture and the Competitor Culture
is that the former focuses on decision-support (What information do
users need?) and the latter focuses on decision-implementation (What are
users doing with the information?)."
Tanler's four culture categories create a "buzzword" approach to
organizational change. It sounds like he is really concerned about how
to design systems rather than about culture. Certainly we need to ask
what users do and might do with information and how we can better
support their decision-making. Building a DSS is much more than asking
potential users what information they need. Relying on managers to
"divine" what information will be or is needed won't work; such an
approach is much too passive to succeed.
Tanler noted we need to examine the "role of information within the
context of the entire decision cycle." We need to understand what a
decision cycle is to bring about this change. In the management and
decision making literature, a decision cycle or process starts with the
identification of an opportunity or recognition of a problem. The cycle
includes analysis and formulation of decision alternatives. The cycle
also includes approval of a decision and communications and actions
needed to implement the decision and measure its impact.
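The decision cycle described above can be sketched as an ordered list of phases. The phase labels and the toy cycle-time calculation below are illustrative only, not a standard taxonomy:

```python
# Phases of the decision cycle, in order (labels are illustrative)
DECISION_CYCLE = [
    "identify an opportunity or recognize a problem",
    "analyze and formulate decision alternatives",
    "approve a decision",
    "communicate and act to implement the decision",
    "measure the decision's impact",
]

def cycle_time(phase_durations):
    """Total elapsed time across all phases. 'Compressing the cycle'
    means reducing this sum -- not skipping phases outright."""
    return sum(phase_durations.values())

# Hypothetical durations in days, one per phase
print(cycle_time({phase: 1 for phase in DECISION_CYCLE}))  # prints 5
```

The point of the sketch is that compression applies to the durations, not the list: dropping the analysis or measurement phase shortens the cycle but is exactly the inappropriate support the column warns against.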
Tanler argued the "objective is to compress the decision cycle". He
concluded that by "moving from a Spectator Culture to a Competitor
Culture, an organization can make smarter decisions in shorter cycle
times to ultimately become the competitor that all others fear."
Reducing cycle time is a desirable goal, but a shorter decision cycle
does not by itself improve decision making. If decision support is
provided inappropriately just to reduce cycle time, decisions can be
negatively affected and results will be worse, not better.
We need to maintain a humble attitude when our goal is to improve human
decision behavior. Decision-making is as much art as science, but we
may be able to inform it with facts and analysis.
In my opinion, a positive information culture encourages active
information use and recognizes that technology can help with a variety
of decision tasks and can speed up the clerical side of those tasks, but
that people remain the thinkers and decision makers who must assume
responsibility for organizational actions.
Businesses aren't intelligent; people are. Decision support has to focus
on helping managers make decisions.
Well ... I didn't set out to critique Tanler's ideas on information
culture and successful implementation of technologies to support
decision making, but in a general way this Ask Dan has done that. Let me
know what you think of when you hear the term information culture ... Is
there a proactive, decision support culture?
Tanler, Rick. "Becoming the Competitor All Others Fear", DecisionWire,
Vol. 1, Issue 11, January 1999.
|
DSS Wisdom from DSS News, Volume 2, Number 6, March 11, 2001.
Peter Drucker (1954) wrote in his chapter titled "Making Decisions" that
"... management is always a decision-making process." He notes in the
same chapter that in regard to the "new" decision tools from Operations
Research managers "must understand the basic method involved in making
decisions. Without such understanding he will either be unable to use
the new tools at all, or he will overemphasize their contribution and
see in them the key to problem-solving which can only result in the
substitution of gadgets for thinking, and of mechanics for judgment.
Instead of being helped by the new tools, the manager who does not
understand decision-making as a process in which he has to define, to
analyze, to judge, to take risks, and to lead to effective action, will,
like the Sorcerer's Apprentice, become the victim of his own bag of
tricks." (p. 368)
Drucker, P. The Practice of Management. New York: Harper and Brothers,
1954.
|
"Moody Computers," Interactive Week (02/26/01) Vol. 8, No. 8, P. 47; Steinert-Threlkeld, Tom (as seen in Tech News, Volume 3, Issue 172: Monday, March 5, 2001):
- Marvin Minsky, co-founder of the Artificial Intelligence Laboratory at the Massachusetts Institute
of Technology and author of "Society of Mind" and its forthcoming sequel, "The Emotion
Machine," argues that emotions are merely another way in which human beings think, rather
than a process independent of or antithetical to thinking. His central idea, he says, "is that each
of the major emotions is quite different. They have different management organizations for how
you are thinking you will proceed." Minsky contends that common-sense reasoning is what
allows us to handle and manipulate these different emotions, to choose which emotion is best
for handling which situation, even though we are not aware when each type of thinking is
occurring. This is also, he says, what separates machine thinking from human thinking.
Machines are not able to see the same piece of knowledge represented in multiple ways. Minsky
says, "You have to build a system that looks at two representations, two expressions or two
data structures, and quickly says in what ways are they similar and what ways are they different.
Then another knowledge base says which kinds of differences are important for which kind of
preference." He contends that such ways of thinking could, for example, benefit search engines,
allowing software to consider how to organize and execute a search based on what human users
might want rather than relying on keywords and algorithms. The ability to approach a problem
from many different ways and then solve it is how Minsky defines intelligence and is what he
means by an intelligent, emotional machine. He dismisses the fear that "emotional" machines
could somehow become irrational, as an emotional human being can become irrational and
commit an act that may endanger or harm others, because that again reflects the human bias
that emotions and thinking are two entirely different things.
http://www.zdnet.com/intweek/stories/news/0,4164,2690670,00.html
More on Professor Herbert Simon can be found at the Center for Economic
Policy Analysis, which includes PDF versions of some of his papers,
including:
- "Rational Decision Making in Business Organizations," American Economic Review, 1979.
- "Decision Making and Problem Solving," 1986.
James March wrote in 1978, "Prescriptive theories of choice are
dedicated to perfecting the intelligence of human action by imagining
that action stems from reason and by improving the technology of
decision. Descriptive theories of choice are dedicated to perfecting the
understanding of human action by imagining that action makes sense. Not
all behavior makes sense; some of it is unreasonable. Not all decision
technology is intelligent; some of it is foolish." (p. 604) --
from March, J. G. "Bounded Rationality, Ambiguity, and the Engineering
of Choice", Bell Journal of Economics, Vol. 9, 1978, pp. 587-608.
"New business procedures would then be analogous to new mutations in nature. Of a number of procedures, none of which can be
shown either at the time or subsequently to be truly rational, some may supplant others because they do in fact lead to better results.
Thus while they may have originated by accident, it would not be by accident that they are still used. For this reason, if an economist
finds a procedure widely established in fact, he ought to regard it with more respect than he would be inclined to give in the light of his
own analytic method." (Roy F. Harrod, 1939, Oxford Economic Papers) ... from The Maximization Debates.
As seen in Edupage (2/22/2001) -- SOFTWARE TRIES 'CONCEPT MAPPING':
New concept mapping software is available for free download from
the University of Florida's Institute for Human and Machine
Cognition. Researchers at the institute are working to make
computers easier to use, exactly the theory behind concept
mapping, which links information in a direct and understandable
way. The researchers expect that concept maps, or Cmaps, will
help change the information navigation on Web sites by providing
a graphical depiction of how that information is linked and
organized rather than by following the traditional method of
organizing information page by page. Funded by NASA and the Navy
as part of a larger project to create similar learning tools, the
software is among the best Cmap programs available, claims Barry
Brosch of Cincom, a commercial firm negotiating a software
license from the University of Florida. Having already made the
software available for nonprofit use, the institute is in the
process of determining how it will offer the software for
commercial application. (Associated Press, 19 February 2001)
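A concept map can be represented as a set of labeled links between concepts, which is what lets software draw a graphical depiction of how information is connected rather than presenting it page by page. The sketch below is a hypothetical illustration of that idea, not the institute's actual Cmap format:

```python
# A concept map as (source concept, link label, target concept) triples.
# The concepts and labels here are hypothetical examples.
cmap = {
    ("concept map", "is a", "graph of labeled links"),
    ("concept map", "can replace", "page-by-page navigation"),
    ("web site", "is organized by", "pages"),
}

def links_from(cmap, concept):
    """All (link label, target) pairs leaving a concept, sorted for
    stable display."""
    return sorted((label, dst) for src, label, dst in cmap if src == concept)

print(links_from(cmap, "concept map"))
```

Rendering such a triple set as nodes and labeled edges yields exactly the "direct and understandable" linked view of information that the article describes.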
Read about Professor Herbert Simon
"As Easy As Breathing"
Boston Globe (02/04/01) P. H1; Weisman, Robert
- Michael Dertouzos, director of
MIT's Laboratory for Computer Science, is pioneering the Oxygen research
project, an initiative to develop what Dertouzos calls "human-centric
computing." Human-centric computing revolves around highly intuitive
technology so pervasive as to be invisible, Dertouzos explains. "From now on,
computer systems should focus on our needs and capabilities, instead of
forcing us to bow down to their complex, incomprehensible, and mechanistic
details," Dertouzos writes in his upcoming book, "The Unfinished Revolution:
Human-Centered Computers and What They Can Do For Us." Private industry and
the Pentagon are underwriting the MIT Lab's research, a five-year, $50 million
project involving 150 to 200 researchers. The Oxygen Alliance includes such
industry leaders as Hewlett-Packard, Philips Research, and Nokia Research
Center. At a feedback session in mid-January, Fred Kitson of HP Labs advised
Dertouzos to concentrate on creating a "pervasive computing ecosystem" to
narrow the gap between slow idea development and commercialization. "Initially
it will be difficult because it requires taking a customer-centric rather than
a technology-centric point of view," explains Adrian J. Slywotzsky of Mercer
Management Consulting. In his book, Dertouzos describes three primary
technologies the Oxygen project is exploring. The Handy 21 would be a handheld
device that incorporates the functions of most palm-sized products currently
on the market. The Enviro 21 would be a computing environment the size of a
room or office capable of speech recognition, face recognition, motion
detection, and wall-mounted displays. The third type of technology, the N21
Network, would link the Handy and the Enviro together. Dertouzos expects
human-centric computing to be realized in the next 10 to 20 years.