Decision Support Systems Foundations
These bits of wisdom all appeared in electronic newsletters. Citations to the original sources have been provided.
From ACM TechNews, December 29, 2003.
"This Car Can Talk. What It Says May Cause Concern."
New York Times (12/29/03) P. C1; Schwartz, John
- Privacy proponents say the relationship between American motorists and their cars is changing with the emergence of advanced automotive technologies such as the OnStar system, a location tracking service popular for its promise to thwart carjackers, yet which experts such as Cornell University's Curt Dunnam argue could just as easily be used by law enforcement or even hackers to monitor car owners' whereabouts and activities. OnStar's Geri Lama assures that her company cannot release customers' location data to law enforcement officials except under court order, while only the craftiest hackers can crack the code needed to track location-based data or unlock car doors. Nevertheless, Privacy Rights Clearinghouse founder Beth Givens, talking about the erasure of motorists' personal freedom with the advent of monitoring systems, declares, "Now, the car is Big Brother." Other automotive technologies that have privacy advocates worried include electronic toll systems, chips installed within tires, and most notably, "black box" sensors designed to relay critical information in the last few seconds before a collision. Though Sally Greenberg of Consumers Union acknowledges that such technology can save lives, she wants the federal government to exercise caution in making sure the technology is not used to the detriment of personal privacy. The government is currently weighing regulations to standardize black box data as well as how that data is collected, while the Institute of Electrical and Electronics Engineers is working out a global black box standard. However, cases of privacy infringement involving vehicle tracking systems have already cropped up: Data recorded by an OnStar system was employed to convict a man of a fatal hit-and-run accident in 2001, while last year a woman was stalked by a man who installed a tracking device in her car.
From Edupage, December 22, 2003.
EU Feud Over Sharing Traveler Info With U.S.
- Some members of the European Parliament have raised objections to a ruling by the European Commission that allows member countries to share personal information about international travelers with the United States. The European Commission said it was able to negotiate sufficient compromises with officials in the United States that the transfer of sensitive information would be permitted, despite EU laws that explicitly prohibit such actions. Included in the compromised proposal are provisions that reduce the amount of information transferred about each traveler, reduce the amount of time the information will be kept, and prevent using the information for domestic law enforcement. Those in the European Parliament objecting to the agreement noted "that the transfer is without the consent of the passengers, that the transfer in itself is illegal according to EU data-protection laws, and that the U.S. has no proper data-protection laws nor a fully independent data-protection officer." Wired News, 22 December 2003 http://www.wired.com/news/politics/0,1283,61680,00.html
From ACM Tech News, December 19, 2003.
"AI Think, Therefore I Am"
APC (12/03); Braue, David
- Virtual agents--autonomous, self-directing computer programs that are social and reactive--are being developed for numerous tasks ranging from the simple to the highly sophisticated, but making them effective requires a delicate balance between psychology and technology. A virtual agent is ineffective if it is incapable of building trust within the user with whom it is interacting, and one of the biggest hindrances to establishing trust is an unrealistic appearance, such as jerky motions, unnatural expressions, and poor speech-to-lip synchronization. A U.S. study found that smiling, blinking, and other facial cues make a dramatic difference in users' perceptions of agents; in addition, even visually appealing agents can be less effective if they are too attentive or inaccurate. The results of the study indicate that the best virtual agents are bodiless, and such agents are being employed to collect new data rather than retrieve old data. The behavior of disembodied agents is not directed by their personalities, but by set parameters such as how far and how deep their search for data should extend, and many researchers believe these programs will be well-suited as personal assistants tasked with categorizing, indexing, and presenting information meaningfully. Cutting-edge virtual agents can be found in a joint venture between the U.S. Army and the University of Southern California's Information Sciences Institute, in which peacekeepers are trained to deal with angry or hostile people in war-torn regions by interacting with simulated characters--each imbued with its own personality and emotional expressions--in a virtual setting. Robotic vehicles driven by tireless agents that use cooperation and negotiation tactics to interact with one another are being used at Australian mining sites, notes Hugh Durrant-Whyte of Sydney University's ARC Center of Excellence in Autonomous Systems.
"Web Tools Don't Always Mesh With How People Work"
Newswise (12/17/03)
- There are numerous techniques Web users employ to recall the Web pages they visit (sending emails to themselves or writing sticky notes, for example), but most people do not avail themselves of such methods when they decide to revisit pages, say University of Washington's William Jones and Harry Bruce and Microsoft Research's Susan Dumais. Bruce says, "People should have fast, easy access to the right information, at the right time, in the right place, in the right quantity to complete the task at hand." The researchers have studied this phenomenon, with funding from the National Science Foundation, in the hopes that more useful tools for keeping track of information can be developed. Their work implies that "keeping" methods stem from the different ways people plan to use the data, but bookmarks, which are the chief "keeping" instrument of most Web browsers, lack many of the advantages users desire. Furthermore, Dumais, Jones, and Bruce have learned that no matter what "keeping" technique a user prefers, most users often attempt to return to a Web site using three other methods: Directly entering a URL in the Web browser, conducting search engine queries, or using another Web site or portal to access the page. Jones and Bruce, along with their students, enhanced a Web browser with an "Add to Favorites" dialog, which allowed people to add comments about a link, send it through email, or save the page to their hard drive from a single dialog; however, most testers did not adopt this option, since they had fallen out of the habit of using bookmarks. Now the researchers are devising a conceptual architecture for how people decide to retain information to use later, which Bruce terms PAIN (personal anticipation of information need). The team is also seeking a patent for tools and techniques to address the problem of "information fragmentation" by meshing all the data scattered across different documents and media into a single "My Life" taxonomy.
From K@E Newsletter, December 17, 2003.
Privacy vs. Security: Who Draws the Line?
- Derek Smith wants to know more about you. Why? He wants to protect you.
As the Chairman and CEO of ChoicePoint, an Alpharetta, Georgia based company that specializes in providing identification and credential verification to business and government clients, Smith hopes to create a safer and more secure society through the responsible use of information. In the second in a series of articles, Smith and faculty at Emory University's Goizueta Business School debate the issue: where should we draw the line between an individual's right to privacy and creating a more safe and secure society?
Read the article
It's All in the Numbers. Or Is It?
- Quantitative analysis is widely believed to enhance the persuasiveness of business cases and proposals. But just how does quantification influence managerial decision-making? In their paper, "The Persuasive Effects of Quantification in Managerial Judgment," Kathryn Kadous and Kristy Towry of Emory University's Goizueta Business School join a coauthor in testing exactly how quantification influences persuasion.
Read the article
From ACM Tech News, December 17, 2003.
"Web Services Put GIS on the Map"
Computerworld (12/15/03) Vol. 31, No. 56, P. 30; Mitchell, Robert L.
- Web services technology is popularizing third-party geographic information system (GIS) information, even at companies that maintain their own internal GIS databases and systems. Commercial developer Edens & Avant uses GIS Web services to create quick overlay maps of prospective shopping center sites, integrating relevant Census Bureau, Environmental Protection Agency, local government, and commercial data. Edens & Avant systems manager David Beitz says the Web services model is for prospecting, but in-depth analysis involves in-house data. As Web services support for GIS continues to grow, analysts expect more industries to use GIS in greater capacity. Previously, specialists acted as gatekeepers to GIS systems, but now decision-makers and other GIS users have direct access to GIS data. Developers are also taking advantage of Web services to integrate GIS into their applications, using offerings such as Microsoft's MapPoint Web Services, for example. The Open GIS Consortium (OGC) is behind standards to support these Web services, including Web Map Service, Web Feature Service, and Geography Markup Language, which is based on XML. The payoff for Florida Farm Bureau Insurance is up-to-date information used to approve homeowner policy applications, says senior strategic planner Steve Wallace. Still, GIS over Web Services is hampered by differing standards at the base level, and OGC specification program director Carl Reed says the OGC is in discussions with states and counties to iron out simple definitions such as road width.
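The OGC services named above are plain HTTP requests, so a client can assemble a map query with nothing more than a URL. The sketch below builds a WMS 1.1.1 GetMap request; the parameter names come from the WMS specification, but the server endpoint, layer names, and bounding box are invented for illustration:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layers, bbox, width=600, height=400):
    """Build an OGC Web Map Service GetMap request URL.

    Parameter names follow the WMS 1.1.1 specification; the endpoint
    and layer names used below are placeholders, not a real server.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "STYLES": "",
        "SRS": "EPSG:4326",                       # WGS 84 lat/lon
        "BBOX": ",".join(str(c) for c in bbox),   # minx,miny,maxx,maxy
        "WIDTH": str(width),
        "HEIGHT": str(height),
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical demographic overlay for a prospective shopping-center site:
url = wms_getmap_url("http://example.com/wms",
                     ["census_tracts", "epa_sites"],
                     (-81.1, 33.9, -80.9, 34.1))
```

A Web Feature Service query is assembled the same way, with REQUEST=GetFeature and a GML response instead of an image.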
From New York Times, December 14, 2003.
Over-Reliance on PowerPoint Leads to Simplistic Thinking
- NASA's Columbia Accident Investigation Board has fingered the agency's over-reliance on Microsoft PowerPoint presentations as one of the elements leading to last February's shuttle disaster. The Board's report notes that NASA engineers tasked with assessing possible wing damage during the mission presented their findings in a confusing PowerPoint slide so crammed with bulleted items that it was almost impossible to analyze. "It is easy to understand how a senior manager might read this PowerPoint slide and not realize that it addresses a life-threatening situation," says the report. NASA's findings are echoed in a pamphlet titled "The Cognitive Style of PowerPoint," authored by information presentation theorist Edward Tufte, who says the software forces users to contort data beyond reasonable comprehension. Because only about 40 words fit on each slide, a viewer can zip through a series of slides quickly, spending barely 8 seconds on each one. And the format encourages bulleted lists -- a "faux analytical" technique that sidesteps the presenter's responsibility to link the information together in a cohesive argument, according to Tufte, who concludes that ultimately, PowerPoint software oozes "an attitude of commercialism that turns everything into a sales pitch." (New York Times 14 Dec 2003)
Perception Is Reality by Peter Coffee in eWeek, December 1, 2003.
From the ACM TechNews, October 3, 2003.
"Researchers Create Super-Fast Quantum Computer Simulator"
NewsFactor Network (10/02/03); Martin, Mike
- Japanese researchers have devised a tool designed to boost the speed at which classical computers can run quantum algorithms, which engineers hope will aid the design of quantum-computer hardware and software. The researchers used a "quantum index processor" to manipulate an algorithm formulated by AT&T researcher Peter Shor in which the time it takes to factor a number increases only as a polynomial function of the number's size. With traditional factoring algorithms, the time required to factor a number rises exponentially with the number's size. Running quantum algorithms on classical computers is problematic: For instance, because Shor's algorithm only generates a "highly probable" correct result, it must be run repeatedly to boost the outcome's likelihood. The amount of simulated quantum bits needed to process these repetitions becomes unwieldy after a short time. Running Shor's algorithm with the quantum index processor sidesteps the complexity needed to run the algorithm on a classical system employing complementary metal oxide semiconductor technology, according to Texas A&M electrical engineering professor Laszlo Kish. He estimates that the quantum index processor is 100 trillion trillion times speedier than A&M's workstations, and overtakes any other quantum simulator currently in existence. "The proposed system will be a powerful tool for the development of quantum algorithms," declare Minoru Fujishima, Kento Inai, Tetsuro Kitasho and Koichiro Hoh of the University of Tokyo.
Click Here to View Full Article http://www.newsfactor.com/perl/story/22407.html
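The polynomial-versus-exponential contrast is easier to see against the classical baseline. The sketch below is ordinary trial division (not Shor's algorithm, which needs quantum hardware or a simulator): its loop runs up to the square root of n times, which is exponential in the number of digits, whereas Shor's running time grows only polynomially in the digit count:

```python
def trial_division(n):
    """Classical factoring by trial division.

    The outer loop executes roughly sqrt(n) times, i.e. about
    10**(d/2) steps for a d-digit number -- exponential in the input
    *size*, which is the contrast the article draws with Shor's
    polynomial-time quantum algorithm.
    """
    f = 2
    factors = []
    while f * f <= n:
        while n % f == 0:
            factors.append(f)
            n //= f
        f += 1
    if n > 1:
        factors.append(n)  # remaining cofactor is prime
    return factors

# 15 = 3 * 5, the canonical small number used in Shor demonstrations
assert trial_division(15) == [3, 5]
```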
"Machines Learn to Mimic Speech"
Wired News (10/03/03); Delio, Michelle
- Attendees at this week's SpeechTek tradeshow said speech technology companies have started to take a more realistic view in realizing that voice technology has not yet reached the point where computers can actually understand human speech. "Now that the magic is gone, we don't believe in using speech technology unless it serves a viable purpose--making it easier for people to work with a computer system, making systems more secure or even making computers more fun," remarked speech application programmer Frank Vertram. Still, SpeechTek showcased some impressive products: One ATM product was designed to aid visually handicapped or technology-evasive users by allowing them to hear descriptions of onscreen options through headphones. Nuance displayed a "say anything" natural language application that employs a database to interpret users' intent from "freestyle conversations." Cepstral unveiled two sets of computer voices, one geared for the American market and the other for the Canadian market--the American voices are imbued with a casual tone, while the Canadian voices speak with a French-Canadian accent. IBM highlighted WebSphere speech offerings upgraded with VoiceXML 2.0 support, which allows speech technology to be embedded within Web sites. SpeechTek's Speech Solutions challenge, which was set up to prove that programming speech applications does not necessarily have to be a frustrating experience, tasked seven teams with developing a workable application capable of identifying car trouble and scheduling a session at a repair shop by the end of the day; all seven teams met the challenge by 5:00 p.m. "Once we get past the mistaken idea that computers should be able to really understand us or that we can engage in meaningful conversations with machines, the new voice and speech technology is absolutely amazing," declared SpeechTek organizer James Larson.
Click Here to View Full Article http://www.wired.com/news/technology/0,1282,60677,00.html
From the ACM TechNews, September 24, 2003.
"Putting Your Calls Into Context"
Wired News (09/23/03); Gardiner, Debbi
- Researchers at Carnegie Mellon University's Institute of Technology (CIT) have devised SenSay, a context-aware cell-phone technology that keeps track of sent emails, phone calls, and the user's location while employing sensors to analyze the environment so that users can be alerted to calls appropriately and non-intrusively. "Because people can see when you are available, the time it takes to hand off or receive information is greatly reduced," explains Dr. Asim Smailagic of Carnegie Mellon's Institute for Complex Engineered Systems. SenSay features an armband containing motion sensors, a microphone, galvanic skin-response sensors, and a heat-flux sensor to measure body temperature, while a global positioning system device relays the user's position; based on these readings, the phone can automatically adjust ringer volume, vibration, and phone alerts, and assign variable levels of urgency to calls. Intel helped finance SenSay's development and is interested in being the CIT lab's manufacturing partner, while the military has also expressed an interest. SenSay will not be able to expand its usability until storage and computational capacity is added and one-piece integration is achieved, while privacy experts are concerned that the technology could be abused. "Something that is a tracking device can, for social reasons, become something that tracks you," notes Lee Tien of the Electronic Frontier Foundation. Smailagic says all of these issues have been considered by CIT researchers, who are working on ways to address them. Carter Driscoll of the Independent Research Group thinks SenSay may only have limited market appeal among travelers or high-tech executives who need to be contacted at any time.
Click Here to View Full Article http://www.wired.com/news/technology/0,1282,60428,00.html
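As a rough illustration of the context-to-alert mapping described above, the sketch below applies a few hand-written rules to hypothetical sensor readings. It is not the actual SenSay decision logic; the rules, thresholds, and mode names are all invented:

```python
def ringer_mode(ambient_noise_db, in_motion, in_meeting):
    """Choose a phone alert mode from simple context readings.

    A hypothetical rule set in the spirit of SenSay's context-aware
    adjustments -- not the CMU system's real logic. The 70 dB noise
    threshold and the mode names are illustrative only.
    """
    if in_meeting:
        return "vibrate"          # suppress audible alerts entirely
    if ambient_noise_db > 70:     # loud environment: ring louder
        return "ring_loud"
    if in_motion:
        return "ring_normal"      # user walking; default volume
    return "ring_quiet"           # still and quiet surroundings

assert ringer_mode(40, False, True) == "vibrate"
assert ringer_mode(85, True, False) == "ring_loud"
```

A real system would, as the article notes, also fold in urgency scoring per call rather than a single global mode.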
From Knowledge@Wharton, September 8, 2003.
The Input Bias: How Managers Misuse Information When Making Decisions
- Fans of the hit TV comedy “Seinfeld” may remember an episode in which Jerry’s friend George leaves his car parked at work so that the boss will think George is putting in long hours, even when he’s not. The idea, of course, is that George’s apparent productivity will net him a better performance review and a higher raise or bonus. Wharton professor Maurice Schweitzer would call George’s behavior “an attempt to invoke the input bias – the use of input information (in this case the false impression of long hours) to judge outcomes.”
09/10/03 http://knowledge.wharton.upenn.edu/articles.cfm?catid=13&articleid=840
From the ACM TechNews, July 30, 2003.
"Computer Helps Translate Gap Between 'He Said, She Said'"
Toledo Blade (09/08/03); Woods, Michael
- The Winnow computer program developed by researchers at the Illinois Institute of Technology can determine whether anonymous messages are written by men or women with over 80 percent accuracy, and such technology could be used to increase the effectiveness of textbooks, improve crime-solving techniques, or enhance commercial and workplace communications, among other things. Project leader Dr. Shlomo Argamon says the effort differs from other research initiatives into gender-specific communications in that it focuses on textual rather than oral exchanges. Winnow studied more than 600 documents in the British National Corpus, scanning for specific linguistic patterns, or "determiners," culled from analysis of documents known to be written by male or female authors. Determiners that Winnow relies on to categorize author gender include women's preference for pronouns, such as "I," "you," "she," "her," "their," "myself," "herself," and "yourself" and men's tendency to use pronouns like "it," "this," "that," "these," "those," and "they." The program was able to correctly identify author gender in 73 percent of the scientific documents it analyzed, indicating that sex-related differences are apparent even in highly technical texts. Argamon thinks that revelations about distinctive writing styles between men and women uncovered by Winnow could have a profound effect on education, paving the way for gender-specific textbooks, for example. His team is attempting to refine the method to establish the age, educational level, and ethnicity of anonymous authors, as well as their gender, a breakthrough that could help police identify writers of ransom notes. Meanwhile, Georgetown University linguistics professor Dr. Deborah Tannen believes Argamon's research could help bridge a sexual communications gap in the workplace.
Click Here to View Full Article http://www.toledoblade.com/apps/pbcs.dll/article?AID=/20030908/NEWS08/109080101
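A toy version of this idea can be sketched from the pronoun lists quoted above. The function below simply counts cue words and compares the tallies; it is a deliberately naive stand-in for illustration, not the Winnow algorithm (an online linear classifier with learned weights) or the published model:

```python
# Pronoun lists taken from the article's description of Winnow's
# "determiners"; the counting rule itself is a toy illustration,
# not the actual trained classifier.
FEMALE_CUES = {"i", "you", "she", "her", "their", "myself",
               "herself", "yourself"}
MALE_CUES = {"it", "this", "that", "these", "those", "they"}

def guess_author_gender(text):
    """Guess author gender by tallying cue-word occurrences."""
    words = text.lower().split()
    f = sum(w in FEMALE_CUES for w in words)
    m = sum(w in MALE_CUES for w in words)
    if f == m:
        return "unknown"
    return "female" if f > m else "male"

assert guess_author_gender("I told her she could do it herself") == "female"
```

The real system weights many such features over long documents, which is how it reaches the reported accuracy; a bare count like this would perform far worse.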
"BA Predicts the Future"
InfoWorld (09/01/03) Vol. 25, No. 34, P. 46; Angus, Jeff
- Business analytics (BA) software focuses on future trends, not on summarizing and reporting historical data, as does business intelligence (BI) software. Both systems are linked to databases and allow analysis, and so have been confused in the marketplace; companies employing BA, however, should have been able to foresee the economic downturn three years ago. Data Warehouse Institute founder Herb Edelstein warns business managers not to become so distracted by BI software that they ignore the special, predictive attributes of BA. SAS Institute analytical intelligence director Anne Milley explains that BI is more about canned information, while BA is about exploration and finding answers to new questions; BA tools allow analytical professionals to quickly investigate and follow up on highlighted trends, automatically generating new views based on user queries. The ability to generate many reports and bring the most relevant ones to the fore is what makes BA unique, says Kxen's Joerg Rathenberg. BA applications work best when operating in data-rich environments, such as CRM and ERP systems, and can help telecom marketing groups identify at-risk customers or pharmaceutical firms narrow drug discovery testing, for example. Because BA today is mainly for professionals with statistics experience, IT personnel do not need to be involved as heavily as with BI, where IT staff work with business executives to define report models; International Data's Henry Morris, who first coined the term "business analytics," says the next wave of such systems will be "policy hubs" uniting relevant BI and BA systems for more intelligent analysis.
Click Here to View Full Article http://www.infoworld.com/article/03/08/29/34FEbusan_1.html
From the ACM TechNews, September 5, 2003.
"It's Tricky, Grafting Brando's Sneer to Bogart's Shrug"
New York Times (09/04/03) P. E8; Taub, Eric A.
- Researchers at the University of Southern California are attempting to dissect human movement and speech in order to produce software that can generate virtual humans that are utterly authentic both in appearance and action. Such a breakthrough would be especially attractive to Hollywood, allowing filmmakers to create realistic yet inexpensive artificial characters whose physical and personality traits can be patched together from existing actors. This is not an easy challenge--Dr. Ulrich Neumann of USC's Integrated Media Systems Center explains, "There is such intricacy and detail and proper timing involved in the science of human expressiveness that when something is not right we know it, but we can't explain it." One project at the center involves researchers employing photos to measure the distance between points on a face; they have programmed a computer to build caricatures of one person that incorporate the features of another by overlaying the distances of the first face onto the second. Another part of the research involves filming people to get a sense of how emotions are physically expressed: Their faces are divided into nine areas, and the movements of three muscles in each area are plotted out by distance and timing. From these measurements is derived a formula that expresses the movement of interacting muscles over time to form specific emotional expressions. This formula can be used to manipulate the muscles on a virtual actor to re-create those same expressions. The next step is to add mouth and eye movements that look unpredictable, and the final step is to make the mouth and face move synchronously with the words being spoken while including reflection of what has been said and what is about to be said.
"Does IM Have Business Value?"
Business Communications Review (08/03) Vol. 33, No. 8, P. 40; Bellman, Bob
- Instant messaging is valued among enterprises for its presence, which allows users to know ahead of time who is available and unavailable to chat; near-real-time message delivery, which offers a higher level of interaction than email; and multiple correspondence, which enables users to be more efficient and productive. "IM lets you work more effectively in an information-rich, time-critical world," declares Jon Sakoda of IMLogic. Other benefits of IM include significant savings in international phone calls and other forms of communication--a February report from Osterman Research estimates that almost 81% of responding companies lowered phone use and 67% reduced email use through IM. In addition, IM does not cause network congestion, nor does IM inhibit network operations. Though some IM services are free, the companies that offer them expect to realize new revenue by bundling IM with other products and value-added services, or via IM "bot" applications. However, IM's availability to anyone worries managers concerned with upholding network security; viruses and hacks can piggyback on IM-enabled file transfers, and IM easily allows business transactions to be carried out and proprietary data to be disseminated without an audit trail. Other drawbacks to IM include incompatible IM applications, the intense difficulty in deactivating IM once it is activated, and IM's potential to interrupt important tasks. A number of years will pass before IM standards are mature enough to facilitate interoperability, and before companies understand the best ways to leverage IM.
From the ACM TechNews, September 3, 2003.
"'Conversational' Isn't Always What You Think It Is"
Speech Technology (08/03) Vol. 8, No. 4, P. 16; Byrne, Dr. Bill
- Dr. Bill Byrne of Stanford University argues that, while "conversational" speech interfaces should boost usability overall, their true range of applications is limited by designers' tendency to have "conversational" refer to the type of exchange a person expects to have with a customer-based call center agent who has no previous relationship with the caller. This hurdle will have to be overcome if the technology is to penetrate the enterprise; the interface must support a conversational style aligned to both the task at hand and the expected relationship between the caller and the virtual agent. Byrne observes that the most efficient conversational speech interfaces may seem brusque and even rude when taken out of context. Imbuing conversational interfaces with "personality" can be another barrier to usability: Byrne cites Stanford University's Byron Reeves, who recently stated that "Personality [at least on the street] usually means 'a lot' of personality. That often results in over-the-top interfaces that can overdo what real people [even those with great personality] would do in similar face-to-face encounters." Byrne reports that designers often make the mistake of adding bells and whistles--witty turns of phrase, for example--that may impress first-time callers, but become irritating with frequent use. The challenge lies in achieving a balance between the maintenance of the conversational interface's dynamic quality and its usability. Byrne suggests that history trackers be embedded into the code or application design tool so prompts can be changed to suit the level of user familiarity, and adds that designers must realize that not all call situations apply to certain basic design credos.
http://www.speechtechmag.com/issues/8_4/cover/2183-1.html
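Byrne's history-tracker suggestion amounts to keying prompt wording off a per-caller usage count. A minimal sketch of that idea, with invented prompt text and an arbitrary familiarity threshold:

```python
# Hypothetical verbose/terse prompt pair; the wording and the
# three-call threshold are invented for illustration, not drawn
# from Byrne's article.
PROMPTS = {
    "novice": "Please say the name of the city and state you are "
              "calling about, for example 'Austin, Texas'.",
    "expert": "City and state?",
}

call_counts = {}  # per-caller history tracker

def prompt_for(caller_id, threshold=3):
    """Return the verbose prompt for new callers, terse for regulars."""
    n = call_counts.get(caller_id, 0)
    call_counts[caller_id] = n + 1
    return PROMPTS["novice"] if n < threshold else PROMPTS["expert"]

# The first three calls hear the full prompt; later calls the short one.
for _ in range(3):
    assert prompt_for("555-0100") == PROMPTS["novice"]
assert prompt_for("555-0100") == PROMPTS["expert"]
```

In a deployed system the counter would live in the dialog platform's session store rather than a module-level dict.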
From the ACM TechNews, August 29, 2003.
"Software Self-Defense"
ABCNews.com (08/27/03); Eng, Paul
- Computer security experts say that users are the weakest link in the defense against computer viruses and worms, and that automated security updates and PC scanning are needed to fill the gap. The SoBig virus, which has infected over 100,000 PCs since Aug. 18, is only activated when users open an email attachment. Central Command COO Keith Peer says the software security industry's continual drumming about not opening suspicious email attachments is not working because users are "glazing over." Furthermore, the MSBlaster virus could have been stopped if many users had updated their Windows systems with a new software patch. Microsoft is considering shipping Windows XP with Auto Update on by default, so that non-technical users would not have to figure out what software patches do and how to install them. Network Associates' McAfee VirusScan and Symantec's Norton AntiVirus already use automatic updates and might even scan users' computers for suspicious activity signaling an unidentified infection; any program collecting email addresses from the hard drive or changing Web browser settings would be flagged and possibly disabled remotely by the software firm. Electronic Frontier Foundation technologist Seth Schoen says taking control away from the user is dangerous, and suggests security companies might introduce code that would discourage use of competitors' products. In addition, license agreements often waive manufacturers' responsibilities in case of defects. Schoen would approve of intrusive security measures if vendors give users a clear understanding and choice to reverse updates. However, Network Associates' Bryson Gordon warns that even with stringent software protections, viruses will continue to proliferate by way of social engineering tricks rather than technical prowess.
"Upgrade and Archive: The Ongoing Threat of Data Extinction"
TechNewsWorld (08/28/03); Hook, Brian R.
- Unlike printed documents and microfilm records, electronic records cannot be preserved without the maintenance of all the distributed data and metadata, explains Andrew Lawrence of Eastman Kodak's commercial imaging group. Paper and microfilm are self-contained, but digital files cannot be continuously accessed without their associated operating systems and applications; archived digital documents must be regularly updated to newer formats because nearly all software developers ultimately cease support for their older formats. "Over time, the problem is that media decays and hardware and software platforms evolve, placing the electronically stored information at risk," Lawrence observes. His advice is to keep electronic records available in native formats for the short term, and preserve those same documents in analog-based form as a long-term reference archive. Artesia products director Dan Schonfeld stresses the importance of archiving the viewers, players, and readers that are needed to access digital files along with the files themselves, and notes that his company's software performs that function. He also advises companies to keep tabs on the applications required to view or read media files, which can be a difficult task. Meanwhile, Glenn Widener of SwiftView thinks the use of easily convertible print formats--Hewlett-Packard's Printer Control Language (PCL) in particular--is the best archival solution. There are PCL viewers available that can view documents 15 to 20 years old, and Widener is confident that "There will always be commercial tools readily available to read it."
Click Here to View Full Article http://www.technewsworld.com/perl/story/31199.html
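Schonfeld's advice to "keep tabs on the applications required to view or read media files" can be approximated with a simple inventory check: flag archived files whose format no longer has a viewer on hand. In the sketch below, both the viewer registry and the file names are hypothetical:

```python
# Hypothetical registry of file extensions for which a viewer is
# still archived alongside the documents; the entries are invented
# examples, not a real product inventory.
VIEWERS = {".pdf": "acrobat reader", ".pcl": "pcl viewer", ".tif": "tiff viewer"}

def at_risk(archive_files):
    """Return archived files whose format has no registered viewer."""
    risky = []
    for name in archive_files:
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext not in VIEWERS:
            risky.append(name)  # no viewer preserved: data-extinction risk
    return risky

assert at_risk(["report.pdf", "plot.wmf"]) == ["plot.wmf"]
```

A production archive would track format versions as well as extensions, since a viewer for one revision of a format may not open another.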
"Gap Analysis"
Government Enterprise (08/03); Chabrow, Eric
- Approximately 50 percent of the federal government's IT workforce, as well as a significant portion of state government IT professionals, will reach retirement age in a few years, which has sparked both negative and positive outlooks on how this development will affect IT project management and legacy system maintenance; the optimists are confident that there will not be a mass exodus of IT personnel because of the current economy, while most IT professionals recently polled by the Office of Personnel Management report a deep personal satisfaction in their work and its importance to their agencies' goals. Additionally, optimists believe that the departure of retirees with legacy skills will be offset by the recruitment of younger workers skilled in security, networking, and the Internet. However, officials such as Interior Department CIO Hord Tipton and former NASA CIO Paul Strassmann note that many soon-to-retire federal IT workers are either ready to leave or are tired of bureaucracy as well as continuous fighting between departments and agency directors, congressional appropriators, and the Office of Management and Budget. U.S. comptroller general David Walker argues that the government must expand its effort to lure new IT workers and retain veterans by becoming more user-friendly, while Congress is debating a salary raise for federal employees. Meanwhile, Steven Kelman of Harvard University says the government should ease midcareer hiring and recruit talent to supervise outsourced IT projects. Although Texas CIO Carolyn Purcell admits that seasoned workers make ideal project managers, she does not rule out the possibility that younger pros could also handle the job. Some CIOs are concerned that younger IT workers will balk at the prospect of learning legacy technologies, but Strassmann and Treasury Department CIO Drew Ladner say there is no reason to think such a thing will happen, as long as managers are able to keep workers motivated and excited. Agency-wide IT consolidation is one of the unanticipated pluses of the focus on a possible IT worker shortage.
Click Here to View Full Article
From the ACM TechNews, August 25, 2003.
"Tool Blazes Virtual Trails"
Technology Research News (08/20/03); Patch, Kimberley
- A new virtual prototyping tool helps users keep their bearings when navigating the computer aided design (CAD) representation of ships, airplanes, or buildings. Developed at the University of North Carolina, the system uses algorithms and a graph map to keep users' avatars from floating through the free space of the virtual design. Instead, avatars walk along the floor and cannot pass through walls, allowing users to understand the virtual model better. The system is based on polygon models and requires preprocessing to link mapped graph nodes. After the global navigation mode is built, a local navigation mode lets users literally plot their own course in the virtual design, specifying where they want to go and how they want to get there. University of North Carolina researcher Brian Salomon says preprocessing the graph map for a 12-million-polygon power plant model took over 12 hours, but that the actual graph takes surprisingly little storage. Salomon notes that industry partners include Boeing, Newport News Shipbuilding, and large architectural firms. The system can be used with current CAD products.
Click Here to View Full Article
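The navigation scheme described above, a preprocessed graph of walkable locations searched at runtime, can be sketched with a plain breadth-first search. The room names and graph below are invented for illustration and are not taken from the UNC system:

```python
from collections import deque

def shortest_route(graph, start, goal):
    """Breadth-first search over a precomputed navigation graph.
    Nodes are walkable locations; edges connect mutually reachable
    spots, so a returned path can never pass through a wall."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable from start

# Toy graph: rooms of a ship model linked by doorways.
nav_graph = {
    "deck": ["stairs"],
    "stairs": ["deck", "corridor"],
    "corridor": ["stairs", "engine_room"],
    "engine_room": ["corridor"],
}
print(shortest_route(nav_graph, "deck", "engine_room"))
# ['deck', 'stairs', 'corridor', 'engine_room']
```

Because avatars can only move along graph edges, "floating through free space" is ruled out by construction, which is the point of the UNC approach.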
"State of Speech Standards"
Speech Technology (08/03) Vol. 8, No. 4, P. 20; Larson, James A.
- The World Wide Web Consortium (W3C), the Internet Engineering Task Force (IETF), and the European Telecommunications Standards Institute (ETSI) are developing standards for speech systems, which consist of a cell phone, telephone, or other user device; a document server where scripts and files are stored; an application server housing a voice browser to download and interpret documents; a speech server with technology modules tasked with the recognition and generation of speech; and a telephone connection device with a call control manager. The VoiceXML Forum submitted version 1.0 of the Voice Extensible Markup Language (VoiceXML), a dialog language for writing speech applications, to the W3C in March 2000; it was refined into VoiceXML 2.0 by the Voice Browser Working Group, which also distilled and polished the Speech Recognition Grammar Specification and Speech Synthesis Markup Language as distinct specifications. Most Voice Browser Working Group members have agreed to a royalty-free licensing scheme, though there currently are a small number of non-royalty-free patents that may be key to VoiceXML 2.0. Last February, W3C founded a multimodal interaction working group that will soon post working drafts of Ink Markup Language, Extended MultiModal Annotation, and Multimodal Framework Note. Meanwhile, the IETF's Speech Services Control Working Group is developing protocols for managing remote speech recognition, speaker identification and verification, and speech synthesis; proposed standards thus far submitted by the working group include requirements for distributed control of ASR, SI/SV, and TTS resources, and protocol evaluation. The American National Standards Institute's InterNational Committee for Information Technology Standards, in conjunction with the BioAPI Consortium, has proposed the Biometrics Application Programming Interface (BioAPI), which supports the enrollment, verification, and identification of users. ETSI's Aurora project has yielded a distributed speech recognition standard that performs feature extraction on the client and enlists the server to handle the remainder of speech recognition; ETSI has also built a list of vocal commands in English, Spanish, Italian, German, and French.
http://www.speechtechmag.com/issues/8_4/cover/2182-1.html
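As a point of reference, a VoiceXML 2.0 dialog is an ordinary XML document interpreted by the voice browser. A minimal illustrative form (the grammar file name is invented) might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="greeting">
    <field name="answer">
      <prompt>Say yes or no.</prompt>
      <!-- Grammar written per the Speech Recognition Grammar Specification -->
      <grammar type="application/srgs+xml" src="yesno.grxml"/>
      <filled>
        <prompt>You said <value expr="answer"/>.</prompt>
      </filled>
    </field>
  </form>
</vxml>
```

The prompt text would be rendered by the speech server's synthesis module, and the caller's reply matched against the referenced grammar, which is exactly the division of labor among the components listed above.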
From the ACM TechNews, August 22, 2003.
"Technology Key to Anticipating Outages"
Associated Press (08/22/03); Jesdanun, Anick
- Experts hope the antiquated national grid can be upgraded with sophisticated monitoring technology so that it anticipates power failures such as those that caused the recent cascading blackout, although such a vision is 10 years away and could cost tens of billions of dollars. Luther Dow of the Electric Power Research Institute explains that the goal is to develop an intelligent, self-repairing grid capable of monitoring and evaluating its performance, as well as taking steps to eliminate reliability problems. PJM Interconnection uses computers that abstract thousands of power-flow measurements into a graphic representation and run simulations of outages. PJM general manager Robert Hinkel thinks a cascading power failure triggered by local problems could be prevented by implementing automated sharing between neighboring utilities, while advanced artificial intelligence that can project sudden power load changes would also be beneficial. SmartSignal, IBM, and others are also developing improved methods for data analysis using wireless sensors. Meanwhile, Tom Glock of Arizona Public Service reports that his company has been testing computer systems that can help operators gain a wider perspective of grid operations rather than keeping track of myriad lines and substations by simultaneously monitoring separate displays.
http://www.eweek.com/article2/0,3959,1228542,00.asp
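The outage simulations mentioned above boil down to "remove a line, redistribute its load, and see which neighbors overload next." A toy sketch of that cascade logic, with invented line names and an evenly-split redistribution rule chosen purely for brevity (real power-flow models are far more detailed):

```python
def cascade(lines, failed_line):
    """Toy cascading-failure check.  `lines` maps a line name to a
    (load, capacity) pair.  When a line fails, its load is split
    evenly among the surviving lines; any line pushed past its
    capacity fails in turn, until the situation stabilizes."""
    failed = {failed_line}
    changed = True
    while changed:
        changed = False
        alive = {n: lc for n, lc in lines.items() if n not in failed}
        if not alive:
            break  # total blackout
        extra = sum(lines[n][0] for n in failed) / len(alive)
        for name, (load, cap) in alive.items():
            if load + extra > cap:
                failed.add(name)
                changed = True
    return failed

# Three lines running near capacity: losing one takes down the rest.
grid = {"A": (60, 100), "B": (80, 100), "C": (90, 100)}
print(sorted(cascade(grid, "A")))  # ['A', 'B', 'C']
```

Even this caricature shows why a grid running close to its limits is fragile, and why simulating hypothetical outages in advance, as PJM does, is valuable.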
"Total Information Overload"
Technology Review (08/03) Vol. 106, No. 6, P. 68; Jonietz, Erika
- Privacy advocates allege that the Defense Department's Terrorism Information Awareness (TIA) project would merge public and private databases into a vast "metabase" that would be mined to gather data on innocent American citizens, but Robert L. Popp of the Defense Advanced Research Projects Agency's (DARPA) Information Awareness Office denies these allegations, insisting that TIA's purpose "is developing a variety of information technologies into a prototype system/network to detect and preempt foreign terrorist attacks." He explains that DARPA is supplying operational agencies within the Defense Department and the intelligence community with analytical counterterrorism tools, adding that these agencies are using only the data and databases that existing legislation, policies, and regulations give them access to. Popp says TIA is not devising data-mining technologies to sift through transactional data such as the purchase of plane tickets to potential sites of terrorist attacks, emails, phone conversations, and newswire stories; instead, TIA is focused on the development and integration of tools that facilitate collaboration, analytics, and decision support, as well as biometrics, security, pattern recognition and predictive modeling, and foreign-language translation. He discusses the two threads that make up TIA activity--an operational thread and a pure R&D thread. The operational thread is built upon the premise that government-owned databases already contain the data needed for an effective counterterrorism strategy, while the R&D thread seeks to determine whether that strategy could be improved if the government had wider access to the information space, as well as address any related privacy issues. Popp attributes the privacy community's backlash against TIA to a misinterpretation of the project's purpose picked up by many news outlets and Web sites last November, yet admits that DARPA ought to have been more straightforward with Congress and the public.
To read more about TIA, visit http://www.acm.org/usacm.
"Helping the Group to Think Straight"
Darwin (08/03); Chapman, Rod
- Group decision support systems (GDSS)--software tools designed to enhance collaboration and boost productivity in face-to-face meetings--are growing more popular and rewriting the rules of decision-making on the executive level. GDSS encourages equal participation in conferences by allowing participants to remain anonymous and by imposing a turn-taking scheme, with the result being more sensible and unbiased decisions. A GDSS architecture usually consists of a meeting facilitator and a set of local area network-connected computers running software that streamlines collaborative jobs such as brainstorming, the classification and appraisal of ideas, voting, and assigning importance to alternative concepts. Electronic brainstorming via GDSS enables participants to concurrently contribute ideas, allowing more ideas to be offered in less time than in traditional conferences. A typical GDSS package displays the results of such collaborative sessions on a large screen and at participants' individual workstations so that group input can be viewed as a whole. Items are prioritized with the help of consensus building tools, while an agenda-setting element can narrow the group's focus. Thoughts and ideas are collected, organized, and edited through a multi-window setup, and the results can be published immediately after the meeting wraps up. The drawbacks of GDSS include low participation rates by people who type slowly or who do not like technology, a loss of body language and other nonverbal cues that are important communicative signposts, and confusion sowed by the lack of a skilled facilitator.
http://www.darwinmag.com/read/080103/group.html
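The consensus-building step of a GDSS, anonymous ballots rolled up into a ranked list, is simple to sketch. The agenda items and ballots below are invented:

```python
from collections import Counter

def prioritize(ballots):
    """Tally anonymous ballots (each a list of the item names that
    one participant endorses) and return the items ranked by the
    number of votes they received."""
    tally = Counter(item for ballot in ballots for item in ballot)
    return [item for item, _ in tally.most_common()]

ballots = [
    ["cut costs", "new hires"],
    ["cut costs", "training"],
    ["training", "cut costs"],
]
print(prioritize(ballots))  # ['cut costs', 'training', 'new hires']
```

Because the tally never records who cast which ballot, anonymity, and with it the more candid input the article describes, falls out of the data model for free.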
From the ACM TechNews, August 20, 2003.
"AI Depends on Your Point of View"
Wired News (07/29/03); Shachtman, Noah
- The Information Processing Technology Office (IPTO) of the Defense Advanced Research Projects Agency (DARPA) has launched an effort to develop computers that can think for themselves, and the Real-World Reasoning project is part of this effort. The project seeks to give computers the ability to study situations from multiple angles and learn from experience, possibly through the integration of straight-up logic, probabilistic reasoning, game theory, and strategic thinking. Human beings do not merely feed new information into a database, notes IPTO chief Ron Brachman. He says that "[the data's] got to jive with what we know already. Or we've got [to] adjust our previous understanding." One technique people use to facilitate that adjustment is to look at the situation from a different context. It is doubtful that the Real-World Reasoning project will yield computers that exhibit the same mental flexibility as people, although it is hoped that the program will improve their mode of reasoning.
Click Here to View Full Article http://www.technewsworld.com/perl/story/31172.html
From the ACM TechNews, August 18, 2003.
"Smart Chips Making Daily Life Easier"
BBC News (08/13/03)
- European researchers with the Smart-Its Project continue to make progress on "ubiquitous computing." During the recent computer graphics Siggraph exhibition in the United States, Smart-Its Project researcher Martin Strohbach explained that his colleagues at Lancaster University and other institutions in Zurich, Germany, Sweden, and Finland envision embedding all kinds of everyday household items with programmable microchip sensors, which would give them smarts. "For example, we have used a table as a mouse pointing interface so you can control the TV or computer," says Strohbach. Bookshelves that warn people when they are overloaded and water bottles that tell users when their contents need to be cooled are additional fun ideas for such technology, but ubiquitous computing could have more serious applications, and may even help save lives. Sensors placed in floors would be able to determine that an elderly person has fallen and is unable to stand up. And a medicine cabinet could be transformed into a unit that tracks its contents and guides people through taking medicine. Chips have even been developed for DIY flat-pack furniture that sense movement and use a voice to warn people when they are making a mistake in assembling the product.
http://news.bbc.co.uk/go/pr/fr/-/2/hi/technology/3144405.stm
"CMU Professor Wins Award for Program That Aids Decision-Making Process"
Pittsburgh Post-Gazette (08/11/03); Spice, Byron
- Artificial intelligence can be used to find the best overall solution in a competitive decision-making environment, according to work done by Carnegie Mellon University computer scientist Tuomas Sandholm; those decisions include real-life political ones, such as where to build a bridge or whether or not to award a gambling license. But Sandholm's startup company, CombineNet, has so far focused only on business solutions, saving companies such as Bayer, H.J. Heinz, and Procter & Gamble a total of $300 million through electronic auctions worth between $3 billion and $4 billion. For his so-called "combinatorial optimization technology," Sandholm will be given the Computers and Thought Award at the International Joint Conference on Artificial Intelligence in Mexico this month. Carnegie Mellon Center for Automated Learning and Discovery director Tom Mitchell says mechanisms such as combinatorial optimization technology are important for artificial intelligence because they harness group dynamics, similar to how the brain's nerve cells work together to make decisions. "Even though you feel like you're one person, you're actually 10 billion neurons," Mitchell says. Sandholm's work can be applied to any problem where a number of competing interests demand representation in a decision. By weighing each interest's needs against the others', the decision-making rules find the best solution for all parties involved. Sandholm says, "Clearing the market--deciding who should win--is a very hard optimization problem." Sandholm has won a patent for his method, which he says is the world's fastest for these NP-complete problems--a class of problem that is best illustrated by the Traveling Salesman question, where the fastest route through a large set of cities is needed.
http://www.post-gazette.com/pg/03223/210920.stm
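The "clearing the market" problem Sandholm refers to is winner determination in a combinatorial auction: choose the set of non-overlapping bids with maximum total value. A brute-force sketch with invented bids (CombineNet's actual solver obviously uses far more sophisticated search) shows the shape of the problem and why it explodes combinatorially:

```python
from itertools import combinations

def clear_market(bids):
    """Each bid is (set_of_items, price).  Find the revenue-maximizing
    set of non-overlapping bids by exhaustive search -- exponential in
    the number of bids, which is why real solvers need clever pruning."""
    best, best_value = (), 0
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            items = [i for b, _ in combo for i in b]
            if len(items) == len(set(items)):      # no item sold twice
                value = sum(p for _, p in combo)
                if value > best_value:
                    best, best_value = combo, value
    return best_value, best

bids = [({"a", "b"}, 10), ({"b", "c"}, 8), ({"c"}, 5), ({"a"}, 4)]
print(clear_market(bids)[0])  # 15  (accept the {a,b} and {c} bids)
```

Note the greedy answer (take the single highest bid first) is not always optimal, which is what makes winner determination an NP-complete optimization problem rather than a simple sort.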
From the ACM TechNews, August 11, 2003.
"Robot Challenge: Putting Artificial Intelligence to Work"
Voice of America News (08/04/03); Skirble, Rosanne
- Grace, the robot built in response to a general challenge by the American Association for Artificial Intelligence, is now preparing for more ambitious missions. Formally named the Graduate Robot Attending Conference in Edmonton, Grace successfully registered at the 18th annual National Conference on Artificial Intelligence in Canada, found her way to the meeting room, and then proceeded to deliver a prepared speech. She is a barrel-chested robot with no arms or legs, and moves on wheels guided by sensors near the ground. The robot features a flat panel computer screen that displays Grace's animated face, as well as a camera, speech synthesizer, and a microphone. The five research teams that contributed to Grace aim to improve her performance at the upcoming International Joint Conference on Artificial Intelligence in Acapulco, Mexico, from Aug. 9-15. Reid Simmons of the Carnegie Mellon University Robotics Institute leads the team that designed Grace's software and hardware structure. He says Grace previously did in one hour what a person would take 10 minutes to do. The goal this time is to improve her performance by allowing her to do multiple tasks at once--finding the end of the registration line while en route, for example. In Canada, Grace embarrassed her creators when she cut in line at the registration table. In addition to improved speed and movement, the Grace team also intends to bolster her conversational skills so that she can engage in more natural exchanges with human attendees. Grace will be accompanied in Mexico by George, her male counterpart who is identical except for voice and display; in the future, the researchers plan to make the two robots collaborate to finish tasks faster.
Click Here to View Full Article
"Glove Won't Speak for the Deaf"
Wired News (08/07/03); Batista, Elisa
- Some hearing-impaired people have conflicting feelings about technology designed to translate American Sign Language into spoken and written speech, the latest example being Jose Hernandez-Rebollar's AcceleGlove, a sensor-laden glove that converts hand and arm movements into vocalizations or text messages. Previous glove technologies take a long time to spell out words and have a small vocabulary. The AcceleGlove can translate almost 200 words and a few simple phrases, and can understand both the alphabet and dynamic gestures. George Washington University doctoral student Hernandez-Rebollar notes that a two-glove system will be needed to allow the wearer to communicate the entire ASL vocabulary, while a preinstalled dictionary would expand the range of gestures the glove can translate. However, fellow glove translator inventor Ryan Patterson of the University of Colorado observes that the technology is limited because it cannot take facial expressions into account. American Sign Language Institute director Paul Mitchell says that deaf people may be resentful of such technology, not just because of its limited vocabulary, but because it goes against their own cultural view that deafness is a unique trait rather than a disability. Certain organizations believe that imposing such technology on deaf people as a "cure" for their condition would force the hearing-impaired to radically alter their lives at the behest of the hearing world, rather than let the hearing community accommodate them.
http://www.wired.com/news/technology/0,1282,59912,00.html
From the ACM TechNews, August 8, 2003.
"Reasonable Computers"
ABCNews.com (08/05/03); Eng, Paul
- The Defense Advanced Research Projects Agency's (DARPA) Perceptive Assistant that Learns (PAL) program is an initiative to develop cognitive computer systems that can automatically perform many of the routine tasks that decision-makers are currently burdened with--chores such as answering email, scheduling meetings, and furnishing reports. Decision-makers' efficiency would be enhanced through the deployment of such digital assistants, which would be programmed to adapt to their users' needs via software that mimics the way people think and learn. DARPA has invested $22 million in SRI International's Cognitive Agent that Learns and Observes (CALO) program, a PAL-related project that seeks to mesh various expert software and technology developed by the military into a cognitive system that manages the many tasks and data typical of military decision-making. DARPA has thus far earmarked $7 million for Carnegie Mellon University's Reflective Agents with Distributed Adaptive Reasoning (RADAR) project. The idea behind RADAR, like CALO, is the consolidation of expert systems into a whole whose components can interact with each other. CMU researcher Scott Fahlman illustrates this concept by projecting that RADAR's email element would be trained to notify the scheduling component when it recognizes key phrases and associated times; the scheduler may then communicate with the email sender's agent to set up the most convenient meeting time. Both SRI and CMU researchers believe that smarter, more adaptive systems will be the inevitable result of increasing computing power.
"Smart Rooms"
Computerworld (08/04/03) Vol. 31, No. 37, P. 29; Anthes, Gary H.
- Carnegie Mellon University's "Barn" is a prototype conference room capable of recording everything that happens during a meeting through an array of microphones, cameras, projectors, and other equipment. Faculty advisor Asim Smailagic says the Barn was designed for meetings that aim to flesh out designs. "It's for brainstorming, idea generation, knowledge generation and knowledge transfer," he notes. Conference participants register their presence by donning radio-frequency identification tags, while wearable sensors allow the Barn to confirm their identity and constantly track their location; "social geometry" is used to adjust lighting and microphones according to attendees' physical position. A key component of the meeting area is a digital whiteboard outfitted with an intelligent interactive display, or "Thinking Surface," where concepts can be projected and updated via PC connections. Major decisions or brainstorms are flagged in meeting logs when someone pushes a "that was important" (TWI) button on his computer. TWI markers are useful for people who miss meetings and need to be brought up to speed quickly. Director of CMU's Human-Computer Interaction Institute Dan Siewiorek says future Barn research will focus on avoiding contradictory decisions among semi-independent subgroups within large project teams--and the headache of resolving those problems later on--by allowing liaisons in each subgroup to remotely audit the other groups' meetings quickly through the deployment of keyword recognition systems throughout the conference room. Siewiorek boasts that one of the standout characteristics of CMU researchers is their dedication to building technology around human issues, rather than vice-versa.
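The "that was important" button is essentially a timestamped bookmark into the meeting log. A minimal sketch, where the class and method names are hypothetical rather than CMU's actual software:

```python
import time

class MeetingLog:
    """Record a running transcript; pressing TWI bookmarks the most
    recent entry so absentees can jump straight to the highlights."""
    def __init__(self):
        self.entries = []   # (timestamp, text) pairs
        self.markers = []   # indices into self.entries

    def log(self, text):
        self.entries.append((time.time(), text))

    def twi(self):  # the "that was important" button
        self.markers.append(len(self.entries) - 1)

    def highlights(self):
        return [self.entries[i][1] for i in self.markers]

m = MeetingLog()
m.log("status update")
m.log("decision: ship in Q3")
m.twi()
print(m.highlights())  # ['decision: ship in Q3']
```

The same marker indices could drive playback of the room's audio and video recordings, which is what makes TWI useful for catching up on a missed meeting.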
"Is the Pen Mightier?"
CIO Insight (07/03) Vol. 1, No. 28, P. 67; Bolles, Gary A.
- Tablet PCs promise to boost worker productivity and support more flexible collaboration by capturing data and graphic information more efficiently, but there have been few major white-collar tablet PC implementations thus far, and many companies lack an efficient strategy for managing data once it is captured and stored. Tablet PCs are optimal for enterprises that already employ mobile computers that follow a similar design paradigm, such as pharmaceutical companies, retail stock management services, and warehouse inventory management; tablets may also be favored by companies that find personal digital assistants inadequate in terms of screen size or data storage capacity. In the end, however, "it really comes down to what the workflow is," observes John Keane of ArcStream Solutions. Experts such as Gartner VP Ken Dulaney consider the current crop of tablet PC products to be first-generation, which leaves plenty of room for improvement. The latest models owe a lot to progress in flat-panel display, microprocessor, disk storage, and wireless networking technologies. There are no solid forecasts as to where further tablet PC growth will occur: Acer America's Sumit Agnihotry says his company expects tablet PCs to further penetrate the mainstream in the last six months of 2003, but he acknowledges that most current tablet buyers are existing tablet customers. Additional barriers to adoption include the high cost of tablet PCs and their potential support costs, if off-the-shelf products do not fulfill corporate needs. The best approach is to test commercial tablet PC hardware and software by selecting the IT staff most likely to benefit from the technology as a test group; the incorporation of Wi-Fi into the networking infrastructure is also a plus.
http://www.cioinsight.com/article2/0,3959,1193351,00.asp
From the ACM TechNews, August 4, 2003.
"VR Accommodates Reality"
Technology Research News (08/06/03); Smalley, Eric
- Flight simulators and other virtual reality systems incorporate concrete elements to give artificial environments a ring of authenticity, and University of North Carolina and Disney Corporation researchers have developed a system that mixes real and virtual objects in an artificial reality. In such a hybrid virtual environment, people can, for instance, interact with real window drapes while viewing a virtual representation of their hands parting simulated drapes to reveal a simulated view out a simulated window. Benjamin Lok, currently at the University of Florida, says the system's core component is a technique for ascertaining when actual and artificial objects collide and supplying a realistic response. The shapes and positions of real objects in the virtual space are determined by a quartet of cameras and object recognition software; the data gathered by the cameras is used to flesh out 3D shells that conform to the objects' shapes. When real and virtual objects collide, only the virtual objects deform or move so as to prevent overlap between objects and shells. The researchers used the environment to simulate a space shuttle payload assembly task that NASA engineers interacted with; Lok says the test results concluded that the system is an improvement over fully virtual environments when it comes to assessing hardware designs and planning assembly tasks. He adds that hybrid environments not only have a greater sense of authenticity, but their ease of installation allows them to be employed earlier in the design process. Lok says the researchers are currently working to improve the virtual representation of real objects, and believes that hybrid systems will not be ready for general applications for two decades. The researchers presented their work at ACM's Symposium on Interactive 3D Graphics in April.
Click Here to View Full Article
http://www.trnmag.com/Stories/2003/073003/VR_accommodates_reality_073003.html
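The collision rule Lok describes, only the virtual object yields on contact, can be sketched with spheres standing in for the camera-derived shells. The positions and radii below are invented for illustration:

```python
import math

def resolve(virtual_pos, virtual_r, real_pos, real_r):
    """Push the *virtual* sphere out of the *real* one along the line
    between their centers; the real object, as in Lok's system, never
    moves.  Positions are (x, y, z) tuples."""
    d = [v - r for v, r in zip(virtual_pos, real_pos)]
    dist = math.sqrt(sum(c * c for c in d))
    overlap = virtual_r + real_r - dist
    if overlap <= 0 or dist == 0:
        return virtual_pos              # no contact, nothing to do
    scale = overlap / dist
    return tuple(v + c * scale for v, c in zip(virtual_pos, d))

# Virtual hand shell overlapping a real drape shell by 0.5 units
# gets pushed back along the x axis:
print(resolve((1.0, 0.0, 0.0), 1.0, (0.0, 0.0, 0.0), 0.5))
# (1.5, 0.0, 0.0)
```

Moving only the virtual body sidesteps the impossible alternative, since the system obviously cannot displace a physical drape, and it is what makes the hybrid illusion hold up.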
"Inventor Designs Sign Language Glove"
Associated Press ; (08/03/03); Hartman, Carl
- George Washington University researcher Jose Hernandez-Rebollar's AcceleGlove is a system in which a glove outfitted with sensors and a wearable computer can convert American Sign Language (ASL) gestures into spoken words or text as an aid to the hearing-disabled. Hernandez-Rebollar notes that his invention is more advanced than others because it is also capable of translating some of ASL's more complicated arm and body movements into words and simple phrases. The glove can produce signs that correspond to all 26 alphabetical characters, allowing any word to be spelled out, although the process is slow. The device can thus far generate less than 200 words that can be signed with one hand, and a limited number of simple sentences. Institute for Disabilities, Research, and Training director Corinne K. Vinopol thinks the AcceleGlove could be especially useful for deaf parents with hearing children as well as hearing parents whose children are hearing-disabled; of particular interest to Vinopol is Hernandez-Rebollar's work to enhance the AcceleGlove so that it can translate ASL into Spanish as well as English. The AcceleGlove's inventor believes a one-handed version could hit the market as early as 2004, while a more sophisticated two-handed model could debut the following year. Hernandez-Rebollar adds that the glove could be integrated with existing wireless gear and be used as a vibration- or text-based communications device for squad commanders to relay orders to concealed soldiers.
http://www.siliconvalley.com/mld/siliconvalley/6454646.htm
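At its simplest, a sensor glove's recognition step maps a vector of sensor readings to the closest stored sign template. The nearest-neighbor sketch below uses invented three-axis readings; the AcceleGlove's real feature set and matching method are not described in the article:

```python
import math

def classify(reading, templates):
    """Return the sign whose stored template vector is closest
    (Euclidean distance) to the live sensor reading."""
    return min(templates, key=lambda sign: math.dist(reading, templates[sign]))

# Hypothetical per-sign template vectors (normalized sensor values):
templates = {
    "hello": [0.9, 0.1, 0.0],
    "yes":   [0.0, 1.0, 0.2],
    "no":    [0.1, 0.0, 0.9],
}
print(classify([0.8, 0.2, 0.1], templates))  # hello
```

This also makes the vocabulary limit concrete: each new word the glove supports is another template, and dynamic gestures require matching a whole time series of such readings rather than a single vector.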
"Virtual Reality Conquers Sense of Taste"
New Scientist (07/30/03); Ananthaswamy, Anil
- Researchers at the University of Tsukuba in Japan have developed a virtual reality device that is able to simulate the taste of food and the way food feels in the mouth. Researchers already have been able to simulate vision, hearing, touch, and smell, but not the sense of taste, which uses chemical and auditory cues in feeling food in the mouth. The device makes use of a thin-film force sensor placed in the mouth to measure and record the force needed to bite through a piece of food, biological sensors made up of lipid and polymer membranes to record the chemical makeup of the food's taste, and a microphone that records the vibrations of the jawbone while chewing. Cloth and rubber cover the mechanical part of the simulator, which resists the bite in a manner that is similar to what occurs with real food. The device also uses a thin tube that shoots a mixture of flavorings onto the tongue to stimulate basic taste sensations, and a tiny speaker plays back the sound of chewing in the ear. Cheese, crackers, confectionary, and Japanese snacks have been among the foods simulated with the device, but the team still must find a way to use a vaporizer to deliver smells to the nose. Hiroo Iwata and colleagues at the university presented their research at the recent SIGGRAPH 03 computer graphics and interactivity conference in San Diego.
http://www.newscientist.com/news/news.jsp?id=ns99994006
"The End of Handicaps"
eSchool News (07/03) Vol. 6, No. 7, P. 40; Kurzweil, Ray
- In an address to the CSUN 18th Annual Conference on "Technology and Persons with Disabilities," futurist and National Medal of Technology recipient Ray Kurzweil presented his vision of the sweeping technological changes he expects to take place over the next few decades--in fact, he argued that some of these changes have already begun. Kurzweil estimates that the rate of progress doubles every decade--by that reckoning, 21st century progress will be roughly 1,000 times greater than 20th century progress. Kurzweil envisions ubiquitous computers with always-on Internet connections, systems that allow people to fully immerse themselves in virtual environments, and artificial intelligence embedded into Web sites by 2010. The futurist also projects that 3D molecular computing will be a reality by the time Moore's Law reaches its limits, while nanotechnology will emerge by the 2020s. Kurzweil predicts that the human brain will have been fully reverse-engineered by 2020, which will result in computers with enough power to equal human intelligence. He forecasts the emergence of systems that provide subtitles for deaf people around the world, as well as listening systems also geared toward hearing-impaired users, while blind people should be able to take advantage of pocket-sized reading devices in a few years. Kurzweil believes that people with spinal cord injuries will be able to resume fully functional lives by 2020, either through the development of exoskeletal robotic systems or a technique to mend severed nerve pathways, possibly by wirelessly transmitting nerve impulses to muscles. All of these developments are expected to reach maturity and culminate in enhanced human intelligence by 2029.
Click Here to View Full Article
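Kurzweil's "1,000 times greater" figure is straightforward compounding: one doubling of the rate of progress per decade, over the ten decades of a century, gives 2^10 = 1024, roughly a thousandfold:

```python
# One doubling of the rate of progress per decade,
# ten decades per century:
doublings_per_century = 10
print(2 ** doublings_per_century)  # 1024 -- Kurzweil's "roughly 1,000 times"
```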
From the ACM TechNews, July 30, 2003.
"Helping Machines Think Different"
Wired News (07/29/03); Shachtman, Noah
- The Defense Advanced Research Projects Agency (DARPA) says it has embarked on a series of projects in recent months that on the surface may seem isolated, but are in fact part of an overarching push to make computers capable of thinking for themselves. LifeLog, the most well-known of these projects, seeks to track all aspects of a person's life and feed this information into a database; a program is supposed to extract narrative threads from this data to deduce that person's relationships, experiences, and traits. "Our ultimate goal is to build a new generation of computer systems that are substantially more robust, secure, helpful, long-lasting and adaptive to their users and tasks," explains American Association for Artificial Intelligence President Ron Brachman, who was recently appointed head of DARPA's Information Processing Technology Office (IPTO). "These systems will need to reason, learn and respond intelligently to things they've never encountered before."
Brachman insists that LifeLog is not a profiling or terrorist-tracking tool, but an episodic-memory system forming the basis of a sophisticated electronic assistant that intuits the habits and preferences of its boss. IPTO's $29 million Perceptive Assistant that Learns (PAL) program is an effort to build self-improving software that references episodic memory to automate the scheduling of meetings and other tasks. Brachman observed in a recent presentation that the growing complexity of computer systems increases their fragility and vulnerability to attack. So that a computer can learn and adapt to such factors, it must build a catalog of existence the same way people do.
http://www.wired.com/news/privacy/0,1848,59787,00.html
"The Future of Human Knowledge: The Semantic Web"
TechNewsWorld (07/28/03); Koprowski, Gene J.
- An international team of scientists is defining and devising standards, protocols, and technologies that will form the foundation of the Semantic Web, a more context-aware version of the Internet envisioned as an online source for the collective scientific, business, and artistic knowledge of the human race. The Semantic Web, which optimistic forecasters believe is just a few years away, will be able to conduct searches based on ordinary language rather than keywords. The project is being spearheaded by the World Wide Web Consortium; participants include researchers at computer companies and major academic institutions such as Kyoto University, Stanford University, and MIT, while the Pentagon's Defense Advanced Research Projects Agency (DARPA) is a Semantic Web underwriter. Maturing technologies thought to be the strongest candidates for implementing the Semantic Web include DARPA agent markup language, resource description framework (RDF), and Web ontology language
(OWL). Many businesses are too impatient to wait for the Semantic Web to emerge and are investigating other stop-gap solutions, such as the automation of routine e-commerce tasks by software agents. Meanwhile, niche search engines that seek out data based on user preferences are being fleshed out by Penn State researchers and others. Extreme Logic strategic consultant Steve Woods notes that analysts expect the Semantic Web to supplant the current Web within two years, although his firm considers 2010 to be a more realistic projection for such a development.
http://www.technewsworld.com/perl/story/31199.html
"And Now, Here Comes 'Spray-On Electronics'"
International Herald Tribune ; (07/25/03); Schenker, Jennifer L.
- IBM, Philips, and Bell Labs are developing organic transistors as a basis for plastic electronics, but Plastic Logic of Cambridge, England, claims to have a competitive advantage with a patented technique for printing "spray-on" polymer-based circuits. Although Plastic Logic chips do not boast the speed of crystalline silicon chips, Plastic Logic co-founder Richard Friend says they are sufficiently cheap and fast enough to supplant silicon in updateable active matrix displays incorporated into electronic signs and electronic newspapers. Plastic electronics promise to eliminate the need for manufacturing chips in sterile "clean rooms" and the associated costs, and make ubiquitous electronics an achievable goal. Plastic electronics could be the keystone of a range of new products and applications, including flexible e-paper and e-textiles, disposable electronics, advanced biosensors, electronic labels, intelligent packaging, and roll-up displays for mobile phones.
One of Plastic Logic's partners, Cambridge Display Technologies, is developing organic light-emitting displays as a complementary technology for plastic electronics. "The flexibility that plastic electronics offers allows us to enter new markets and enter new product lines," declares Jim Welch of Gyricon. Plastic electronics' initial applications are expected to be in retail stores, probably as price displays that can be remotely updated via computer. More advanced products such as changeable e-books will not emerge until wireless technologies proliferate and the cost of display materials significantly declines.
http://www.wired.com/news/privacy/0,1848,59799,00.html
From the ACM TechNews, July 25, 2003.
"Touch Technology: Internet May Let Us 'Feel' the Stars"
Christian Science Monitor (07/24/03) P. 1; Valigra, Lori
- Researchers at NASA's Jet Propulsion Laboratory and the State University of New York at Buffalo are developing and experimenting with network and sensor technologies designed to allow people to experience virtual tactile sensations. Adrian Hooke of the Jet Propulsion Laboratory is spearheading an initiative to build networking protocols that could form the basis of an outer-space Internet linking satellites, robots, and other types of interplanetary equipment. The effort is part of NASA's Deep Space Network, a project involving a trio of Earth-based antenna arrays designed to communicate with spacecraft and carry out scientific probes. Hooke forecasts that the next 15 years will witness the emergence of interplanetary telepresence, and says that one day, "You can have data sent back from a robotic sensor in space and recreate the information in a virtual reality-like environment on Earth, so you could feel like you were roaming around on Mars." A key element of such a
device is haptics technology, and University of Buffalo researchers led by Virtual Reality Lab director Thenkurussi Kesavadas have made a significant breakthrough with a sensor glove that allows its wearer to feel the sensations experienced by another person through an Internet connection. So far glove users can only feel hard or soft objects and the contour of specific shapes, though Kesavadas expects the next-generation Internet will help refine the glove's haptics ability so that users can feel fabrics or skin, for example. He explains that interacting with the glove is similar to showing a child how to write by guiding his or her hand, adding that "With our technology, you can do and feel, which leads to learning." Kesavadas predicts that simple Internet touch applications will find their way into the game industry within a few years.
http://www.csmonitor.com/2003/0724/p14s01-stin.html
"Socially Intelligent Software: Agents Go Mainstream"
TechNewsWorld (07/23/03); Koprowski, Gene J.
- Companies that wish to make customer service more efficient and effective are using software agents that interact with clients in order to identify and solve their problems faster, but the technology's applications are not restricted to consumer interfaces--the U.S. military is also using virtual agents to enhance combat tactics. Furthermore, researchers are trying to embed social intelligence into the agents, providing users with a more dynamic computer interface that can intuit their emotional states and take appropriate measures. LiveWire Logic's RealDialog Agents, a product that combines computational linguistics and artificial intelligence, is programmed to respond to customer inquiries accurately, consistently, and immediately via interactive text-based conversations that obviate the need for human intervention. LiveWire reports that the software can defuse the stress on all customer support touch points and reduce customer support and call center costs.
RealDialog's utilization of drag-and-drop and automated-authoring technology makes design and management easy, even for nonprogrammers. Meanwhile, Vanderbilt University and the University of Southern California recently won a major contract with the Office of Naval Research to supply computer software that can assist battlefield tactics by lowering risk, raising the odds for mission success, and supporting the objectives of mission commanders, according to Robert Neches of USC's Information Sciences Institute. The Vanderbilt-USC technology, Autonomous Negotiating Teamware, helps coordinate combat air squadrons through the communication of individual software modules that exchange information and make balanced decisions. University of Southampton researchers have devised a software agent with learning algorithms so it can adjust to user requirements and manage users' schedules like a virtual butler.
http://www.technewsworld.com/perl/story/31172.html
"MIT's Tablet Tech Gets a Look-See From Microsoft"
Mass High Tech (07/21/03); Miller, Jeff
- MIT researchers are exploring ways to radically change the computer interface. The person who integrated typewriter functions with the computer made one of the worst mistakes in computer engineering, according to MIT computer science professor Randall Davis. His graduate students are working on a number of innovations that will allow users to sketch and audibly describe concepts for the computer. Davis says the inspiration for this work is a short Disney film he saw as a boy, where the animations came to life after being drawn on paper. Microsoft is interested in the work of Ph.D. candidate Christine Alvarado, whose sketch application lets engineers describe basic objects and concepts, such as wheels, axles, slopes, and springs, by drawing them. Alvarado's work is unique because the application recognizes drawn objects in the context of others, unlike other writing applications such as Palm's Graffiti, which requires users to draw in specific ways and cannot evaluate
characters in context. Ph.D. candidate Tracy Hammond is working on a similar tool, but meant for Unified Modeling Language (UML) programmers. Users can create their own "shape vocabulary" defining pieces of information that is then turned into code by IBM's Rational Rose system. Davis wants to combine voice and gesture with the sketch applications, allowing someone to draw a recognized object and then manipulate it with vocal commands. Eventually, the work will contribute to a computer that is not just a desktop system, but is the desktop itself that users write on.
From the ACM TechNews, July 16, 2003.
"Computer Simulations: Modeling the Future"
TechNewsWorld (07/15/03); Koprowski, Gene J.
- State-of-the-art computer simulation technologies are being developed and employed for commercial, medical, and military projects. The University of Southern California's Information Sciences Institute, in conjunction with Options Technology, incorporated artificial intelligence, cluster computing, and high-speed networks to perform a simulation for the Joint Forces Command involving 1 million computer-generated vehicles capable of autonomous movement and environmental response, whereas previous simulations could only support about 100,000 vehicles. "We are within sight of being able to create a large-scale, high-resolution battlefield environment detailed enough to let us experiment and see how a given system might perform," declared USC's Robert Lucas. This summer will witness a real-world military exercise at the same scale to give analysts the ability to track vehicle and troop movements by controlling sensors in the field. Companies such as Insight and Clockwork
Solutions are building modeling and simulation technology that can be applied to supply chain management: Insight's solution simulates the impact of a natural disaster or terrorist attack on an individual organization's supply chain, while Clockwork Solutions is working on software that can model industrial systems to anticipate the effects of wear and tear on factory operations over time. Meanwhile, researchers at the Case Western Reserve University School of Engineering have devised the Model Integrated Metabolic system as a tool that can simulate how the heart, liver, and brain react to physical exertion. Bayer, Organon, and other pharmaceutical companies are employing Entelos' PhysioLabs software, which can model different disease states to see how afflicted systems will respond to drugs.
http://www.technewsworld.com/perl/story/31116.html
From the ACM TechNews, July 14, 2003.
"New Software Allows You to Log on By Laughing"
New Scientist (07/09/03); Nowak, Rachel
- Computer scientists at Monash University in Melbourne, Australia, have developed a program that will automatically log someone onto the nearest computer at the sound of their voice or laughter. In an effort to make it easier for their staff to log onto networked computers, the researchers have created SoundHunters, software that makes use of sound recognition and intelligent agent technology. With the aid of microphones on computers, SoundHunters is designed to recognize a voice, and then use strategically placed intelligent agents to determine the location of the individual in relation to the nearest computer. Intelligent agents can listen to the footsteps of a person moving throughout an office. "Once the agents have worked out the direction the person is going, they would even be able to stay one or two steps ahead," says researcher Arkady Zaslavsky. Although the researchers have more work to do to improve SoundHunters' ability to distinguish voices, marrying sound
recognition and intelligent agent technology has become a reality.
From NY Times, July 11, 2003
Data in Conflict: Why Economists Tend to Weep
By DANIEL ALTMAN
- Is the economy pulling itself out of a slump, or is it sinking deeper? The answer could be either, based on the data from government agencies these days. Because they measure the same things — employment, incomes and prices — in different ways, it can be hard to tell what is really happening.
From DSS News, July 6, 2003
Ask Dan: How can simulation be used for decision support?
- Questions about using simulation for building a DSS are reasonably
frequent in my Ask Dan! email. So this column has been in the works for
some time, but my summer research project on advanced decision and
planning support motivated me to move this column to the "front burner".
Coincidentally, I received an email on Friday, July 4, 2003 from John
Walker ( http://jbwalker.com ). John wrote "I appreciate your
newsletter. Keep 'em coming!" Thanks for the positive feedback. Also,
John thought I might be interested in a June 16, 2003 interview with
Eric Bonabeau at CIOInsight.com. Eric is the founder of Icosystem Corp.,
Cambridge, MA ( http://Icosystem.com ). Icosystem develops agent-based
models and simulations. Agent-based or multi-agent simulations are the
"latest and greatest" technology or approach in the simulation toolkit.
Before I discuss agent-based simulations, let's review the basics of
simulation. According to a number of sources, simulation is the most
frequently used quantitative approach for solving business problems and
supporting business decision making. That generalization may be true,
but simulation is still the province of management science
"specialists". Simulation has not been made "manager friendly".
- Simulation is a broad term that refers to an approach for imitating the
behavior of an actual or anticipated human or physical system. The terms
simulation and model, especially quantitative and behavioral models, are
closely linked. From my perspective, a model shows the relationships and
attributes of interest in the system under study. A quantitative or
behavioral model is by design a simplified view of some of the objects
in a system. A model used in a simulation can capture much detail about
a specific system, but how complex the model is or should be depends
upon the purpose of the simulation that will be "run" using the model.
Both in a simulation study and when simulation provides the
functionality for a DSS, multiple tests, experiments or "runs" of the
simulation are conducted; the results of each run are recorded; and the
aggregate results are then analyzed to try to answer specific questions.
In a simulation, the decision variables in the model are the inputs that
are manipulated in the tests.
- In my DSS book (Power, 2002), Chapter 10 on Building Model-Driven
Decision Support Systems notes "In a DSS context, simulation generally
refers to a technique for conducting experiments with a computer-based
model. One method of simulating a system involves identifying the
various states of a system and then modifying those states by executing
specific events. A wide variety of problems can be evaluated using
simulation including inventory control and stock-out, manpower planning
and assignment, queuing and congestion, reliability and replacement
policy, and sequencing and scheduling (p. 172)."
- There are several types of simulation and a variety of terms are used to
identify them. When you read about simulation you will find references
to Monte Carlo simulation, traditional mathematical simulation,
activity-scanning simulation, event-driven simulation, process-based
model simulation, real-time simulation, data-driven simulation,
agent-based and multi-agent simulation, time dependent simulation,
and visual simulation.
- In a Monte Carlo or probabilistic simulation one or more of the
independent variables is specified as a probability distribution of
values. A probabilistic simulation helps take risk and uncertainty in a
system into account in the results. Time dependent or discrete
simulation refers to a situation where it is important to know exactly
when an event occurs. For example, in waiting line or queuing problems,
it is important to know the precise time of arrival to determine if a
customer will have to wait or not. According to Evans and Olson (2002)
and others, activity-scanning simulation models involve describing
activities that occur during a fixed interval of time and then
simulating for multiple future periods the consequences of the
activities while process-driven simulation focuses on modeling a logical
sequence of events rather than activities. An event-driven simulation
also identifies "events" that occur in a system, but the focus is on a
time ordering of the events rather than a causal or logical ordering.
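A Monte Carlo simulation of the kind described above can be sketched in a few lines of Python. The product, its price, costs, and demand distribution below are all invented for illustration; the point is only to show an independent variable specified as a probability distribution and many runs aggregated to answer a question.

```python
import random

def profit_one_run(price=8.0, unit_cost=5.0, fixed_cost=1500.0):
    """One trial: unit demand is the uncertain independent variable,
    drawn from a hypothetical normal distribution (mean 1000, sd 200)."""
    demand = max(0.0, random.gauss(1000, 200))
    return demand * (price - unit_cost) - fixed_cost

def monte_carlo(runs=10_000):
    """Repeat the trial many times; the aggregate results -- not any
    single run -- are what answer the decision question."""
    results = [profit_one_run() for _ in range(runs)]
    mean_profit = sum(results) / runs
    prob_loss = sum(1 for r in results if r < 0) / runs
    return mean_profit, prob_loss

if __name__ == "__main__":
    random.seed(42)
    mean_profit, prob_loss = monte_carlo()
    print(f"mean profit: {mean_profit:,.0f}  P(loss): {prob_loss:.1%}")
```

With these invented figures a loss occurs only when demand falls below 500 units, about 2.5 standard deviations below the mean, so the simulation expresses the risk in the system as a small but nonzero probability rather than a single-point forecast.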
- Simulation can assist in either a static or a dynamic analysis of a
system. A dynamic analysis is enhanced with software that shows the
time sequenced operation of the system that is being predicted or
analyzed. Simulation is a descriptive tool that can be used for both
prediction and exploration of the behavior of a specific system. A
complex simulation can help a decision maker plan activities, anticipate
the effects of specific resource allocations and assess the consequences
of actions and events. In a business simulation course, text materials
usually focus on static, Monte Carlo simulations and dynamic, system
simulations (cf., Evans and Olson, 2002).
- In many situations simulation specialists build a simulation and then
conduct the special study and report their results to management. Evans
and Olson (2002) discuss examples of how simulation has been used to
support business and engineering decision making. They report a number
of special decision support studies including one that evaluated the
number of hotel reservations to accept to effectively utilize capacity
to create an overbooking policy (p. 161-163), a call center staffing
capacity analysis (p. 163-165), a study comparing new incinerating
system options for a municipal garbage recycling center (p. 176-179), a
study evaluating government policy options, and various studies for
designing facilities. Examples of model-driven DSS built with a
simulation as the dominant component include: a Monte Carlo simulation
to manage foreign-exchange risks; a spreadsheet-based DSS for assessing
the risk of commercial loans (cf., Decisioneering Staff, 2001), a DSS
for developing a weekly production schedule for hundreds of products at
multiple plants; a program for estimating returns for fixed-income
securities; and a simulation program for setting bids for competitive
lease sales (cf., Evans and Olson, p. 190).
- Sometimes in an effort to provide decision support an actual small-scale
model or ecosystem is built and then it is "used in a simulated
environment". For example, a physical model of an airplane may be built
so that it can be tested in a wind tunnel to examine its design
properties. Today a computer simulation might be used in place of a
"physical model" for much of the design testing. The case "Product
development decision support at Lockheed Martin" by Silicon Graphics
Staff posted at DSSResources.COM October 16, 2002 is an example of this
use of simulation.
- Agent-based or multi-agent simulation does not replace any of the
traditional simulation techniques. But in the last 5 years, agent-based
visual simulations have become an alternative approach for analyzing
some business systems. According to Bonabeau, "People have been thinking
in terms of agent-based modeling for many years but just didn't have the
computing power to actually make it useful until recently. With
agent-based modeling, you describe a system from the bottom up, from the
point of view of its constituent units, as opposed to a top-down
description, where you look at properties at the aggregate level without
worrying about the system's constituent elements."
- Multi-agent simulations can be used to simulate some natural and
man-made systems that traditional simulation techniques cannot.
Bonabeau asserts agent-based modeling works best in situations where a
system is "comprised of many constituent units that interact and where
the behavior of the units can be described in simple terms. So it's a
situation where the complexity of the whole system emerges out of
relatively simple behavior at the lowest level." Examples of such
systems include shoppers in a grocery store, passengers, visitors and
employees at an airport or production workers and supervisors at a
factory. What is the objective of an agent-based simulation? According
to Bonabeau, "the objective is to find a robust solution" -- one that
will work fine no matter what happens in the "real world".
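The bottom-up description Bonabeau talks about can be sketched with one simple rule per agent. The grocery-store figures below are hypothetical; the aggregate behavior (congestion at the checkout lanes) is not programmed anywhere, it emerges from many agents each following a simple local rule.

```python
import random

class Shopper:
    """An agent described in simple, local terms: it carries some items
    and always joins the shortest checkout line it can see."""
    def __init__(self, items):
        self.items = items

    def choose_lane(self, lanes):
        return min(lanes, key=len)

def simulate(steps=300, arrival_prob=0.6, n_lanes=3, seed=1):
    """Run the store for a number of time steps and report the worst
    queue length that emerged; all rates are invented for illustration."""
    random.seed(seed)
    lanes = [[] for _ in range(n_lanes)]
    max_queue = 0
    for _ in range(steps):
        # Arrival: with some probability a shopper enters and applies
        # its rule to pick a lane.
        if random.random() < arrival_prob:
            s = Shopper(items=random.randint(1, 10))
            s.choose_lane(lanes).append(s)
        # Service: each lane checks out one item from its head shopper.
        for lane in lanes:
            if lane:
                lane[0].items -= 1
                if lane[0].items == 0:
                    lane.pop(0)
        max_queue = max(max_queue, max(len(lane) for lane in lanes))
    return max_queue
```

Re-running `simulate` with different lane counts or arrival rates shows how system-level congestion responds to changes in the units, which is exactly the bottom-up analysis a top-down aggregate model cannot provide.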
- A simulation study can answer questions like how many teller stations
will provide 90% confidence that no one will need to wait in line for
more than 5 minutes or how likely is it that a specific project will be
completed on time and under budget? With a visual simulation decision
makers or analysts can observe an airplane in a wind tunnel, a proposed
factory in operation or customers entering a new bank or a construction
project as "it will occur".
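The teller-station question above can be answered with a small simulation study. The arrival and service figures below are invented for illustration; the outer loop estimates, for a given number of tellers, the fraction of simulated days in which no customer waits more than five minutes.

```python
import heapq
import random

def run_day(n_tellers, n_customers=200, mean_gap=1.0, mean_service=3.0):
    """Simulate one day of arrivals; return the longest wait (minutes)
    any customer experienced. Exponential gaps and service times are
    assumed purely for illustration."""
    free_at = [0.0] * n_tellers     # time at which each teller is next free
    heapq.heapify(free_at)
    t = 0.0
    worst_wait = 0.0
    for _ in range(n_customers):
        t += random.expovariate(1.0 / mean_gap)   # next customer arrives
        teller_free = heapq.heappop(free_at)      # earliest-free teller
        start = max(t, teller_free)               # service begins
        worst_wait = max(worst_wait, start - t)
        heapq.heappush(free_at, start + random.expovariate(1.0 / mean_service))
    return worst_wait

def confidence_no_long_wait(n_tellers, runs=500, limit=5.0):
    """Fraction of simulated days in which no customer waited more
    than `limit` minutes."""
    ok = sum(1 for _ in range(runs) if run_day(n_tellers) <= limit)
    return ok / runs

if __name__ == "__main__":
    random.seed(7)
    for tellers in (3, 4, 5, 6):
        print(tellers, confidence_no_long_wait(tellers))
```

The decision maker then reads off the smallest number of tellers whose confidence figure clears the 90% target, which is the aggregate-of-many-runs logic the column describes.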
- Based on my observations over the past 25 years, simulation has been
used much more for one-time, special decision support studies than it
has been used as the model-component in building a model-driven DSS.
This is changing, and can change further, with the increased ease of
creating visual simulations. Visual simulation means managers can see a
graphic display
of simulation activities, events and results. Will Wright's games "The
Sims", "SimCoaster" and "SimCity" (cf., http://thesims.ea.com/ ) are the
precursors for advanced, agent-based, model-driven DSS. I am continuing
my research on boids, sims, swarms, ants and other such agent
technologies. So perhaps in another Ask Dan! I can discuss in more
detail complex, realistic visual simulations based upon behavioral
models. My sense is that current technologies can support development of
complex, "faster than real-time", dynamic, agent-based, model-driven DSS
for a wide variety of specific decision situations.
References
- Decisioneering Staff, "SunTrust 'Banks' on Crystal Ball for assessing
the risk of commercial loans", Decisioneering, Inc., November 1998,
posted at DSSResources.COM March 16, 2001.
- Eppen, G.D., F.J. Gould, and C.P. Schmidt. Introductory Management
Science (Fourth Edition), Englewood Cliffs, NJ: Prentice Hall, 1993.
- Evans, J. R. and D. L. Olson, Introduction to Simulation and Risk
Analysis (2nd Edition), Upper Saddle River, NJ: Prentice Hall, 2002.
- Rothfeder, J. "Expert Voices: Icosystem's Eric Bonabeau,"
CIOInsight.com, June 16, 2003,
http://www.cioinsight.com/article2/0,3959,1124316,00.asp.
- Silicon Graphics Staff, "Product development decision support at
Lockheed Martin", sgi, Inc., 2002, posted at DSSResources.COM October
16, 2002.
From the ACM TechNews, June 16, 2003.
"Your Blink Is My Command"
ABCNews.com (06/13/03)
- Ted Selker of MIT and Roel Vertegaal of Ontario's Queen's University are focused on the development of context-aware computers that can pick up on "implicit communication" relayed through eye or body movements to carry out commands. Much of the technology the two colleagues are working on incorporates eye-recognition systems. For example, a toy dog has been modified to bark when it receives infrared signals as a person wearing special eyeglasses stares at it; it is also programmed to stop barking if the person is blinking a lot, or is not looking in its direction. The dog can tell when two people wearing the glasses are looking at each other because lights on both pairs blink when eye contact is made. Selker and Vertegaal are also developing Attentive TV, in which the eyes of a person viewing a program on a computer are monitored with a camera; Vertegaal explains that the program starts or stops depending on where the eyes are focused. Another technology being worked on
is "Eyepliances" that allow users to activate or deactivate appliances by staring at them and issuing voice commands. Meanwhile, users can more efficiently deal with interruptive phone calls through Eyeproxy, a device with quivering eyeballs that takes messages or patches calls through depending on whether the user decides to look at them. The technology Selker and Vertegaal are developing is five to 15 years away from commercialization.
"Poker Playing Computer Will Take on the Best"
Edmonton Journal (06/12/03); Cormier, Ryan
- A team of artificial intelligence researchers at the University of Alberta has spent the last 10 years developing a computer program that can play poker, and they believe the program could conceivably outclass all human players within a year. The pseudo-optimal poker program (PsOpti) is unique in that it is capable of bluffing, and working with imperfect information. "If you do not bluff, you're predictable," notes Jonathan Schaeffer of the university's Games Research Group. "If you're predictable, you can be exploited." PsOpti is based on the game theory formula developed by Nobel Prize-winning mathematician John Nash. Ph.D. student and project researcher Darse Billings says the formula attempts to outline an outcome for the game that is fair to everyone. Schaeffer says that most original game research was based on games with perfect information, and adds that poker and other games with imperfect information have much more critical real-world applications. Reasoning
with imperfect information, as a poker player does, could be useful in areas ranging from international negotiations to purchasing an automobile.
From the New York Times, June 10, 2003.
A Passion to Build a Better Robot, One With Social Skills and a Smile
June 10, 2003
By CLAUDIA DREIFUS
- CAMBRIDGE, Mass. - Dr. Cynthia L. Breazeal of the
Massachusetts Institute of Technology is famous for her
robots, not just because they are programmed to
perform specific tasks, but because they seem to have
emotional as well as physical reactions to the world around
them. They are "embodied," she says, even "sociable" robots
- experimental machines that act like living creatures.
- As part of its design triennial, the Cooper-Hewitt National
Design Museum in New York is exhibiting a "cyberfloral
installation," by Dr. Breazeal, which features robotic
flowers that sway when a human hand is near and glow in
beautiful bright colors.
- "The installation," said Dr. Breazeal, 35, "communicates my
future vision of robot design that is intellectually
intriguing and remains true to its technological heritage,
but is able to touch us emotionally in the quality of
interaction and their responsiveness to us - more like a
dance, rather than pushing buttons."
- Dr. Breazeal (pronounced bruh-ZILL) wrote about her
adventures as a modern-day Mary Shelley in her book
"Designing Sociable Robots," released this year by M.I.T.
Press. She was also a consultant on the Steven Spielberg
movie "A.I.: Artificial
Intelligence."
Q. What is the root of your passion for robots?
- A. For me,
as for many of us who do robotics, I think it is science
fiction. My most memorable science fiction experience was
"Star Wars" and seeing R2D2
and C3PO. I fell in love with those robots.
Q. R2D2 and C3PO were good robots, friendly. But so many of
the robots of science fiction are either hostile, or at
least misunderstood, like Frankenstein's monster and HAL of
"2001: A Space Odyssey." Why
have fictional robots been so menacing?
- A. We have a lot of suspicion of robots in the West. But if
you look cross-culturally, that isn't true. In Japan, in
their science fiction, robots are seen as good. They have
Astro Boy, this character they've fallen in love with and
he's fundamentally good, always there to help people.
- In a lot of Western science fiction, you need some form of
conflict, whether it's aliens or robots. I think in Western
culture, being more suspicious of science, and hubris,
you'll see a lot of fear of creating something that goes
out of control.
- Also a lot of Western sci-fi books and movies are about the
basic notion of taking responsibility for what you create.
If you're talking about creating any new technology, this
is always an issue.
Q. How did you get into robot building?
- A. I was raised on
technology. I grew up in Livermore, Calif., a town of
physicists and cowboys. My parents worked at the government
laboratories there. So technology was very normal for me.
- Before college, I wanted to be a doctor or an engineer. At
college, U.C. Santa Barbara, I considered NASA and becoming
an astronaut. At college, they had just started up a center
on robotics and it was this cool new thing. I remember
sitting with one of my friends who was talking about
building planetary rovers for NASA, and that seemed so
wonderful.
- So when it came to applying to graduate school, I was
naturally drawn to Prof. Rod Brooks's robotics lab at
M.I.T., where they were doing pioneering work developing
micro-rovers, those robotic vehicles that might do
experiments on other planets for NASA.
Q. The first robots you worked on were made for use in
space?
- A. Yes. A lot of my early work was actually a precursor to
these micro-rovers that are in use today at NASA's Jet
Propulsion Laboratory. Rod Brooks, my adviser, had been
developing these rough-terrain robots that were insectlike
in the way they looked and how their computerized brains
functioned.
- Rod went on a sabbatical and when he came back, he said,
"I've got one big project left in me and we're going to do
a humanoid robot now." The common wisdom was first you do
robot insects, then reptiles, dogs and eventually humans.
- The thing that really intrigued me about a humanoid project
was the chance to work on the robots' ability to interact
with people. This would no longer be the robotics that
others had done: robots in space, in minefields, as
substitutes for humans in dangerous environments. This was
about bringing robots into human environments so that they
could help people in ways that hadn't been possible before.
- I was curious to see if benevolent interactions with people
could accelerate and enrich the learning process of
machines. In short, I wanted to see if I could build a
robot that could learn from people and actually could learn
how to be more socially sophisticated. It was that thinking
that led to Kismet; my work on it was my doctoral thesis.
Q. Does your robot Kismet look like a human?
- A. No. It is
more a robotic cartoon. It doesn't have arms and legs; its
mechanical parts are uncovered. In fact, it is mostly a
face. It has eyebrows, surgical tubing for lips so that it
can smile and frown, pink ears used for expression and
showing arousal and things like that. The engineering on
Kismet was inspired by the social development of human
infants.
- In Japan in the 1980's, they were already starting on
humanoid robots. But Kismet was the first developed to
specialize in face-to-face social interactions with humans.
- Kismet was started in 1997, and I intentionally created it
to provoke the kind of interactions a human adult and a
baby might have. My insight for Kismet was that human
babies learn because adults treat them as social creatures
who can learn; also babies are raised in a friendly
environment with people.
- I hoped that if I built an expressive robot that responded
to people, they might treat it in a similar way to babies
and the robot would learn from that. So, if you spoke to
Kismet in a praising tone, it would smile and perk up. If
you spoke to it in a scolding tone, it was designed to
frown. There were models of emotion conveyed through its
face.
Q. Did your robot Kismet ever learn much from people?
- A.
From an engineering standpoint, Kismet got more
sophisticated. As we continued to add more abilities to the
robot, it could interact with people in richer ways. And
so, we learned a lot about how you could design a robot
that communicated and responded to nonlinguistic cues; we
learned how critical the elements beyond language are in an
interaction - body language, gaze, physical responses,
facial expressions.
- But I think we learned mostly about people from Kismet.
Until Kismet, and another robot built here at M.I.T., Cog, most
robotics had little to do with people. Kismet's big triumph
was that he was able to communicate a kind of emotion and
sociability that humans did indeed respond to, in kind. The
robot and the humans were in a kind of partnership for
learning.
- Our newest robot, Leonardo, is even more expressive. It has
arms, a torso, legs and skin, which is very important in
terms of raising the bar on robotic sophistication.
Leonardo's facial expressions are characteristic of human
facial expressions. The gestures it can make are
characteristic of human gestures.
Q. What is the purpose of building sophisticated robots?
Some might say that you're just building very expensive
Furbys.
- A. We want to see if we can build robots that are more than
tools. I'd like to push robotics to the point where we are
creating machines that cooperate with people as partners.
For instance, right now, I'm working with NASA on
developing Robonaut, which is envisioned as an astronaut's
assistant.
Q. Why did you feel you needed to give your newest robot,
Leonardo, limbs and a skinlike covering?
- A. We wanted to make a robot with more of a body to push
our experiments to the next level. Leonardo has the ability
to shrug its shoulders and sway its hips. It has 32 motors
in the face, so it can do near-human facial expression and
near-human lip synchronization. It's just an incredibly
rich platform for social interaction, and that's what it's
designed for. It can manipulate objects, which is very
different from the armless Kismet.
Q. Do you miss your robots when you're not with them?
- A. I
miss Kismet - I do! What people might not understand is
that when I talk about robots, it's not just a physical
robot in the lab, it's the vision of what I see them
becoming.
- It's almost embarrassing for me to talk about Kismet,
because people think it's so odd that I could have this
attachment to this robot. At scientific conferences, I find
it hard to quantify what you have when you interact with
Kismet and what is so special about it. But the essence of
that is what I am now trying to distill into Leonardo.
Kismet has been retired to the M.I.T. Museum. I would
rather have him stay up at the Media Lab, with me. But he's
done his job. Kismet isn't gone; it's just now taking the
next step in its own evolution through Leonardo.
http://www.nytimes.com/2003/06/10/science/10CONV.html?ex=1056273406&ei=1&en=56817643701b6679
From ACM News, June 9, 2003
"Artful Displays Track Data"
Technology Research News (06/11/03); Sachdev, Chhavi
- Georgia Institute of Technology researchers have developed an aesthetically pleasing data display designed to minimize distraction. The InfoCanvas system displays data as moveable, abstract components within an electronic painting of a desert, a beach, a mountain camp, an aquarium, or a window view. The data elements change as the information changes, while the software is designed to run on an always-on Internet connection. "We're exploring ways of helping people stay aware of secondary information in a peripheral manner, one that does not distract, interrupt, or annoy them," explains Georgia Tech's John Stasko, who uses InfoCanvas on a dedicated screen in his office. Stasko's display consists of a beach scene where a moving sailboat keeps time, clouds in the sky represent weather conditions where his parents live, and a seagull's position symbolizes the Dow Jones performance. An email from Stasko's wife is represented by the appearance of a towel on a beach chair, and moving the mouse over the picture causes text balloons to pop up; important images or news headlines can also appear in the picture as text on billboards or signs towed by a plane. The abstract elements are customizable, so users can keep track of sensitive data without worrying that everyone who enters the office will also be privy to the information. Georgia Tech researcher Todd Miller says the InfoCanvas prototypes were designed with the input of potential users, and adds that the researchers are building pictorial customization into the system through interactive software.
"Computers That Speak Your Language"
Technology Review (06/03) Vol. 106, No. 5, P. 32; Roush, Wade
- Firms such as Nuance Communications and SpeechWorks are making a splash with interactive voice response software that allows automated call centers to more smoothly interact with customers, but this is only the first step in the rollout of language-processing systems. Projects are underway at IBM, the Palo Alto Research Center (PARC), and elsewhere to develop computers that can understand natural speech; International Data's Steve McClure speculates that, "Whereas the GUI [graphical user interface] was the interface for the 1990s, the NUI, or 'natural' user interface, will be the interface for this decade." A truly interactive language processing system must be able to precisely convert human speech into text the computer can read, deduce meaning by studying vocabulary and sentence structure, and supply human-sounding responses that make sense. Breakthroughs in the area of language understanding stem from awareness that people value machines more for their helpfulness and efficiency than for their conversational abilities, and that the best language-processing model combines grammatical structure analysis with statistical analysis. However, though this model has yielded very helpful interactive voice response systems for United Airlines, the U.S. Post Office, and others, it does not represent true language understanding. PARC research fellow Ron Kaplan believes that a natural-language interface would be more effective if it were stripped of the need for system customization, but the chief barriers to this achievement are the smallness of language sample databases and statistical algorithms that eliminate ambiguity, which can rob a sentence of its true meaning. His solution is the development of the Xerox Linguistic Environment, grammar-driven software designed to retain ambiguity. Meanwhile, IBM is trying to enhance the management of unindexed, unstructured data on computer networks via natural-language processing software called the Unstructured Information Management Architecture.
http://www.technologyreview.com/articles/roush0603.asp
From DSS News, June 8, 2003
Ask Dan: Is tax preparation software an example of a DSS?
- "Is X a DSS?" is a common question in the Ask Dan! email. Let's try to
figure out the answer to this question together.
- First, what is the purpose of the software and system? If the purpose of
the specific software package is only to help a user fill in fields and
print out a tax return, then it is not a DSS. If, however, the purpose
is to help a user minimize the taxes paid and make intelligent decisions
about what deductions to claim, then it is a DSS. If the software also
helps with tax planning and "what if?" analysis, that's also an
indication it's a DSS. The Ask Dan! column of March 30, 2003 identified
seven characteristics of a Decision Support System. The tax preparation
software I'm most familiar with, TurboTax Deluxe, meets all seven. For
example, it's intended for interactive use, it's intended for repeated
use and it's intended to improve decision effectiveness.
- Here in the United States "Tax Day" is April 15 so many U.S. readers
have probably recently used a tax preparation software package. Tax
preparation software is used by professional accountants and by
individual taxpayers. My wife Carol uses TurboTax Deluxe from Intuit
to prepare our taxes. The program improves her efficiency by performing
calculations and by creating "the forms" that need to be filed, but it
does much more. It guides the user step-by-step through the process of
preparing the return using an "interview technology", it reduces tax
preparation time by importing a person's tax information from the prior
year and it uses that information to prompt for current information, it
has a tax law advisor and it has tax strategy tools. Intuit sells
TurboTax products and provides a web-based DSS called TurboTax for the
Web. In 2002 approximately 15 million returns were filed with TurboTax.
You can find out more at http://www.turbotax.com.
- According to Porter (1994), in 1982 Taxadvisor was developed to solve
problems dealing with income and transfer tax planning for individuals.
In 1985, a program called Financial Advisor was the first commercially
successful system to be used by tax consultants. Porter notes the most
successful and best known tax expert system, ExperTAX, was developed by
Coopers & Lybrand in approximately 1986. It was built using a rule-based
expert system technology. The system started with 2000 rules and the
number of rules increased to more than 3000 by 1994. The program used an
"intelligent questionnaire" for data gathering. It replaced long
questionnaires that tax preparers had to complete. Also, ExperTAX guides
the user through a tax planning analysis to identify decisions that will
affect the client's tax liability for the year. ExperTAX helps identify
issues that need clarification but it leaves the final decisions to
human experts. Research (Shpilberg and Graham, 1986) indicated the
productivity of staff accountants using the system increased.
- So what category of DSS is tax preparation software? The tax preparation
software that I am most familiar with is best categorized as
knowledge-driven DSS. I have not seen the code nor read how specific
packages are programmed, but it seems most likely that "rules" are used
to provide decision support. If rules and a rule engine provide the
functionality and are the dominant component of the DSS, then it should
be categorized as a knowledge-driven DSS. A tax "expert system" or tax
preparation knowledge-driven DSS provides widespread distribution of tax
expertise.
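The "rules and a rule engine" idea behind a knowledge-driven DSS can be pictured as a small rule base evaluated against a client's data. The sketch below is purely illustrative: the rules, thresholds, and field names are invented for this example and do not describe TurboTax, ExperTAX, or any actual tax package.

```python
# Minimal sketch of a rule-based tax-advice DSS.
# All rules, thresholds, and field names are hypothetical illustrations.

def advise(taxpayer):
    """Apply every rule whose condition matches and collect its advice."""
    rules = [
        # (condition, advice) pairs play the role of an expert system's rule base
        (lambda t: t["mortgage_interest"] + t["charitable_gifts"] > t["standard_deduction"],
         "Consider itemizing deductions instead of taking the standard deduction."),
        (lambda t: t["ira_contribution"] < t["ira_limit"],
         "You may still be able to reduce taxable income with an IRA contribution."),
        (lambda t: t["self_employed"],
         "Self-employment may allow home-office and equipment deductions."),
    ]
    return [advice for condition, advice in rules if condition(taxpayer)]

taxpayer = {
    "mortgage_interest": 9000, "charitable_gifts": 2500,
    "standard_deduction": 10000, "ira_contribution": 1000,
    "ira_limit": 3000, "self_employed": False,
}
for line in advise(taxpayer):
    print(line)
```

As in ExperTAX, a system like this flags issues and leaves the final decision to the human user; a commercial package would have thousands of such rules maintained by tax experts.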
References
- Brown, Carol E. and Daniel E. O'Leary, Introduction to Artificial
Intelligence and Expert Systems, 1994 at URL: http://www2.bus.orst.edu/faculty/brownc/es_tutor/acc_es.htm.
- Kneale, Dennis. "How Coopers & Lybrand Put Expertise into its
Computers." Wall Street Journal, Nov. 14, 1986.
- Porter, Eugene P., "Tax expert systems and future development," The CPA
Journal Online, January 1994, URL: http://www.nysscpa.org/cpajournal/old/14979937.htm.
- Shpilberg, David and Lynford E. Graham, (1986) "Developing ExperTAX: An
Expert System for Corporate Tax Accrual and Planning," Auditing. 6(1),
pp. 75-94.
- Smith, L. Murphy, "Accounting expert systems," The CPA Journal Online,
November 1994, URL: http://www.nysscpa.org/cpajournal/old/16458936.htm.
From ACM News, June 4, 2003
"Are You Ready for Social Software?"
Darwin (05/03); Boyd, Stowe
- A Working Model managing director Stowe Boyd predicts that social software will effect dramatic changes in businesses' marketing strategies and customer interplay, and transform internal and external communication and collaboration. Boyd assumes that the purpose of social software will be the reverse of that of conventional groupware and other project- or organization-centered collaborative tools. Social software will support individuals' desire to combine into groups to pursue personal interests. Groupware and traditional software follow a top-down model that imposes a larger system (an organization or project) on individuals, while social software follows a bottom-up approach oriented around individuals, who establish relationships with others based on their personal goals, preferences, and connections; this intercommunication sets the foundation for a network of groups. Boyd defines social software as that which supports conversation between individuals or groups, social feedback, and social networks. The second capability enables a digital reputation to be built through group ratings of individual contributions, while the third allows people to digitally express their personal relationships and lay the groundwork for new relationships. Boyd reasons that social software is now poised to take off because low-cost, high-bandwidth tools such as blogs and network systems like Ryze and LinkIn are available. Boyd writes that this availability, "when coupled with the critical mass of millions of self-motivated, gregarious and eager users of the Internet, means social software is certain to make it onto 'the next big thing' list."
http://www.darwinmag.com/read/050103/social.html
From ACM News, June 4, 2003
"New Software Helps Teams Deal With Information Overload"
EurekAlert (06/04/03)
- Collaborative Agents for Simulating Teamwork (CAST), a software program co-developed by John Yen of Penn State University's School of Information Sciences and Technology, is designed to improve teams' decision-making process and augment cooperation between members by focusing on relevant data. "CAST provides support for teams by anticipating what information team members will need, finding commonalities in the available information, and determining how that information should be processed," explains Yen. He believes the customizable software could help military officers handle the hundreds of thousands of ground-sensor and satellite readings they receive every hour, and more rapidly adjust to changing battlefield situations. Yen adds that disease epidemics and potential terrorist threats could also be monitored with CAST. The software incorporates what Yen calls "shared mental models" about team goals, team process and structure knowledge, and postulations about problems. "A computer program that acts as a team member may be more efficient in processing information than a human teammate," he notes. The software's development has been financed under the Department of Defense's Multidisciplinary Research Program of the University Research Initiative, which splits the grant between Penn State, Texas A&M, and Wright State University. Yen, who started working on CAST when he was at Texas A&M, continues to refine the software with Texas A&M collaborators Michael Miller and John Volz.
http://www.eurekalert.org/pub_releases/2003-06/ps-nsh053003.php
"In Future, Foot Soldier Will Be Plugged Into a Massive Network"
Associated Press (06/02/03); Regan, Michael P.
- The "Scorpion ensemble" under development at the U.S. Army Soldier Systems Center is intended to increase mobility by reducing the amount of equipment troops must carry while boosting both safety and effectiveness in the field through numerous technological enhancements wired directly into the uniform. The ensemble will connect the wearer to the Future Combat System, a planned network of satellites, unmanned planes, and robot vehicles that provides soldiers with reconnaissance data in order to determine their location and coordinate military operations. Dennis Birch of the Army center's Objective Force Warrior technical program reports that developers are trying to enable soldiers to interact with the system through voice command, although a control panel integrated into the sleeve is also under consideration. The Scorpion ensemble will incorporate a sensor-studded undershirt to monitor the wearer's vital signs; built-in tourniquets that could expand or contract via remote control; an armored load carriage that stores circuitry, batteries, water, and ammunition; and a helmet equipped with cameras to capture concealed enemies, screen attachments to display camera images as well as Future Combat System telemetry, and a laser-engagement system that can distinguish between friends and enemies. Birch notes that new devices can be added to the ensemble as they are developed, and future enhancements in the planning stages include advanced camouflage. Meanwhile, Paula Hammond of MIT's Institute for Soldier Technologies notes that projects are underway to develop exoskeletons that boost soldiers' speed and strength, and apparel that can protect wearers from chemical agents or treat injuries.
From ACM News, June 2, 2003
"Mimicry Makes Computers the User's Friend"
New Scientist (05/28/03); Ananthaswamy, Anil
- Noriko Suzuki, a researcher at ATR Media Information Science Laboratories in Kyoto, Japan, believes computers and robots can become more user friendly if they imitate how people speak. He tested this theory by asking volunteers to help a computerized character build toys from blocks and to help it name the toys. The volunteers were told that the character had the speech capabilities of a toddler just beginning to speak. When the character increased its imitation of the volunteers' speech patterns, including rhythm, intonation, and volume, the volunteers gave the character its highest marks in such areas as cooperation, ability to learn, and friendliness. The character received the highest marks when imitating 80 percent of the volunteer's voice, while the 20 percent not imitated indicated the character had some level of independence. Suzuki believes that incorporating imitation into verbal exchanges between people and computers and robots would enhance such experiences. Human-computer interaction expert Timothy Bickmore at the Massachusetts Institute of Technology agrees, saying the approach could be used effectively in entertainment, computer games, and toys.
http://www.newscientist.com/news/news.jsp?id=ns99993769
From ACM News, May 23, 2003
"Spinning Around"
Sydney Morning Herald (05/20/03); Adams, David
- Today's knowledge workers are inundated with information--too much to be useful, says Web site content management expert Gerry McGovern. He says society is working under the principles of the physical economy, where "more is better," but should in fact adjust to the new digital economy, where hardly anything is scarce. Cheaper devices, faster processors, and more abundant storage all result in too much information to be useful, says McGovern. Author David Shenk wrote in his 1997 book, "Data Smog: Surviving the Information Glut," that people historically consumed as much as possible in order to survive, and have carried that habit over into the information age. This results in "information obesity," writes Shenk. McGovern, who spoke recently at the Australian Computer Society's annual conference, says part of the responsibility lies with content authors. He says about 70 percent of all Web sites remain unread and that many corporate intranets are self-defeating. In addition, immersing oneself in data does not result in better decisions because people are too close to the situation to gain proper perspective. Silicon Valley is emblematic of an attitude where the more a person works, reads, and converses, the smarter and more productive they are. A 2000 Pitney Bowes study on information overload found that top performers used self-messaging, previewing, and knowledge indexing techniques to manage information.
http://www.smh.com.au/articles/2003/05/19/1053196515705.html
From ACM News, May 12, 2003
"PCs Have Become Just Like Appliances: Both Are Too Complex"
Wall Street Journal (05/12/03) P. B1; Gomes, Lee
- Consumers are complaining about how complex PCs are, and wishing that the devices were as easy to use as home appliances, writes Lee Gomes. Unfortunately, Gomes observes that appliances are becoming harder to use, while PC difficulty has not decreased. He explains that "a new epidemic of man-machine alienation" has opened up because the chips that add intelligence to PCs are now so cheap that they can be incorporated into every conceivable consumer device. Appliance development is following a similar track to that of software, in that the devices' value is determined by how many features they possess, rather than simplicity. Gomes argues that the feature most lacking in home appliances is good design, and cautions that good design does not necessarily mean simplicity, since following such a tack may rob the appliance of much of its usefulness. "Sometimes, good designers will have the courage of their convictions to force people to spend time teaching themselves how to use something worthwhile," Gomes writes. However, Gomes finds it frustrating that, these days, operating a once easy-to-understand appliance like a toaster requires reading a hefty manual because the device has been enhanced with smart electronics.
"Digital Maps Tell the Time"
BBC News (05/09/03)
- Scientists at MIT's Media Lab Europe in Dublin, Ireland, are trying to incorporate a feature into digital maps that gauges time. Users could know whether they have enough time to walk to the park during lunch, for instance. Based on walking speed and time available, the map will display a bubble plan indicating how far a person can stroll while still arriving on time at a designated place. "The bubble is not a perfect circle as the software is taking account of actual street patterns and the physical features of a city," says scientist Brendan Donovan. He says the system could be made even more useful if merged with various features of a metropolitan area such as cafes or train stations. The Media Lab scientists envision using the software at stalls in hotels or tourist information areas, and say it might also be used on the street when combined with a GPS-equipped handheld computer, allowing users to track their location as they walk around a city.
http://news.bbc.co.uk/1/hi/technology/2986655.stm
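The "bubble" described above amounts to a reachability computation over the street network rather than a simple circle: given a walking-time budget, find every point whose shortest-path travel time is within it. A minimal sketch using Dijkstra's algorithm on a toy graph (the place names and walking times are invented for illustration, not taken from the article or the actual system) might look like this:

```python
import heapq

def reachable(graph, start, budget_min):
    """Return the set of nodes reachable from start within budget_min minutes."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale heap entry; a shorter path was already found
        for neighbor, minutes in graph.get(node, []):
            nd = d + minutes
            if nd <= budget_min and nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return set(dist)

# Toy street graph: edges weighted by walking minutes (hypothetical layout).
streets = {
    "hotel": [("square", 5), ("station", 12)],
    "square": [("park", 6), ("cafe", 3)],
    "cafe": [("park", 2)],
}
print(sorted(reachable(streets, "hotel", 10)))  # places within a 10-minute walk
```

Because travel times follow the streets, the boundary of the reachable set is irregular, which matches Donovan's observation that the bubble is not a perfect circle.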
"Chinese Lab Hopes to Commercialize Sign-Language Recognition Platform"
EE Times (05/07/03); Clendenin, Mike
- The Institute of Computing Technology (ICT) in China has developed a software platform that can translate spoken and written Chinese and other languages into sign language via a virtual character that hearing-impaired users would read. Digital gloves are used to make signs that the program translates back into verbal or written cues. The software is about 85 percent to 90 percent accurate, according to researchers. The program taps into a vast database of words, facial expressions, and hand gestures the ICT has collated over several years, but the software needs to be refined in several areas. The digital character's face needs to be more expressive, while the correspondence of hand gestures to facial expressions must be improved. "It is very difficult to get the face expressions but it is important because if you do not use the expression, you cannot always understand the meaning," explains Wang Zhaoqi of ICT's Digital Laboratory. Researchers are currently building a database of facial expressions using computer-generated models of volunteers who make expressions while wearing an array of sensors. The program is currently restricted to desktop PCs and notebooks with robust chips, and ICT project manager Huang Shu thinks getting the software to work with handheld devices such as personal digital assistants will be key to the technology's commercialization.
http://www.eetonline.com/story/OEG20030307S0011
From Knowledge@Emory, May 7 - June 3, 2003
Mining the Gap: How Cultural Differences Play Out in Communications
- The importance of cross-cultural communication in business is often underestimated. A little ignorance is the only ingredient needed to create an angry customer, an alienated employee, or an irritated partner. To help bridge the gap, Deborah Valentine, a business communications specialist at Emory University's Goizueta Business School, and a coauthor have penned a primer called Guide to Cross-Cultural Communication. The guide offers a short course on some of the basic paradigms used by academics in classifying cultures, along with practical tips on how to overcome cultural barriers.
Read the article
From ACM News, May 7, 2003
"Minding Your Business"
Science News (05/03/03) Vol. 163, No. 18, P. 279; Weiss, Peter
- Researchers such as Queen's University's Roel Vertegaal are developing hardware and software designed to mitigate information overload without restricting users' access to data; Microsoft Research's Eric Horvitz expects these projects to yield digital devices that "become a lot less like tools and a lot more like companions and collaborators who understand your intentions and goals." Several efforts are investigating attentive-user interfaces that can respond to a person's bodily cues (eye movements, gestures, etc.) or electronic resources indicating factors such as a person's location or current activities. Daniel Russell of IBM's Almaden Research Center pioneered video camera technology that fixes on a person's gaze, and his lab has included it in a prototype interface that notes what section of a Web page a person is looking at and how long he or she lingers on it; such a product could become a valuable tool for Web-based education--it could, for instance, tutor users better based on its observations. Vertegaal's lab has incorporated the technology into its eyePROXY system as well as cell phones and other devices that can detect when users' attention is directed elsewhere, and store messages and other data so as not to interrupt them. Microsoft Research has devised an enhanced personal digital assistant that collects data from an accelerometer, a touch sensor, and a proximity sensor to determine how mobile users plan to use the device, so it can redirect messages or other information accordingly. Horvitz and other Microsoft researchers are developing software that intuits and anticipates users' activities by studying the data received through their computers and other digital gear. One system they created automatically schedules appointments based on important email. A key component of attentive interfaces is their ability to get users' attention: For example, several research projects include designs for automotive tools that can rouse drowsy drivers by turning on the radio or vibrating the steering wheel.
http://www.sciencenews.org/20030503/bob8.asp
From ACM News, May 5, 2003
"Intel to Release Machine Learning Libraries"
New Scientist (05/02/03); Knight, Will
- Intel plans to release a series of Bayesian network software libraries at the Neural Information Processing Systems 2003 conference on June 6. It is hoped that the release will enable software developers to incorporate improved machine learning capabilities into their programs. Such programs would be able to dynamically learn through the continuous modification of probabilities via a fixed set of rules. The libraries will make the most of microprocessor hardware from Intel, allowing programmers who lack extensive machine learning knowledge to take advantage of Bayesian networks and build software with improved efficiency. Intel development team leader Gary Bradski says the libraries could be applied to data-mining, robotics, computer vision, diagnostics, bioinformatics, and decision-making systems. Programs that already make use of Bayesian networks include applications for identifying unsolicited commercial email by studying previous messages labeled as legitimate or illegitimate. The wide employment of Bayesian networks for machine learning has been impeded because the method is computationally intensive, but Bradski says the newest desktop computers have sufficient power to support that level of computation. "A standard software library has yet to emerge, and once one emerges I think that it will be widely used," says Bayesian networks expert Michael Jordan of the University of California at Berkeley.
http://www.newscientist.com/news/news.jsp?id=ns99993691
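The "continuous modification of probabilities" that makes Bayesian methods suited to learning can be illustrated with a single application of Bayes' rule to the spam-filtering example in the summary. This sketch does not use Intel's libraries, and the word frequencies are invented for illustration:

```python
# One Bayes-rule update: P(spam | word) computed from P(word | spam),
# P(word | ham), and a prior P(spam). All probabilities are hypothetical.

def posterior_spam(p_word_given_spam, p_word_given_ham, prior_spam):
    """Return P(spam | word observed) via Bayes' rule."""
    evidence = (p_word_given_spam * prior_spam
                + p_word_given_ham * (1 - prior_spam))
    return p_word_given_spam * prior_spam / evidence

# Suppose "offer" appears in 40% of known spam but only 5% of legitimate
# mail, and half of all incoming mail is spam.
p = posterior_spam(0.40, 0.05, 0.50)
print(round(p, 3))  # → 0.889

# The posterior becomes the prior for the next observation, so belief is
# continuously revised as new labeled messages arrive.
p2 = posterior_spam(0.40, 0.05, p)
print(round(p2, 3))
```

Chaining updates this way, each labeled message nudges the model's probabilities, which is the kind of incremental learning the article attributes to Bayesian-network software.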
"Next-Generation Data Storage Gets Interesting"
NewsFactor Network (05/01/03); Brockmeier, Joe
- Forthcoming and already available storage technologies boast features that aim to embed intelligence in storage networks so that companies can better comply with regulations and align data management to business goals. International Data (IDC) research director Charlotte Rancourt expects several vendors to roll out commercial Internet SCSI (iSCSI) products in the next year, while Sun Microsystems' James Staten believes the technology will make a splash "in low-end, small business, [Microsoft] Windows-dominant areas." He notes, however, that many companies lack the 1-gigabit Ethernet infrastructure iSCSI requires, though Ethernet's transition to 10-gigabit in 2005 should give the technology a push. Advanced technology attachment (ATA) drives can now move beyond the desktop thanks to speed and capacity upgrades, and Rancourt expects them to become more prevalent in vendors' midrange offerings. Genevieve Sullivan of Hewlett-Packard's Network Storage Solutions notes that storage virtualization allows single-interface storage environment management and maintenance, and says that vendors aim to move the technology's intelligence component from storage devices into network switches in order to alleviate performance difficulties. EMC technology analysis director Ken Steinhardt believes it will become critical for storage vendors to offer products that can help companies "meet compliance requirements that regulatory bodies assign without going through difficult personnel challenges." Rancourt does not foresee the emergence of new storage competitors because of "prohibitive" entry hurdles, and doubts that any players will dominate the market.
http://www.newsfactor.com/perl/story/21407.html
"Intel Teaches Computers to Lip-Read"
VNUNet (04/29/03); Dhanendran, Anthony
- Intel's Audio Visual Speech Recognition software is an open-source speech recognition program that uses face detection algorithms from the company's OpenCV computer vision library to read lips in an effort to improve the accuracy of speech recognition programs. The software tracks mouth movements in a process similar to human lip reading, and was designed to boost the efficacy of voice-recognition systems in noisy environments. Intel's Justin Rattner, director of the company's Microprocessor Research Labs, says, "Intel wants to develop technology that allows computers to interact with the world the way humans do."
http://vnunet.com/News/1140546
From ACM News, April 18, 2003
"Women Need Widescreen for Virtual Navigation"
New Scientist (04/17/03); Marks, Paul
- Computer scientists from Microsoft's Redmond, Wash., research lab and Carnegie Mellon University told attendees at a Florida computer usability conference last week that men are better than women when it comes to navigating through virtual environments using typical computer displays and graphics software. This is related to men being generally faster than women in being able to mentally map out the environment and their spatial relation to it--a talent that extends to the real world, according to Microsoft researcher Mary Czerwinski. She and fellow researcher George Robertson, along with Carnegie Mellon's Desney Tan, ran a series of tests on volunteers to see if they could improve females' virtual navigation skills. The results indicated women can become just as adept as men with certain modifications, such as a larger screen to provide a wider field of view and smoother, more realistic animation. "You have to generate each image frame so the optical flow simulates accurately the experience of walking down, say, a hallway," Robertson explained. It is much less disorienting for women if the animation is not jerky, but many 3D software applications do not support smooth image rendering. Female architects, designers, trainee pilots, and video gamers are among those who could benefit from a modified virtual navigation system.
http://www.newscientist.com/news/news.jsp?id=ns99993628
"Battlefield Internet Gets First War Use"
Associated Press (04/16/03); Rising, David
- Wednesday's raid by the U.S. Army's 4th Infantry Division on the Taji air base in Iraq inaugurated the Force 21 Battle Command Brigade and Below (FBCB2), an advanced networking system that monitors the movement of combat vehicles in a sort of "battlefield Internet." The system maps out the position of friendly and unfriendly forces on the battlefield via a videogame-like overview commanders can use to coordinate troop movements from a tactical operation center. The FBCB2 is a mesh network of touchscreen computers installed in vehicles; the screens display icons--blue to represent friendly forces and red to represent enemy forces--that soldiers can touch to access target information or send text messages to other vehicles or the command center, thus allowing battlefield data to be quickly updated with a minimum of radio chatter. Vehicle positions are tracked by a GPS satellite navigation system, thus reducing incidents of friendly fire or deviations from the battlefield strategy. FBCB2 computers are protected from heat by shock-absorbent cases designed by Northrop Grumman, and have no cooling fans to avoid contamination by sand or water; they are also equipped with a remote-controlled self-destruct system, and can re-scale battlefield maps or overlay them with terrain features or satellite imagery thanks to Sun Microsystems' Solaris operating system.
http://www.msnbc.com/news/901268.asp
"Making More of a Noise"
Financial Times - FTIT Survey (04/16/03) P. 11; Hayward, Douglas
- While currently inflexible and expensive to deploy, voice technology is making improvements and gaining acceptance in certain areas. Experts say voice commands would be most useful in factory or vehicle settings where users cannot easily use manual input tools, but so far the largest adopters have been financial services and telecommunications companies. Already, voice technology is used in a large percentage of call centers, augmented by human agents who are brought in for call activity of greater business value. Voice technology also plays a significant role in self-service, letting customers buy tickets on the phone, check flight schedules, and receive audible email and SMS messages. Still, voice technology systems are segmented from other technology components and require special skills to install and fine-tune; companies such as IBM tout a "multimodal" development environment where coders would be able to create voice features at the same time they program a traditional Web application, for example. But a standards battle between VoiceXML and Microsoft's Salt protocols is creating some hesitancy in the developer community. With those issues unresolved, people in the voice technology industry say more capable systems--that could interact more fluidly with users--are unrealistic for mass deployment. ScanSoft international vice president Peter Hauser cites a voice-enabled PDA due out this year as just the first of a deluge of voice-enabled products, such as cell phones without keypads and voice-responsive remote controls.
From ACM News, April 4, 2003
"Queen's Researchers Invent Computers That 'Pay Attention' to Users"
ScienceDaily (04/02/03)
- Scientists from the Human Media Lab (HML) at Queen's University in Ontario have developed an Attentive User Interface (AUI) designed to relieve users of the morass of messages they receive on their electronic devices by evaluating the user's attention span and the importance of each message as it relates to the user's current activities. "We now need computers that sense when we are busy, when we are available for interruption, and know when to wait their turn--just as we do in human-to-human interactions," explains HML director Dr. Roel Vertegaal. "We're moving computers from the realm of being merely tools, to being 'sociable' appliances that can recognize and respond to some of the non-verbal cues humans use in group conversation." Many of the breakthroughs the Queen's team has implemented are derived from studies focusing on the role of eye contact in human group conversation. This research has led, for example, to an eye contact sensor that enables devices to establish a user's presence and level of attention. Other AUI applications developed at HML include robotic eyes equipped with eye contact sensors that allow computers to relay their attentiveness to the user; attentive messaging systems that forward email messages to devices in use; a videoconferencing system that uses Web-transmitted video images to communicate eye contact; cell phones outfitted with eye contact sensing glasses that can sense when users are engaged in face-to-face conversations; home appliances that follow a user's voice commands only when the user's gaze is directed at them; and TVs that pause when no one is watching. Dr. Vertegaal and his research team will detail their latest research at next week's ACM CHI 2003 Conference on Human Factors in Computing Systems in Fort Lauderdale, FL, while the March issue of Communications of the ACM features a special section on AUIs.
http://www.sciencedaily.com/releases/2003/04/030402072712.htm
For more information on CHI, visit http://www.chi2003.org/index.cgi.
"Software Uses Pictures to Represent Info People Monitor"
EurekAlert (04/04/03)
- Research at the Georgia Institute of Technology puts personal information updates on a separate networked display in a way that does not distract the user, but provides a comprehensive and eye-pleasing ambiance. Associate professor of computing John Stasko and his students are testing the application, called InfoCanvas, and plan to present their work at the ACM's CHI 2003 meeting. Instead of cluttering a person's main monitor with information or news they are tracking, the idea is to consolidate that changing information in a theme-based picture on another display. Prototype themes include an aquarium setting, mountain camp site, or desert. Users designate which visual elements will represent certain information, such as stock market indexes, new emails, temperature, or traffic congestion. Stasko's own InfoCanvas, for example, has a seagull that flies lower or higher according to fluctuations in the Dow Jones index. Stasko suggests InfoCanvas could be used on a wall-mounted display, where it would serve as an informational painting. Users could touch elements in order to read more information in a pop-up box, or follow a Web link to the original source of news. The research team is already testing InfoCanvas with three users and plans more studies to determine if InfoCanvas provides easier access to information than a traditional text-based Web portal, for instance.
http://www.eurekalert.org/pub_releases/2003-04/giot-sup040303.php
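The seagull example above is just a data-to-visual mapping: a changing value linearly scaled onto a visual attribute. A minimal sketch in Python (the Dow range and canvas size are invented for illustration; this is not Stasko's actual code):

```python
def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] to [out_lo, out_hi], clamped."""
    value = max(lo, min(hi, value))
    return out_lo + (value - lo) * (out_hi - out_lo) / (hi - lo)

# Hypothetical: a Dow index between 7000 and 11000 drives the seagull's
# vertical pixel position on a 480-pixel-tall canvas (higher index ->
# higher on screen, i.e. smaller y coordinate).
def seagull_y(dow, canvas_height=480):
    return canvas_height - scale(dow, 7000, 11000, 0, canvas_height)

print(seagull_y(9000))   # midpoint of the range -> middle of the canvas, 240.0
print(seagull_y(12000))  # out-of-range values clamp to the top, 0.0
```

Any other InfoCanvas element (email count, temperature, traffic) would use the same pattern with its own input range and visual attribute.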
"Nag-O-Matic"
Business 2.0 (03/03); Needleman, Rafe
- Stanford University researcher B.J. Fogg, an experimental psychologist, is leading the way in an emerging discipline he calls captology--using technology to influence people. He is currently working on ways to get people to consume more water and get more sleep, for example. He uses PDA-based software to offer cues, monitor actions, and provide rewards when a target level is reached, and the software automatically tests a variety of approaches to find which works best. People tend to perform better when observed, according to Fogg's research. This is evident in Qualcomm's OmniTracs system, a satellite-based system that tracks the location of a vehicle and whether drivers are going too fast or straying from a predetermined route. Similarly, Davis Instruments' CarChip monitors fuel consumption and risky driving behaviors in an effort to improve how people drive. Another captology product is Realityworks' Baby Think It Over doll, which simulates the behavior patterns of a real infant in an effort to discourage teens from getting pregnant. And Nubella has unveiled a product that tracks grocery purchases via supermarket bar-code scanners and customers' club cards; it mails out coupons to correct people's diet deficiencies. Fogg will discuss the branches of persuasive technologies at ACM's CHI03 Conference next week in Fort Lauderdale, FL.
http://www.business2.com/articles/mag/0,1640,47005,00.html
From DSS News, May 11, 2003
Ask Dan: What are some examples of web-based DSS?
- Most people who read about decision support systems want to try one.
This is understandable given the promise associated with an information
system that is intended to support and improve decision making. A
previous Ask Dan! identified some DSS demonstration software that could
be downloaded (DSS News, 01/05/2003). Those systems show some of the
possibilities, but web-based and web-enabled DSS are more accessible
than, and just as powerful as, DSS built using client/server
technologies. In general,
it is much easier to try or "test drive" a web-based DSS. Web
technologies have expanded the range of DSS that are built and deployed,
but some "new" DSS are more impressive and consequential than others. A
few vendors have demonstrations on their sites of DSS built using their
software and there are many DSS targeted to the general public that are
available on the web. In general, narrowly targeted or focused DSS and
intra-organizational (internal) DSS are not accessible to the public.
-
Most types of DSS can be found on the web, but it is hard to find
publicly available examples of sophisticated, web-based,
communications-driven DSS. A first generation web-based GDSS, TCBWorks
(cf., Dennis, 1998), was available for academic use in the mid-90s, but
it is no longer accessible. Current demonstrations of web-based,
communications-driven DSS tend to have low levels of functionality and
focus on asynchronous communications.
-
Recently, Prof. Shashidar Kaparthi and I (Power and Kaparthi, 2002)
wrote an article that expanded and updated Chapter 11 in my DSS book.
Also, this semester my students evaluated some web-based DSS. So this
Ask Dan! shares some links and a fun exercise involving categorizing and
evaluating Web-based DSS. Innovative examples of all 5 categories of DSS
can be found on the Web; see if you can sort out the following web-based
DSS.
- Web-based DSS Links
-
Some of these sites have multiple decision support tools and you may
need to "look hard" to find them. Some of the simpler tools are called
"calculators" and the more complex tools are full-fledged DSS.
-
Big Charts -- http://bigcharts.marketwatch.com/
-
Databeacon Demos -- http://www.storydata.com
-
Documentum eRoom - http://www.documentum.com
-
elaws Family and Medical Leave Act Advisor -
http://www.dol.gov/elaws/fmla.htm
-
Fidelity Calculators - http://www.401k.com/401k/tools/tools.htm
-
Inspire - http://interneg.carleton.ca/inspire/
-
MSN Autos - http://autos.msn.com/Default.aspx
-
Pinnacor - http://finance.pinnacor.com/
-
Principal Financial - http://www.principal.com/calculators/index.htm
-
WATERSHEDSS - http://www.water.ncsu.edu/watershedss/
Categorization and Evaluation Exercise
Please evaluate one of the above web-based DSS. What type or category of
DSS is the application? Communications-driven, Data-driven,
Document-driven, Knowledge-driven or Model-driven? What is the purpose
of the specific DSS? Who are the targeted or intended users of the DSS?
What is your evaluation of the DSS? Is it useful? Is it likely to
improve decision making? Please explain and justify your conclusions.
References
Dennis, Alan R., "Lessons from Three Years of Web Development",
Communications of the ACM, v41, n7, July 1998, pp. 112-113.
Power, D., "Where can I find or download a DSS demo?" DSS News,
01/05/2003.
Power, D. and S. Kaparthi, "Building Web-based Decision Support
Systems", Studies in Informatics and Control, Vol. 11, Number 4, Dec.
2002, pp. 291-302, URL: http://www.ici.ro/ici/revista/sic2002_4/art1.pdf
Technology Hits a Midlife Bump, an article from the New York Times
From ACM News, May 2, 2003
"Making Intelligence a Bit Less Artificial"
New York Times (05/01/03) P. E1; Guernsey, Lisa
- Amazon, NetFlix, and other online retail services rely on automated recommender systems to anticipate customer purchases based on past choices; however, a February report from Forrester Research found that just 7.4 percent of online consumers often bought products recommended by such systems, roughly 22 percent ascribed value to those recommendations, and about 42 percent were not interested in the recommended products. To improve the results requires the enhancement of recommendation engines with human intervention, according to TripleHop Technologies President Matt Turck. One of the key ingredients of today's recommendation technology is collaborative filtering, in which a buyer is matched to others who have bought or highly rated similar items. Commonplace problems with this methodology include cold starts, in which predicting purchases is difficult because the system lacks a large database of people with similar tastes, and the popularity effect, whereby the computer delivers recommendations that are pedestrian and prosaic. Some companies try to avoid such problems by adding a human element: Barnesandnoble.com, for instance, employs an editorial staff to tweak recommendations. "If it is not vetted and monitored by humans and not complemented by actual hand-selling, as we say in the book industry, it doesn't feel like there is anybody there," notes Barnesandnoble.com's Daniel Blackman. Some recommendation engines, such as Amazon's, can improve results with customer input via continuous editing of consumer profiles and special features--alerting the e-tailer not to make recommendations based on a purchase that is a gift for someone else, for example. Some companies also want recommender systems to prioritize surplus items in order to better manage inventory, a strategy that could engender consumer distrust, say software developers.
http://www.nytimes.com/2003/05/01/technology/circuits/01reco.html
"Artificial Intellect Really Thinking?"
Washington Times (05/01/03) P. C9; Reed, Fred
- A computer can be labeled as artificially intelligent, but the intelligence--if indeed that is what it is--actually resides in the program, writes Fred Reed. However, he points out that such programs, once deconstructed, consist of incremental steps that by themselves do not indicate true intelligence. A program such as the one IBM's Deep Blue used to beat chess champion Garry Kasparov in 1997 works out all potential moves from a given board position simply and mechanically via a "move generator." Just as mechanical are the rules that the program uses to choose the optimal maneuver. However, Deep Blue could be rated as intelligent by mathematician Alan Turing's supposition that an intelligent computer can interact with a person so well that that person cannot distinguish it from a human being. Reed writes that most people can identify intelligence without clearly defining it, so the term itself is subject to interpretation. "For practical purposes, and certainly in the business world, the answer seems to be that if it seems to be intelligent, it doesn't matter whether it really is," he notes. The convergence of speech recognition, robotic vision, and other technologies is paving the way for practical machines that at least appear to be intelligent, such as robots designed to care for the elderly in Japan.
http://www.washtimes.com/business/20030501-20933620.htm
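The "move generator" plus mechanical selection rule that Reed describes is, in miniature, minimax search: enumerate successor positions, evaluate the leaves, and back the scores up the tree. A toy sketch over an abstract game tree (the tree and its scores are invented; Deep Blue's actual search was vastly more elaborate):

```python
def minimax(node, maximizing):
    """Return the best achievable score from `node`.
    Leaves are numbers (position evaluations); internal nodes are lists
    of child nodes, standing in for a chess move generator."""
    if isinstance(node, (int, float)):  # leaf: mechanical evaluation rule
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Invented two-ply tree: the maximizer picks a branch, then the
# minimizer picks the leaf that is worst for the maximizer.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # branch minima are 3, 2, 0 -> 3
```

Each step is, as Reed says, simple and mechanical; any appearance of intelligence emerges only from the depth and breadth of the search.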
"The Great IT Complexity Challenge"
NewsFactor Network (04/30/03); Brockmeier, Joe
- Autonomic computing promises to clear up complexity in company IT operations, freeing people from mundane maintenance tasks and handing those functions over to computers themselves. Major IT vendors are latching on to autonomic computing not only as a way to reduce complexity and save money, but also to make the IT infrastructure more adaptive to business demands. IBM autonomic computing director Miles Barel says there are four required elements of autonomic computing: Self-configuration, self-healing, self-optimization, and self-protection. He also gives a maturation schedule for autonomic computing and says most enterprises have deployed managed services, but do not use prediction or adaptive systems. Gartner analyst Tom Bittman says that standards are a looming issue, and notes that customers are wary of buying all of their products from one vendor in order to get autonomic capability; he explains that autonomic computing eventually will mean less room for actual staff in the IT department since systems will largely run themselves, but at the same time the purpose is primarily increased flexibility and performance, not saved costs. Sun Microsystems' Yael Zheng says autonomic computing will move IT employees up the ladder in terms of what they contribute to the business. Instead of maintaining hardware, they can focus on more critical functions such as writing business applications. Autonomic computing involves virtualizing the IT infrastructure, which also boosts utilization rates since resources can be provisioned across the enterprise unhindered.
http://www.newsfactor.com/perl/story/21393.html
"Personalizing Web Sites With Mixed-Initiative Interaction"
IT Professional (04/03) Vol. 5, No. 2, P. 9; Perugini, Saverio; Ramakrishnan, Naren
- Saverio Perugini and Naren Ramakrishnan of Virginia Polytechnic Institute believe that a truly personalized Web site will personalize the user's interaction, and a mixed-initiative architecture where the user can control the interaction is the best option. They characterize browsing as an example of directed dialogue because the Web site takes the initiative by providing an array of hyperlinked choices that the user must respond to; the disadvantages of such a model, which usually consists of a Web site with multiple browsers, include supporting an exhaustive number of potential browsing scenarios and over-specification of the personalization goal. Perugini and Ramakrishnan believe plugging an out-of-turn interaction toolbar into the browser will support mixed-initiative interactions and enable the user to take charge within the Web site: This will eliminate the site's need to directly uphold all potential interfaces within the hyperlink framework, reduce the interface's clutter, and make the interaction more akin to a natural dialogue. Web site personalization is streamlined to a partial evaluation of a representation of interaction, and the authors write that the Extensible Style Sheet Language Transformation (XSLT) engine is a good choice for easy deployment. Programs are first modeled in XML, and then an XSLT style sheet is outlined for each user input and applied on the XML source; dead ends are shaved off via additional post processing transformation, while high-level XSLT functions can expedite link label ordering on recreated Web pages, among other processes. Perugini and Ramakrishnan claim that this approach can unify other types of Web site personalization and model dynamic content. Other areas they are investigating include inverse personalization, multimodal Web interface design, and mixed-initiative functionality based on VoiceXML. Transformation-based personalization strategies will become more important as wireless devices and the display of only the most relevant data on handheld computers become prevalent, the authors contend.
http://www.computer.org/itpro/it2003/f2009.pdf
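The partial-evaluation idea--specialize the site model against whatever input the user supplies out of turn, pruning branches inconsistent with it--can be sketched without XSLT using Python's standard xml.etree. The site model and facet names below are invented for illustration; the authors' actual system applies XSLT style sheets to the XML source:

```python
import xml.etree.ElementTree as ET

# Hypothetical site model: each <page> is tagged with facet attributes.
SITE = """<site>
  <page topic="autos" price="low">compact</page>
  <page topic="autos" price="high">luxury</page>
  <page topic="loans" price="low">starter</page>
</site>"""

def specialize(xml_text, **facets):
    """Partial evaluation: keep only the pages consistent with the
    user's out-of-turn input, mimicking what a style sheet would do."""
    root = ET.fromstring(xml_text)
    for page in list(root):  # iterate a copy while removing children
        if any(page.get(key) != value for key, value in facets.items()):
            root.remove(page)
    return [page.text for page in root]

print(specialize(SITE, topic="autos"))  # -> ['compact', 'luxury']
```

Saying "autos" out of turn collapses the site to the autos branch regardless of where the user is in the browsing dialogue; supplying a second facet prunes further.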
From ACM News, April 30, 2003
"Who Loves Ya, Baby?"
Discover (04/03) Vol. 24, No. 4; Johnson, Steven
- Social-network software that visualizes the interactions and relationships within groups of people promises to radically transform large organizations. Mapping social interactions has become easier thanks to the advent of email, chat rooms, Web personals, and bulletin boards. Software designer Valdis Krebs' InFlow--the end result of 15 years' development--provides organizational maps that resemble molecular configurations, with an employee representing each individual molecule; these maps are derived from employee surveys used to determine their collaborative relationships and work patterns. MIT grad student Danah Boyd and programmer Jeff Potter have developed Social Network Fragments, a software program that studies emails sent and received, and from them constructs a map of social networks. The program can outline not only the size of different social groups but the bonds between them. "If we're going to spend more of our social life online, how can we improve what that experience feels like?" asks Judith Donath of MIT Media Lab's Sociable Media Group. "You have this enormous archive of your social interactions, but you need tools for visualizing that history, for feeling like you're actually inhabiting it." Social-network software can also be a useful tool for political analysis.
http://www.discover.com/apr_03/feattech.html
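The kind of map Social Network Fragments derives from email can be sketched as simple co-occurrence counting: the more messages two addresses appear on together, the stronger the tie between them. A toy example (the mailbox contents are invented; the real system is far more sophisticated):

```python
from itertools import combinations
from collections import Counter

# Hypothetical mailbox: each message is the set of addresses on it.
messages = [
    {"ann", "bob"},
    {"ann", "bob", "cat"},
    {"bob", "cat"},
    {"ann", "dan"},
]

def tie_strength(messages):
    """Edge weight = number of messages two people appear on together."""
    edges = Counter()
    for msg in messages:
        for a, b in combinations(sorted(msg), 2):
            edges[(a, b)] += 1
    return edges

edges = tie_strength(messages)
print(edges[("ann", "bob")])  # ann and bob share 2 messages
```

The resulting weighted edge list is exactly what graph-layout software consumes to draw the "molecular" maps the article describes.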
From ACM News, April 25, 2003
"TeleLiving: When Virtual Meets Reality"
Futurist (04/03) Vol. 37, No. 2, P. 44; Halal, William E.
- New technological trends are bringing TeleLiving--conversational human-machine interaction that facilitates a smoother, more comfortable way to educate, shop, do one's job, and even socialize--closer to reality. High-speed broadband communications will supply TeleLiving's backbone, while increasing computational power through the advent of two-dimensional and later three-dimensional chips will also be beneficial. However, there is little consumer demand for broadband or additional computing power because of limited functionality and overcomplicated technology. The key to boosting demand lies in the development of a killer app, and George Washington University's William E. Halal believes TeleLiving's killer app will be the conversational interface. Such a tool will combine animated digital characters or avatars with speech-recognition technology, while the interface itself will be accessible wherever a liquid crystal display (LCD) screen can be installed. Roles the conversational interface is expected to play include a virtual assistant that can help a person set up doctor appointments and consultations remotely, for instance; an enhancement to audiovisual communications that supports high-fidelity real-time images; and a bridge across the digital divide between technological haves and have-nots upheld by the elimination of the need for computer literacy. For TeleLiving to come to fruition, artificial intelligence must advance to the point where a computer can model the cognitive capacity of the human brain, a goal that conservative estimates expect to be reached by 2020.
An example of DSS for Handheld Devices.
From ACM News, April 14, 2003
"GUIs Face Up to the Future"
VNUNet (04/03/03); Sharpe, Richard
- A number of U.K.-based companies are working
to radically enhance the function and usability of graphical user interfaces
(GUIs). Visual Information (VI) is a family-owned affair whose flagship product
is Vi Business Analyst (ViBA), a front-end database that presents a geographically-based
representation of data, which has the potential to save users vast sums of
money. Edinburgh University spinoff Rhetorical Systems is developing a text-to-speech
computer tool based on language processing research spearheaded by CEO Marc
Moens and CTO Paul Taylor; with such a tool, a user can ask the computer
a question, thus prompting a spoken answer. Rhetorical Systems' goal is the
rapid and cost-effective generation of voices, which can be delivered in
a variety of accents. Meanwhile, Lexicle is building animated "agents" that
can respond to inquiries in real time and in a user-friendly format. A key
component of Lexicle's tool is Rhetorical Systems' voice-generation software.
Lexicle's offering is based on artificial intelligence research carried out
by Patrick Olivier and Suresh Manandhar, with the former's focus being the
generation of mouth movements by an animated figure in response to typed-in
queries. Moens notes that GUI developers should launch products based on
their readiness, rather than the economic climate.
http://www.vnunet.com/Features/1139942
"Smart Tools"
BusinessWeek 50 (04/03) No. 3826, P. 154; Port, Otis; Arndt, Michael; Carey, John
- Artificial intelligence is being
employed in many sectors, both public and private, and has led to significant
productivity and efficiency gains across the board. Financial institutions
have reduced incidents of credit-card fraud through the application of neural
networks, which feature circuits arranged in a brain-like configuration that
can infer patterns from data. Several AI technologies--neural nets, expert
systems, and statistical analysis--are implemented in data-mining software
that retailers such as Wal-Mart use to sift through raw data in order to
forecast sales and plan appropriate inventory and promotional strategies.
The medical sector is also taking advantage of data-mining: One application
involves a collaboration between IBM and the Mayo Clinic to detect patterns
in medical records, while another project uses natural-language processing
to map out the "grammar" of amino acid sequences and match them to specific
protein shapes and functions. Government organizations such as the Defense
Department and the National Security Agency are using AI technology for several
efforts related to national security, such as the Echelon telecom monitoring
system. The Defense Advanced Research Projects Agency (DARPA) is a leading
AI research investor, and the breakthroughs that come out of DARPA-funded
projects are more often than not put to civilian rather than military use.
IBM, Hewlett-Packard, Sun Microsystems, and Unisys are also intensely focused
on AI, especially in their pursuit of self-healing, autonomous computer systems
that can automatically adjust their operations in response to fluctuating
demands.
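At its smallest, the kind of neural network used for credit-card fraud screening reduces to a weighted sum of transaction features passed through a threshold. A single-neuron sketch (the features, weights, and bias are invented for illustration; production systems learn these parameters from historical transaction data):

```python
def perceptron(features, weights, bias):
    """Flag a transaction when the weighted feature sum crosses zero."""
    total = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if total > 0 else 0  # 1 = flag for review

# Hypothetical features: [amount / 1000, foreign merchant?, night-time?]
weights = [0.8, 1.5, 0.6]
bias = -1.2

print(perceptron([0.2, 0, 0], weights, bias))  # small local daytime buy -> 0
print(perceptron([2.5, 1, 1], weights, bias))  # large foreign night buy -> 1
```

The "brain-like configuration" the article mentions comes from stacking many such units in layers, with the weights inferred from data rather than set by hand.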
From ACM News, April 11, 2003
"Designing New Handhelds to Improve Human-Computer Interaction"
SiliconValley.com (04/09/03); Gillmor, Dan
- Professionals
in the field of human-computer interaction gathered in Ft. Lauderdale, Fla.,
this week to discuss the latest research projects involving handheld devices.
The annual ACM conference on human-computer interaction, CHI 2003, gave the
world a glimpse of futuristic handhelds. University of California-Berkeley
researcher Ka-Ping Yee used the conference to unveil his "peephole display,"
which virtually enlarged the display of a Palm device. The display acts as
a small window hovering over a larger display, allowing the handheld user
to view a portion of an image. Users move the handheld up-and-down or side-to-side
to see other areas of an image. Researchers at the University of Maryland
and Microsoft have created "DateLens," a smart calendar that offers handheld
users complex scheduling features, such as zooming in so they can highlight
competing events on their schedule. However, there are some questions whether
there is a consumer demand for such scheduling features. A research team
at Carnegie Mellon University and Maya Design are experimenting with devices
based on the PocketPC and mobile phones that would add remote control capabilities
for lights and other household appliances to handhelds. Meanwhile, shorthand
for handhelds, the work of researchers at IBM and in Sweden, appeared to
be one of the more challenging research projects because it would require
handheld users to learn a new way of writing.
http://www.siliconvalley.com/mld/siliconvalley/5593541.htm
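Yee's peephole display amounts to a small viewport whose offset into a larger virtual image follows the device's physical motion, clamped at the edges. A geometric sketch (the dimensions are invented; this is not Yee's implementation):

```python
def peephole(virtual_w, virtual_h, view_w, view_h, dx, dy):
    """Return the visible rectangle (x0, y0, x1, y1) of the virtual
    image after moving the device by (dx, dy), clamped to the edges."""
    x0 = max(0, min(virtual_w - view_w, dx))
    y0 = max(0, min(virtual_h - view_h, dy))
    return (x0, y0, x0 + view_w, y0 + view_h)

# A 160x160 Palm-sized window over a 640x480 virtual canvas: moving the
# device far right and slightly up pins the view to the top-right region.
print(peephole(640, 480, 160, 160, 500, -20))  # -> (480, 0, 640, 160)
```

Panning the physical device simply updates (dx, dy), so the handheld's screen behaves like a movable window hovering over the larger image, exactly as the article describes.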
From Edupage, April 9, 2003
COMPUTERS THAT MONITOR USERS
- Researchers at the Human Media Lab at Queen's University in Ontario
have designed a computer that monitors its user to help with time
management. They have designed devices that determine how much
attention a person is paying to his or her PC and the relative
importance of each message received. One device is an eye contact
sensor the computer employs to determine if the user is present and
looking at the screen to decide if and when to make contact with the
user. The lab's director, Dr. Vertegaal, said, "We now need computers
that sense when we are busy, when we are available for interruption and
know when to wait their turn--just as we do in human-to-human
interaction." BBC, 8 April 2003
http://news.bbc.co.uk/2/hi/technology/2925403.stm
From Edupage, April 4, 2003
"Interview With the KDE and Gnome UI/Usability Developers"
OSNews.com (03/10/03); Loli-Queru, Eugenia
- It is hoped that the Unix desktop will
be revolutionized when the Gnome and KDE user interfaces (UIs) become interoperable;
the Gnome project's Havoc Pennington discussed usability issues with Waldo
Bastian and Aaron J. Seigo of the KDE project. Seigo commented that KDE users
are highly desirous of configurability, and Bastian explained that improved
UI systems are supposed to retain all functions while making the most of
usability; Pennington said that a balance must be achieved between UI simplicity,
stability, and the rate of development. Seigo says an important trend that
is likely to continue is desktop "componentization," as demonstrated by the
Konqueror interface, which offers context-based functionality. "Already we
have reached the point where people, even those who are quite aware of interface
issues and design, stop distinguishing between individual applications and
instead simply experience the desktop as a coherent entity doing a lot of
great and cool things," he declared. All three participants understood the
value of having applications comply with some form of certification to support
consistency, although the general feeling was that it should be unofficial
rather than official. Two distinct approaches to improving file storage and
organization have recently come into vogue: The setup of a general architecture
and the provision of filetype-specific file management programs; Seigo saw
disadvantages in both techniques, and favored a hybrid program that resides
above the filesystem and below the user application level, while Pennington
said that Gnome is currently using the filetype-specific method. Looking
out over the next five years, Bastian thought that desktop UI technology
will undergo little change and Seigo was confident that leading-edge desktop
software will be defined by KDE, while Pennington predicted that there will
be an emphasis on streamlining. http://www.osnews.com/story.php?news_id=2997
From DSS News, April 13, 2003
Ask Dan: Can using a DSS have unintended negative consequences?
- YES! Researchers and managers often focus too much on the anticipated
positive consequences of using a specific Decision Support System.
Nadeeka Silva emailed me recently (3/14/2003) asking for some help in
answering some provocative questions about unintended negative
consequences of DSS. I'm assuming Nadeeka is taking a DSS class so I'm
broadening the assignment questions and responding publicly in this Ask
Dan! column. My answer is a starting point that leaves many
opportunities for Nadeeka and Ask Dan! readers to extend the analysis.
- The first "assignment" question states "The best DSS cannot overcome a
faulty decision maker. It cannot force a decision maker to make a
request of it, pay attention to its responses, or properly factor its
responses into the decision that is made. Do you agree with this
statement? Justify your answer."
- I generally agree with the statement. It points out that a DSS cannot
completely overcome the ability and attitude limitations of the person
who is using it. We are all "faulty decision makers". Each of us makes
some bad, wrong or incorrect decisions even when supported by a DSS. A
task specific Decision Support System is intended to increase the
quality and effectiveness of a specific decision or decisions. A
well-designed DSS has the potential to assist those decision makers who
can and do use it. A DSS can improve a decision maker's "batting
average". In some situations a decision maker learns from using a DSS
about criteria, facts or process issues that need to be considered in a
specific decision situation. DSS encourage and promote "rationality" in
decision making. The goals of a DSS are not, however, always achieved! So
what is the correct conclusion? Companies and individuals that don't
recognize the limitations of DSS and of decision makers will be
surprised when a DSS doesn't improve decision making for some users.
Even though it is an unintended negative consequence, some decision
makers may actually be hindered by a DSS and a poorly designed DSS can
negatively impact even the "best" decision maker.
- Another "assignment question" also raises the issue of unintended
consequences of using a DSS. The question states "There is a DSS danger:
the danger of overdependence on a DSS, of blindly following the DSS, or
of interacting with it in an entirely mechanical and unimaginative
fashion. Do you agree with this statement? Justify your answer."
- Many people believe the above statement is true and it seems reasonable
that these "dangers" can and do happen. I am not, however, aware of
empirical research that confirms these "dangers". We don't know how
likely "overdependence" is, or if some users will "blindly follow" or
mechanically interact with some or all types of DSS. I'm assuming
"overdependence" means a person cannot make a specific decision without
using a DSS. For many DSS, the intent is that users will become
"dependent" on using it. If decisions are improved, then the goal of
training, reinforcements and rewards should be to promote regular and
habitual use of the DSS. Managers and DSS users who recognize the
"dangers" are sensitized to them and that makes the "dangers" less
likely to occur or less likely to cause harm. DSS are intended to
support, not replace, decision makers, so users need to consciously
interact with a DSS to use it effectively. The expectation needs to be
created that the human decision maker is the ultimate authority and that
the user can "overrule" or choose to ignore analyses and
recommendations of the DSS. The "dangers" raised in this question
warrant our attention and certainly they should be studied, but they do
not justify avoiding the use of a DSS or rejecting a proposed DSS
project.
From ACM News, April 7, 2003
"Semantic Applications, or Revenge of the Librarians"
Darwin (03/03); Moschella, David
- The
supplier-centric IT industry will become customer-centric when Web services
shift to semantic applications that enable interoperability between computer
systems, thus systematizing data searches and transaction processing, writes
David Moschella, author of "Customer-Driven IT: How Users are Shaping Technology
Industry Growth." Web pioneer and World Wide Web Consortium (W3C) director
Tim Berners-Lee has long championed the concept of a semantic Web that can
interpret the context of content with greater precision. Semantic applications
are often lumped into one of two categories: Web content management and intelligent
applications. Elements common to both categories include metadata, taxonomies,
ontologies, and directly addressable, self-contained information objects,
and initiatives are underway to standardize these various terminologies in
nearly every major industry. A good portion of these projects involve business
exchanges and cross-industry entities. For example, the Defense Advanced
Research Projects Agency's DARPA Agent Markup Language (DAML) initiative
is designed to extend HTML and XML to accommodate ontologies. Improved business
interoperability can only be leveraged if customers can settle on and adopt
common ontologies and taxonomies, and then deploy them consistently and cautiously,
both for structured and unstructured data. New business skills will be needed
to take advantage of semantic systems, and the propagation of such skills
must proceed on an industry-by-industry basis. http://www.darwinmag.com/read/030103/semantic.html
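The common elements the article lists, metadata, taxonomies, and ontologies, reduce to a simple idea: two systems can interoperate when they resolve their local terms against a shared vocabulary. The sketch below is a toy illustration with an invented vocabulary; it is not DAML, OWL, or any W3C syntax.

```python
# Toy illustration (invented vocabulary, not DAML/W3C syntax): how shared
# metadata plus a common taxonomy let two systems agree on what a record
# means, even if each labels its documents differently.

taxonomy = {
    "invoice":  "financial-document",
    "receipt":  "financial-document",
    "brochure": "marketing-document",
}

def classify(record):
    """Resolve a record's declared type to the shared top-level category."""
    return taxonomy.get(record["type"], "uncategorized")

record = {"id": "doc-17", "type": "invoice", "amount": 129.95}
print(classify(record))  # -> financial-document
```

Real semantic-Web ontologies add typed relationships and inference on top of this lookup, but the interoperability payoff, a shared meaning for local labels, is the same.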
From ACM News, March 31, 2003
"HP Thinks in 3D for Web Browsing"
InternetNews.com (03/25/03); Singer, Michael
- Hewlett-Packard
has introduced a new tool for creating three-dimensional views of online
stores, similar to Doom and other video games. Called the VEDA (virtual environment
design automation) project, the application is used as a visualization database,
and allows users to stroll through virtual rooms and corridors using their
mouse. Items are arranged according to the user's chosen categories. The
application was developed by HP Labs researchers Amir Said and Nelson Chang,
who say the tool provides more interaction than a normal Web page, is visually
appealing, and provides a little bit of the feel of a brick-and-mortar store.
The back-end VEDA application is based on OpenGL and XML technologies, and
supports robust audio, video, and 3D models that can be modified, says Chang.
The researchers admit that the software's biggest hurdle is Internet bandwidth
limitations. Said says lower speeds degrade the graphics and high resolution
photographs. But the tool could run on HDTV as well as the Web because of
its advanced technologies, which are more developed compared to 2ce's CubicEye
and other 3D Web browsers. HP Labs is now planning to test the system with
such retailers as Wal-Mart. http://siliconvalley.internet.com/news/article.php/2169931
From Knowledge @ Emory, March 26, 2003
Is The Time Right For Web Services?
- Microsoft’s
Bill Gates has called web services “the holy grail,” but some industry insiders
claim there’s more hype than substance in a system that would allow computers
to talk to each other regardless of software or system. Who’s right? Experts
at Emory University’s Goizueta Business School view the current status of
the system and explain why they are cautiously optimistic. http://knowledge.emory.edu/articles.cfm?catid=7&articleid=658
Why the Large Players Stand to Win at Web Services
- In the brave new world of web services,
emerging technologies and new applications are reshaping the way that companies
do business. Not surprisingly, the large companies in the IT realm appear
to be dominating the web services technological wave. Faculty at Emory University’s
Goizueta Business School and industry experts discuss the web services opportunity
and the strategic plans of some of the largest players. http://knowledge.emory.edu/articles.cfm?catid=14&articleid=657
From DSS News, March 30, 2003
Ask Dan: What are the characteristics of a Decision Support System?
- Many faculty who teach DSS courses intend that their students will
master the skill of determining if a specific information system is a
"DSS". Gaining this skill is complicated because the concept "Decision
Support System" is used in various ways by authors, researchers and
practitioners. On March 7, 2003, Chan Chun Kit emailed asking "what are
the characteristics of a Decision Support System?" Also, on March 13,
Juliet Stephen emailed asking about the characteristics of DSS. She
noted "I'm really interested in DSS". This Ask Dan! tackles this
difficult and potentially controversial question.
DSSResources.COM, my book (Power, 2002) and this column advocate the
"big tent" or umbrella approach to defining DSS. Following the lead of
Alter (1980) and Sprague and Carlson (1982), I have concluded that
"Decision Support Systems (DSS) are a specific class of computerized
information system that support decision-making activities. DSS are
interactive computer-based systems and subsystems intended to help
decision makers use communications technologies, data, documents,
knowledge and/or models to identify and solve problems and make
decisions. Five more specific DSS types include: Communications-driven
DSS, Data-driven DSS, Document-driven DSS, Knowledge-driven DSS, and
Model-driven DSS."
- Turban and Aronson (1995) and others try to narrow the "population of
systems" called DSS. Turban and Aronson define DSS as "an interactive,
flexible, and adaptable CBIS specially developed for supporting the
solution of a nonstructured management problem for improved decision
making" (p. 77). A few paragraphs later, they broaden the definition and
define 13 characteristics and capabilities of DSS. Their first
characteristic is "DSS provide support for decision makers mainly in
semistructured and unstructured situations by bringing together human
judgment and computerized information. Such problems cannot be solved
(or cannot be solved conveniently) by other computerized systems or by
standard quantitative methods or tools". Their list is a useful starting
point.
Alter (1980) identified three major characteristics of DSS:
- 1. DSS are designed specifically to facilitate decision processes,
- 2. DSS should support rather than automate decision making, and
- 3. DSS should be able to respond quickly to the changing needs of
decision makers.
- Clyde Holsapple and Andrew Whinston (1996) identified four
characteristics one should expect to observe in a DSS (see pages
144-145). Their list is very general and provides an even broader
perspective on the DSS concept. Holsapple and Whinston specify that a
DSS must have a body of knowledge, a record-keeping capability that can
present knowledge on an ad hoc basis in various customized ways as well
as in standardized reports, a capability for selecting a desired subset
of stored knowledge for either presentation or for deriving new
knowledge, and must be designed to interact directly with a decision
maker in such a way that the user has a flexible choice and sequence of
knowledge-management activities.
- Turban and Aronson note their list is an ideal set. They state "Because
there is no consensus on exactly what a DSS is, there is obviously no
agreement on standard characteristics and capabilities of DSS". This
conceptual confusion and lack of consensus on a well-defined DSS concept
originally prompted me in 1995 to try to more systematically define and
categorize DSS. It seems impossible to conduct meaningful scientific
research about systems that can't be consistently identified and
categorized. A more consistent definition of DSS and set of
"characteristics" should also improve communications about these
important computerized systems with students and DSS practitioners.
- So what is a characteristic of a DSS? In this context, it is an
observable feature, peculiarity, property, or attribute of ANY type of
Decision Support System that differentiates a DSS from another type of
computerized system. Why do we develop lists of characteristics and
attribute lists? In general, such lists can identify an object as part
of a class or group of similar objects; it helps us in recognition and
identification!
- The following is my list of the characteristics of a DSS; please
comment!
CHARACTERISTICS OF A DECISION SUPPORT SYSTEM
- 1. DSS facilitate and support specific decision-making activities and/or
decision processes.
- 2. DSS are computer-based systems designed for interactive use by
decision makers or staff users who control the sequence of interaction
and the operations performed.
- 3. DSS can support decision makers at any level in an organization. They
are NOT intended to replace decision makers.
- 4. DSS are intended for repeated use. A specific DSS may be used
routinely or used as needed for ad hoc decision support tasks.
- 5. DSS provide specific capabilities that support one or more tasks
related to decision-making, including: intelligence and data analysis;
identification and design of alternatives; choice among alternatives;
and decision implementation.
- 6. DSS may be independent systems that collect or replicate data from
other information systems OR subsystems of a larger, more integrated
information system.
- 7. DSS are intended to improve the accuracy, timeliness, quality and
overall effectiveness of a specific decision or a set of related
decisions.
References
- Alter, S. Decision Support Systems: Current Practice and Continuing
Challenges. Reading, MA: Addison-Wesley, Inc., 1980.
- Holsapple, C. W. and A. B. Whinston. Decision Support Systems: A
Knowledge-Based Approach. Minneapolis, MN: West Publishing, Inc., 1996.
- Power, D. J. Decision Support Systems: Concepts and Resources for
Managers. Westport, CT: Greenwood/Quorum Books, 2002.
- Sprague, R. H. and E. D. Carlson. Building Effective Decision Support
Systems. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1982.
- Turban, E. and J. E. Aronson. Decision Support Systems and Intelligent
Systems (5th edition). Upper Saddle River, NJ: Prentice-Hall, Inc., 1995.
From ACM News, March 28, 2003
"Your Brake Pads May Have Something to Say (By E-Mail)"
New York Times (03/27/03) P. F6; Austen, Ian
- Car owners, fleet operators, automotive
manufacturers, and dealers could be alerted to potential vehicle problems
early thanks to in-vehicle computers that collect data from sensor arrays
and transmit them to a central database. Such systems normally jettison such
data except in the event of collision, but researchers postulate that the
information could be recycled to support vehicle maintenance. It is not unusual
for computers in racecars to track vital readings of vehicle systems and
relay them to pit crews so that problems can be detected and repaired quickly,
and theoretically such systems could be easily retooled for consumer vehicles,
and equipped to notify drivers of incipient problems by cell phone or email.
DaimlerChrysler Research and Technology North America CEO Akhar Jameel says
the real challenge is seamless coordination of the various computers in a
car, which are often hobbled by different software and communication barriers.
To solve this problem, the Society of Automotive Engineers is striving to
standardize car computer systems. Another challenge is determining what kind
of operational readings are important and how often such data should be recorded:
For instance, Jameel's research team thinks the activation of antilock brake
systems (ABS) should be tracked, since brake pads could be worn out faster
as a result. If a system records unusually high ABS usage, it could suggest
to the owner that the vehicle be looked over by a mechanic, or call a dealership's
service department to set up an appointment and order replacement parts.
Sally Greenberg of the Consumers Union is concerned that the data gathered
by such telematics systems could be misused and violate the owner's privacy;
to prevent this, she argues that consumers should have final say over what
kind of information should be transmitted. http://www.nytimes.com/2003/03/27/technology/circuits/27next.html
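The maintenance logic Jameel's team describes, tracking ABS activations and flagging the vehicle when usage suggests accelerated brake-pad wear, could in its simplest form look like the sketch below. The threshold, function names, and advisory text are all invented for illustration; they are not from any real telematics system.

```python
# Hypothetical sketch of the maintenance logic described in the article:
# count ABS activations per interval and flag the vehicle for service when
# usage suggests accelerated brake-pad wear. The threshold is invented.

ABS_EVENTS_PER_WEEK_THRESHOLD = 25  # assumed limit, not a real spec

def check_brake_wear(abs_events_this_week):
    """Return a maintenance advisory when ABS usage is unusually high."""
    if abs_events_this_week > ABS_EVENTS_PER_WEEK_THRESHOLD:
        return "High ABS usage detected: schedule a brake inspection."
    return None

print(check_brake_wear(40))  # advisory string
print(check_brake_wear(5))   # None: no action needed
```

In a deployed system the advisory would be routed to the driver by cell phone or email, or straight to a dealership's scheduling system, as the article suggests.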
"Making Machines See"
Photonics Spectra (03/03) Vol. 37, No. 3, P. 80; Hogan, Hank
- The
long-term target of machine vision vendors and developers is to enable machines
to see in three dimensions, while near-term goals include seeing around corners
using mirrors and prisms (a key component of semiconductor part inspection)
and seeing in color through pixel integration and other methods. As machine
vision systems shrink and become less complex, they are generally transitioning
to a CMOS-based architecture and boasting larger sensors due to increasing
pixel density, a trend that not all vendors have decided to exploit. American
vendors such as Electro Scientific Industries and European companies such
as Machine Vision Systems Consultancy are embracing CMOS technology. Machine
Vision Systems founder Don Braggins says interest in CMOS is driven by the
technology's potential research applications and the abundance of small-scale
manufacturing. On the other hand, Redlake MASD is one company that is sticking
with CCD-based sensors, although higher bandwidth and storage demands will
be a problem. Machine vision systems, despite their advancements, still have
drawbacks, such as motion-induced blurring, which could be resolved in a
number of ways, including sensor fusion, the addition of gyroscopes, or embedding
fiducial marks in the environment captured by the camera. Investing machine
vision systems with the ability to process 3D information is the goal of
many commercial and academic initiatives, which are researching such possibilities
as stereoscopic vision and laser probes. However, one of the most daunting
challenges will be enabling machine vision systems to mimic a person's ability
to extract 3D data when one eye is closed.
From ACM News, January 15, 2003
"Business Apps Get Bad Marks in Usability"
CNet (01/14/03); Gilbert, Alorie
- Difficult-to-use business applications impede many software projects and cost companies
millions of dollars, according to Forrester Research. Forrester says many
enterprise resource planning (ERP) applications are too difficult for ordinary
users, and that many straightforward tasks are hard to accomplish using the
programs. ERP helps automate everyday work duties, such as aspects of human
resource management, order taking, and accounting. The study included ERP
applications from 11 leading vendors. The Forrester analysts involved did
not receive any special training in the programs before testing, but attempted
only what they considered normal, upfront tasks. Those included downloading
updates and software patches, changing security profiles, and adjusting the
program to reflect changes in company organization. Analyst Laurie Orlov
said ERP customers should demand better usability from their software vendors,
especially in the current constrained budget environment. Large firms often
spend millions of dollars implementing these systems, and anywhere from 10
percent to 15 percent of that amount usually goes toward software training
for personnel. Poor usability hinders the effectiveness of ERP projects because
workers spend more time doing things, require more training, and in some
cases abandon the electronic process altogether. http://news.com.com/2100-1017-980648.html
"Games of Infinite Possibilities"
Raleigh News & Observer Online (01/15/03); Cox, Jonathan B.
- North Carolina State University assistant
professor of computer science R. Michael Young is researching artificial
intelligence that allows evolving storylines in computer games. Young says
that instead of following programmers' expectations of how a game player
reacts, these new games study users and change plots to fit their style.
Still, he says there must be some tension between what the player expects
and what makes for a better story, so not everything is entirely predictable.
Such computer games would build upon a model of storytelling Young is refining.
His group recently added artificial intelligence capabilities to the first-person
shooter game Unreal Tournament by linking it to servers running their artificial
intelligence programs. The technology has applications beyond gaming as well,
since it would enable interactive learning methods. Young says that academic
gaming development sometimes crosses the commercial sector, but that the
pressures to meet a product deadline do not accommodate academia's need to
study larger problems. Young became interested in his current area of study
when considering how to have computers solve complex problems, then explain
the solution to humans with understandable instructions. He says the mechanical
aspects of robotics are advancing quickly so that humanoid machines can navigate
themselves and ascend stairs. He expects that artificial intelligence will
slowly pervade people's everyday lives, beginning with traditional computer
devices that can communicate ideas that were not preprogrammed.
From ACM News, January 10, 2003
"Computer Linguists Mix Language, Science"
Dallas Morning News Online (01/05/03); Rivera, Patricia V.
- The
job of computer linguists involves teaching computers to comprehend spoken
language, speak, and translate text, according to Dr. Gary F. Simons of SIL
International, formerly the Summer Institute of Linguistics. The Internet
has spurred interest in text and speech processing, which has created opportunities
for professionals with computer linguistics expertise. Demand for computer
linguists is especially high in companies that publish multiple-language
catalogs and are seeking ways to reduce spending while still maintaining
quality. Mary Pope of inlingua Dallas explains that more and more customers
want their material to be available in multiple formats and software packages, not
just multiple languages. In addition to text-to-speech, speech recognition,
and machine translation programs, computer linguists also focus on developing
applications that enable computers to answer questions in natural language,
conduct Web-based information searches, and help people learn foreign languages.
Another potentially lucrative application Simons mentions is a program that
automatically screens email. The holy grail for many computer linguists is
to teach computers to understand natural language, but scientists are worried
that such a development could lead to the replacement of telephone operators,
airline reservation agents, and other service professionals. The ideal computer
linguist would be familiar with linguistic theory and artificial intelligence,
proficient in a procedural programming language, and skilled in the areas
of natural language processing or neural networks.
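The email-screening application Simons mentions can be illustrated, in its most naive form, as a rule-based keyword filter. The keyword lists and category names below are invented; production systems would use real natural language processing rather than bare word matching.

```python
# Minimal sketch of an automatic email screener of the kind Simons
# mentions: a rule-based filter using invented keyword lists. Real
# systems would use natural language processing, not bare keywords.

URGENT_WORDS = {"outage", "deadline", "urgent"}
JUNK_WORDS = {"lottery", "winner", "free"}

def screen(subject):
    """Classify a message subject as junk, urgent, or normal."""
    words = set(subject.lower().split())
    if words & JUNK_WORDS:
        return "junk"
    if words & URGENT_WORDS:
        return "urgent"
    return "normal"

print(screen("Server outage in building 3"))  # -> urgent
print(screen("You are a lottery winner"))     # -> junk
```

The gap between this sketch and true language understanding, resolving context, negation, and paraphrase, is exactly the "holy grail" the article describes.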
From DSS News, January 5, 2003
Ask Dan:
Where can I find or download a DSS demo?
What vendors have decision
support demonstration software?
- Requests for information about DSS demos are a frequent query to Ask Dan.
The answer changes, however, depending on what vendors are doing at
their web sites and with their products. Recently, Peter Keenan posted a
request on ISWorld for information on DSS demos, Hector Diaz wrote
"Where can I download a DSS demo?" and Bassem Farouk wrote "where can I
download a free trial for GDSS software?" So I'll try to provide a
starting point in this response. Some vendors will be missed in this
response and they should contact me with details of their free demos
and/or downloads. Readers may also want to check the DSS Vendor Pages at
http://dssresources.com/vendorlist.
- Buyer Beware! If you are considering downloading demonstration software,
keep in mind that the software may use significant resources on your
computer. Also, the size of some demos often results in slow downloads.
Demonstration software downloads have limitations and restrictions so
read the materials at the web site providing the software carefully. You
will usually need to complete an on-line request form and you may need
to speak with a sales representative.
- A biased sampling. It is hard to find good demonstration downloads
related to building data-driven DSS. A number of vendors do, however,
showcase web-based demonstrations of DSS built using their software. The
following list includes downloads for some development tasks like ETL
and some web-based demonstrations of data-driven DSS. Downloads of
single user development tools for building knowledge-driven and
model-driven DSS are much easier to find. Finding demos for
communications-driven and Group DSS is also a problem because by design
such software is a multi-user, server based application. The only GDSS
demo download that I've found is at Meetingworks.com. I haven't tried
downloading the 8-participant license of Meetingworks for Windows, but
I'm sure the company hopes you'll try it and then upgrade to
Meetingworks Connect and Meetingworks InternetEdition. Also, please note
I have not tried most of the demonstration downloads listed below ... so
send me feedback if you try any of them ... In general, I think
Microsoft Excel is the best introductory development environment for
learning about building both data-driven and model-driven DSS.
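The kind of model-driven DSS a spreadsheet supports is essentially what-if analysis over a small quantitative model. The sketch below shows a spreadsheet-style break-even model in code form; every figure in it is invented for illustration.

```python
# A spreadsheet-style what-if model, the sort of model-driven DSS the
# column suggests prototyping in Excel. All figures are invented.

def break_even_units(fixed_cost, price, variable_cost):
    """Units that must be sold before profit turns positive."""
    return fixed_cost / (price - variable_cost)

# What-if analysis: how the break-even point shifts as price changes.
for price in (20.0, 25.0, 30.0):
    units = break_even_units(fixed_cost=50_000.0, price=price,
                             variable_cost=10.0)
    print(f"price ${price:.2f} -> break even at {units:.0f} units")
```

In Excel the same model is one formula plus a data table; the decision support comes from letting the user vary the inputs interactively and see the consequences.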
Check the following vendors for decision support related demos:
- 1. Actuate Software Corporation - Actuate Reporting System and
e.Spreadsheet Designer Demo,
http://www.actuate.com/productdemo/index.asp
- 2. Advanced Software Applications - ModelMAX Plus is a modeling,
scoring, and profiling analytical tool,
http://www.asacorp.com/HTML/Products/products.htm
- 3. arcplan, Inc. - dynaSight, there are several web demos available
showing the utilization of dynaSight in different computing
environments,
http://www.dynasight.com/demos_live.html
- 4. ArcView MapObjects 2.1, download a 90-day evaluation version at
http://www.esri.com/company/free.html.
- 5. Attar Software - XpertRule Knowledge Builder, http://www.attar.com/samples/kb_setup.htm, and XpertRule Miner, http://www.attar.com/samples/xm_setup.htm
- 6. Bayesia S.A. - BayesiaLab,
http://www.bayesia.com/GB/produits/bLab/BlabPresentation.php
- 7. Business Objects - WebIntelligence, a query, reporting, and online
analytical processing tool, an online demo, http://www.businessobjects.com/products/webi/
- 8. Brio.Enterprise and Brio.Report self-running demonstrations at
http://www.brio.com/products/demo/index.html
- 9. Business Forecasting Systems, Inc. - Forecast Pro,
http://www.forecastpro.com/downloads_&_demos.htm
- 10. Cygron Pte. Ltd. - DataScope, a data mining and decision support
tool, http://www.cygron.com/TrialDownload.html
- 11. Databeacon Inc. - Databeacon 5.3, online demo and download an
evaluation version, http://www.databeacon.com/
- 12. Decisioneering - Crystal Ball 2000 simulation software demo, an
Excel add-in, http://www.decisioneering.com/dss/
- 13. Decision Systems, Inc. - decisionport,
http://www.decisionsystems.com/dsiport.htm
- 14. DIMENSION 5 - miner3D.excel,
http://www.miner3d.com/m3Dxl/download.html
- 15. Enterprise Solutions, Inc. - MeetingWorks at http://www.Meetingworks.com
- 16. Entopia, Inc. - Quantum Suite,
http://www.entopia.com/products_pg3.3.htm
- 17. Expert Choice at http://www.expertchoice.com (free EC2000 trial version).
- 18. Insightful Corporation - Demos include Insightful Miner, a data
mining workbench, and example web applications,
http://www.insightful.com/products/demos.asp
- 19. Infoharvest - Criterium Decision Plus (an AHP implementation) at
http://www.infoharvest.com (free student version of CDP 3.0).
- 20. KMtechnologies - work2gether, electronic repository for company
documents and ideas, web-based flash demo only works in Internet
Explorer, http://www.kmtechnologies.com/en/Products/flashdemo.asp
- 21. Logic Technologies - BestChoice3 at http://www.logic-gem.com/bc.htm.
- 22. Lumina - ANALYTICA 2.0 from http://www.lumina.com, try it free for 30 days.
- 23. Megaputer Intelligence, Inc - PolyAnalyst Pro, suite of data mining
algorithms, and TextAnalyst, http://www.megaputer.com/php/eval.php3
- 24. Palisade Corp. - DecisionTools Suite, @RISK, PrecisionTree, TopRank,
BestFit and RISKview, http://www.palisade.com/html/trial_versions.html
- 25. Pilot Software at http://www.pilotsw.com has a walkthrough online of its
Balanced Scorecard system.
- 26. Treeage - DATA 4.0 trial download at www.treeage.com.
From ACM News, January 3, 2003
"Getting Smart About Predictive Intelligence"
Boston Globe (12/30/02) P. C1; Kirsner, Scott
- Boston
Globe columnist Scott Kirsner expects the major technology debate of 2003
to revolve around the use of predictive intelligence, which is being employed
in the private sector for marketing purposes, but, more importantly, lies
at the heart of the Total Information Awareness system being developed by
the Defense Advanced Research Projects Agency (DARPA). The purpose of the
Total Information Awareness system is to root out terrorists and prevent
terrorist acts by focusing on suspicious online transactions, but organizations
such as the Electronic Privacy Information Center and the Electronic Frontier
Foundation argue that it is in fact little more than an unconstitutional
public surveillance system that supersedes citizens' privacy rights. There
are also objections over the choice of retired Navy Admiral and Iran-Contra
scandal figure John Poindexter to lead the Total Information Awareness project.
Meanwhile, companies that deal in predictive intelligence software and services
see a beneficial side in the business world: The technology helps them collate
profiles of customers in order to more effectively market products. Still,
Genalytics founder Doug Newell acknowledges that projects such as Total Information
Awareness should set limits on what kinds of data can be collected, how long
that data should be retained, and what should be done with it. He adds that
Poindexter's project is unworkable, because there have been so few U.S.-based
terrorist incidents on which to build reliable terrorist profiles. If the
system fails to achieve its primary goal, then it might be used by local
law enforcement to single out people that commit minor crimes such as parking
violations. http://digitalmass.boston.com/news/globe_tech/at_large/2002/1230.html
From ACM News, January 3, 2003
"Realer Than Real"
Nikkei Weekly (12/23/02) Vol. 40, No. 2061, P. 3; Naito, Minoru
- Head-mounted display (HMD) technologies
are an example of "mixed reality" systems that promise to seamlessly integrate
computer-generated data and imagery with the real world, and Japan is leading
the charge in this area: Japanese hospitals have made the technology an essential
tool in surgical procedures, while other uses for HMDs are being found in
the automotive design, entertainment, and disaster preparation sectors. Toppan
Printing, in conjunction with Takashi Kawai of Waseda University, is working
on a mixed reality system that combines HMDs with the Global Positioning
System to offer 3D graphical overlays that show visitors to museums and archeological
digs what ruins must have looked like in ancient times. Meanwhile, Canon
has partnered with SGI Japan to develop a mixed reality product for automotive
manufacturers and parts suppliers that allows potential customers to see
detailed, 3D images of cars as well as experience simulated travel. Communications
Research Laboratory has built a facility designed to simulate natural disasters--earthquakes,
volcanic eruptions, etc.--so that local and national government officials
can better prepare for such catastrophes and implement measures that will
significantly reduce the loss of life and property. Experts such as Shunichi
Kita of the Nomura Research Institute forecast that over the next 10 years
the HMD market will surge in much the same way the cell phone market did.
Mainstream adoption of HMDs and mixed reality systems will depend on researchers
developing lighter, more comfortable products. Optical companies are supplying
hardware for such efforts. Minolta, for example, has developed a 25-gram
holographic HMD that can be attached to the frame of the user's glasses.
From ACM News, January 6, 2003
"Interface Gets the Point"
Technology Research News (01/08/03); Patch, Kimberly
- Scientists
at Pennsylvania State University and Advanced Interface Technologies are
developing a computer interface that can recognize the relationship between
prosody and gestures in an attempt to make human-computer interaction more
natural. Penn State researcher and Advanced Interface Technologies President
Rajeev Sharma says the project is a formidable challenge, and notes that
"The same gesture...can exhibit different meanings when associated with a
different spoken context; at the same time, a number of gesture forms can
be used to express the same meaning." He says that computers can recognize
isolated gestures with as much as 95 percent accuracy, and adds that the
precision of gesture recognition systems was raised from 72 percent to approximately
84 percent when the system took prosody into consideration. It is no mean
feat to detect correspondence between visual and audio signals, while speech's
phonological information and intonational characteristics increase the difficulty,
according to Sharma. He notes that a more natural human-computer interface
such as the one his team is working on could be very useful for applications
such as video games, crisis management, and surgery. An important step in
the system's development was the inclusion of speech pitch and hand velocity
into the Hidden Markov Model, which boosted the scientists' understanding
of the connection between prosody and gestures. The researchers are currently
employing their technique in a geographic information system prototype that
incorporates ceiling-attached microphones, cameras that track user gestures,
and a large screen display. Sharma says testing how well the system can interact
with people is the next step.
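The reported gain from prosody can be pictured with a small sketch: quantize each time step's hand velocity and speech pitch into a discrete symbol, then score the symbol sequence against per-gesture Hidden Markov Models with the forward algorithm. The models, symbols, and probabilities below are illustrative inventions, not the Penn State system.

```python
# Minimal discrete-HMM gesture scoring sketch (assumed models, not the
# actual Penn State / Advanced Interface Technologies system).

def forward_likelihood(obs, pi, A, B):
    """P(obs | model) via the HMM forward algorithm over discrete symbols."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for t in range(1, len(obs)):
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs[t]]
                 for i in range(n)]
    return sum(alpha)

# Observation symbols: 0 = low hand velocity / flat pitch,
#                      1 = high hand velocity / rising pitch.
# Model "point": a burst of symbol 1, then settling.
point = dict(pi=[1.0, 0.0],
             A=[[0.3, 0.7], [0.1, 0.9]],
             B=[[0.1, 0.9], [0.8, 0.2]])
# Model "rest": mostly low-activity symbols throughout.
rest = dict(pi=[1.0, 0.0],
            A=[[0.9, 0.1], [0.5, 0.5]],
            B=[[0.9, 0.1], [0.7, 0.3]])

def classify(obs):
    """Pick the gesture model that assigns the sequence the highest likelihood."""
    scores = {name: forward_likelihood(obs, **m)
              for name, m in [("point", point), ("rest", rest)]}
    return max(scores, key=scores.get)
```

A burst of high-velocity, rising-pitch symbols scores higher under the "point" model, while a flat sequence scores higher under "rest"; the prosody channel simply enriches the observation alphabet.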
"Feeling Blue? This Robot Knows It"
Wired News (01/01/03); Knapp, Louise
- A research team at Vanderbilt University's
Department of Mechanical Engineering is developing a robot equipped with
sensors that are used to determine people's emotions by picking up physiological
cues. The machine is designed to approach a person and offer assistance when
it discerns that the person is in distress. The scientists believe the robot
will be well-suited to perform as an assistant for military personnel under
battlefield conditions, although getting people to accept it will be a major
challenge. The robot can record a person's heartbeat with an electrocardiogram,
notice fluctuations in perspiration via a skin sensor, measure blood pressure,
identify muscular stress in the brow and jaw with an electromyography detector,
and read temperature. Algorithms are used to translate these readings into
a format that the robot can comprehend, explains Vanderbilt researcher Nilanjan
Sarkar. He adds that this data can be processed in real time. Office of Naval
Research (ONR) corporate communications officer John Petrik says that his
organization, which co-sponsors the Vanderbilt project, thinks military robot
aides could become smarter thanks to the researchers' work. However, Carnegie
Mellon University's Takeo Kanade cautions that "we are at a very primitive
stage of understanding the relation between the internal states--what is
observable--and human emotion." http://www.wired.com/news/technology/0,1282,56921,00.html
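The fusion step Sarkar describes can be sketched as normalizing each physiological channel against a per-person baseline and combining the deviations into one distress score the robot can act on. The channels, baselines, weights, and threshold below are illustrative, not the Vanderbilt algorithms.

```python
# Hedged sensor-fusion sketch: weighted sum of fractional deviations
# above baseline. All numbers are assumed for illustration.

BASELINES = {"heart_rate": 70.0, "skin_conductance": 5.0,
             "blood_pressure": 120.0, "emg": 10.0}
WEIGHTS   = {"heart_rate": 0.4, "skin_conductance": 0.3,
             "blood_pressure": 0.2, "emg": 0.1}

def distress_score(readings):
    """Combine per-channel deviations above baseline into one score."""
    score = 0.0
    for channel, value in readings.items():
        deviation = max(0.0, value - BASELINES[channel]) / BASELINES[channel]
        score += WEIGHTS[channel] * deviation
    return score

def should_offer_help(readings, threshold=0.15):
    """Trigger the robot's offer of assistance above the distress threshold."""
    return distress_score(readings) > threshold
```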
From ACM News, March 24, 2003
"Why We Should Lose the Back Button"
Computerworld New Zealand (03/18/03); Bell, Stephen
- Zanzara owner Richard Mander counsels
vendors and user groups on user interface design. He says Web designers should
stop being dependent on the Web browser's back button and instead incorporate
more obvious navigation links and indicators on Web sites. He says the action
of the back button is too vague, since it can go to a page on the site or
to another site altogether, and that the user should have an option of where
he or she wants to go specifically. In addition, he says, systems should
inform users upfront about what its different parts are and how they can
be used instead of leaving them to find out on their own. Icons should provide
a hint as to what they give access to or what function they perform, similar
to how buttons on a device are labeled, says Mander. As it is, many Web pages
rarely define page links, application links, and static areas clearly. For
intricate processes, Mander believes users should be guided step-by-step
much like software program "wizards." And if something fails to work properly,
the user should be able to halt the job in such a way that minimally impacts
other phases of the process. http://www.pcworld.com/news/article/0,aid,109856,00.asp
"Knowledge Managing"
InfoWorld (03/17/03) Vol. 25, No. 11, P. 1; Angus, Jeff
- The economic downturn has cleared
a path for the rethinking of knowledge management (KM) and how it can be
incorporated into the enterprise. Now companies face the daunting challenge
of investing in KM and reorganizing their workflow and processes in order
to accommodate it. Past KM projects were characterized by numerous failures
attributed to companies' inability to integrate stored knowledge and human
expertise, concludes Xerox Global Services CTO Bob Bauer. "They got lost
because they got focused on trying to create transactions of data with data
when the fact is that any decision process or action based on valuable transactions
involves people," he explains. More recent KM projects are being driven by
KM's maturity as well as the entrenchment and proliferation of technologies
such as XML and better recognition technology. Open Text's Anik Ganguly believes
that more robust integration is an important component of KM, in that it
deconstructs input, which is perhaps the single biggest hindrance to KM implementations.
Forthcoming technologies that promise great advancements in KM systems' primary
functions--gathering, organizing, refining, and distributing--should spur
KM adoption. Voice-mining technology will be especially crucial for the knowledge
gathering component, since experts reckon that 75 percent of all essential
corporate knowledge is relayed verbally; improved pattern-recognition initiatives
from ClearVision, Autonomy, and others will benefit organization; and Bauer
says better pattern recognition and further XML adoption are helping the
refinement function. http://www.infoworld.com/article/03/03/14/11km_1.html?s=feature
"The Meaning of Computers and Chess"
IEEE Spectrum (03/03); Ross, Philip
- The
last three major human vs. computer chess matches ended in a draw, thus demonstrating
the continued refinement of software and human players' inability to modify
their strategies against such programs; it also signifies that either computer
intelligence is improving, or that playing chess may not necessarily be a
sign of true intelligence. It is a significant development in the field of
artificial intelligence research, which was the reason why a chess-playing
computer program was conceived in the first place. Chess master Garry Kasparov
accused IBM, the creator of his 1997 computerized opponent Deep Blue, of
cheating, arguing that only a person could have prepared to exchange pawns
the way Deep Blue did. Deep Blue designer Feng-hsiung Hsu, in his book "Behind
Deep Blue: Building the Computer that Defeated the World Chess Champion,"
counters that the software was programmed, in consultation with Grandmaster
Joel Benjamin, to consider files that were not just open, but potentially
open, and let its if-this-then-that algorithm dictate the move based on those
variables. Electrical engineer Claude Shannon, who proposed the chess-playing
algorithm over half a century ago, conceived a search function as the first
step, one that generates all possible move sequences to a certain depth,
as determined by the computer's speed and memory. The rub is that a program
with unrestricted search powers can flawlessly play with an assessment function
that can only discern between checkmate and draw, while a program with superior
evaluation powers would not need to look even one move ahead. The Israeli
chess program Kasparov battled in February, Deep Junior, was not as powerful
as Deep Blue, but it had the advantage of greater knowledge of the game,
and thus understood chess better, according to Israeli grandmaster Boris
Alterman; in addition, Deep Junior distinguished itself by being willing
to sacrifice materials in order to reach intangible goals, such as freedom
of movement or making its opponent's king more vulnerable to attack.
http://www.spectrum.ieee.org/WEBONLY/wonews/mar03/chesscom.html
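Shannon's search-versus-evaluation tradeoff can be sketched as plain minimax over an abstract game tree: the deeper the search, the cruder the leaf evaluation can afford to be. The toy tree below is an invented example, not a chess engine.

```python
# Minimax sketch of Shannon's search function: expand move sequences to
# a fixed depth, then fall back on the evaluation function at the leaves.

def minimax(node, depth, maximizing, children, evaluate):
    """Search `depth` plies; evaluate leaves or depth-limited nodes."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    scores = [minimax(k, depth - 1, not maximizing, children, evaluate)
              for k in kids]
    return max(scores) if maximizing else min(scores)

# Toy game tree: tuples are internal nodes, integers are leaf scores.
tree = ((3, 5), (2, 9))

def children(node):
    return node if isinstance(node, tuple) else ()

def evaluate(node):
    return node if isinstance(node, int) else 0

best = minimax(tree, 2, True, children, evaluate)
```

With full-depth search even this trivial evaluation (the raw leaf score) suffices, which is exactly the tradeoff described above.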
From ACM News, March 21, 2003
"Recent Advances in Computer Vision"
Industrial Physicist (03/03) Vol. 9, No. 1, P. 18; Piccardi, Massimo; Jan, Tony
- Computer vision technology is being
developed to usher in sophisticated, human-centered applications for human-computer
interfaces (HCIs), augmented perception, automatic media interpretation,
and video surveillance. Computer vision is incorporated into HCIs on the
premise that computers can respond more naturally to human gestures via camera;
notable achievements in this sector include a computer that makes its screen
scroll up or down by following users' eye movements, and a downloadable application
that tracks the movements of the user's nose. Cameras could also act as peripherals
in smart houses, triggering various functions--lighting, temperature control,
and so on--in response to a human presence. Augmented perception tools are
designed to enhance the normal sensory faculties of people, and one interesting
development in this field is The vOICe from Philips Research Laboratories.
The vOICe uses a camera to accompany people and produces sounds to alert them
to the position and size of objects in their path--a very useful tool for
visually-impaired users. Computer vision is also aiding security personnel
through video surveillance systems programmed to categorize objects--cars,
people, etc.--and track their trajectories in order to determine anomalous
or suspicious behavior. One example is a system designed to single out suspicious
pedestrian behavior in parking lots, which was developed at Sydney's University
of Technology; the system first subtracts an estimated "background image"
to distinguish moving objects from static objects, identifies people based
on form factor, takes samples of each person's speed every 10 seconds to
establish a behavioral pattern, and classifies that behavior with a neural
network classifier. Computer vision utilized for automatic media interpretation
helps users quickly comb through videos for specific scenes and shots: Carnegie
Mellon University's Face Detection Project, for instance, can pinpoint images
containing faces, while the MPEG-4 standard supports consistent visual quality
in compressed digital video by assigning objects in a scene varying degrees
of quality. http://www.aip.org/tip/INPHFA/vol-9/iss-1/p18.html
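The parking-lot pipeline's first and third steps can be sketched directly: subtract an estimated background image to find moving pixels, then flag tracks whose sampled speed suggests loitering. Frames here are plain grayscale grids and all thresholds are assumptions, not the University of Technology, Sydney system.

```python
# Illustrative background subtraction and speed-sampling sketch
# (invented thresholds; a real system would update its background
# model over time and use actual camera frames).

def foreground_mask(frame, background, threshold=25):
    """Binary mask: 1 where a pixel differs from the background estimate."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def suspicious_speed(positions, interval_s=10.0, loiter_below=0.2):
    """Flag a track whose average speed (units/s) over positions sampled
    every `interval_s` seconds falls below a loitering threshold."""
    if len(positions) < 2:
        return False
    dist = sum(abs(b - a) for a, b in zip(positions, positions[1:]))
    speed = dist / (interval_s * (len(positions) - 1))
    return speed < loiter_below

background = [[10, 10, 10], [10, 10, 10]]
frame      = [[10, 90, 10], [10, 95, 12]]
mask = foreground_mask(frame, background)
```

In the article's system, the resulting behavioral pattern is then handed to a neural network classifier; the threshold test above stands in for that final step.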
Some guidelines on web design:
What Makes a Great Web Site?
From ACM News, March 19, 2003
"Flu Shots for Computers"
Economist (03/15/03) Vol. 366, No. 8135, P. 8
- Researchers
have applied computing to biology to map the human genome, but now biology
is being applied to computing to fight electronic viruses and worms. Sana
Security has borrowed the concept of the human immune system in its effort
to create software that protects computers from security breaches. Sana's
Primary Response software, based on research done at the University of New
Mexico in Albuquerque, is designed to work similar to the way the body's
natural immune system fights off illness by creating a profile of itself.
Primary Response monitors programs running on computers such as remote-login,
Web, email, and database servers, looking at patterns of system access requests
to build up the profile. This method is distinct from others that rely on
built-in assumptions of what an attack will look like. The software considers
a deviation from the profile an attack, and moves to block all file access
associated with a program under attack, protecting files from being stolen,
modified, and deleted, and stopping new programs from being launched. Primary
Response also does a forensic investigation of file-access details, log files,
and open network connections to determine what happened. Besides hacker break-ins,
Sana Security founder Steven Hofmeyr says the system also alerts administrators
to malfunctions and other cases of irregular behavior. Customers who use
Sana's solution report only a few false alarms each month.
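The profile-then-detect idea can be sketched in the spirit of the University of New Mexico research behind Primary Response: record short windows (n-grams) of system-access events during normal operation, then flag any window the profile has never seen. The event names and window size below are illustrative.

```python
# Hedged sketch of immune-system-style anomaly detection: a profile of
# normal behavior is the set of all short event windows observed during
# training; unseen windows in a new trace are treated as anomalies.

def build_profile(trace, n=3):
    """Profile = set of all length-n windows seen in a normal trace."""
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

def anomalous_windows(trace, profile, n=3):
    """Return the windows in a new trace absent from the profile."""
    return [tuple(trace[i:i + n])
            for i in range(len(trace) - n + 1)
            if tuple(trace[i:i + n]) not in profile]

# Invented example traces of file-access events:
normal = ["open", "read", "write", "close", "open", "read", "close"]
profile = build_profile(normal)
attack = ["open", "read", "exec", "write", "close"]
```

Because the profile encodes what *this* machine normally does rather than what attacks look like, novel attacks can still trip the detector, which matches the article's contrast with signature-based methods.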
From ACM News, March 17, 2003
"Can Sensemaking Keep Us Safe?"
Technology Review (03/03) Vol. 106, No. 2, P. 42; Waldrop, M. Mitchell
- The Sept. 11 attacks created a demand
to leverage the United States' strength in analytical technology and networking
to build what the Markle Foundation's Task Force on National Security in
the Information Age calls a virtual analytic community threaded together
by "sensemaking" technologies designed to mine vast quantities of data for
signs of possible terrorist activity. The first step in building a virtual
intelligence system is to establish an online forum where local officials
can share and analyze intelligence, such as a virtual private network secured
by standard encryption; the next major component is data-sharing between
federal, state, and local agencies. The technical limitations of such a process
could be partially alleviated through distributed computing, but information
stored in antiquated databases can be difficult to access due to incompatible
formats, while the organizations that control the databases are reluctant
to disclose so-called "sensitive" information to outsiders. Oracle thinks
all data should be changed to a standard format and stored in a common data
warehouse, but IBM thinks "federated" information will solve the sensitive
data problem as well as the compatibility problem: With such a system, a
data owner can augment his database with a "wrapper" that allows outsiders
to access portions of his data while keeping sources, methods, and other
sensitive information secret. The key difficulty is extrapolating information
important to security from the gargantuan database of unstructured information,
and Stratify is one of a number of startups funded by the CIA's In-Q-Tel
firm working on a solution. Stratify's Discovery System is programmed to
automatically create a taxonomy of the received information so that it can
be organized into categories representing specific subject matter and concepts,
notes Stratify CTO Ramana Venkata. Meanwhile, U.S. intelligence agencies
are making use of i2's Analyst's Notebook visualization toolkit, which can
map out the progression of events while connecting related transactions,
people, and entities via link analysis charts that also include supporting
evidence tagged with associated data--sources, security levels, reliability,
etc.
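The "wrapper" approach IBM favors can be sketched as a filter the data owner controls: outsiders query through it and receive only the fields the owner has agreed to share, while sources and methods stay private. The field names below are invented for the example.

```python
# Illustrative data-federation wrapper: expose only shareable fields of
# each record (assumed field names, not a real agency schema).

SHAREABLE_FIELDS = {"name", "date", "location"}

def wrap_record(record):
    """Return only the fields the owner has agreed to share."""
    return {k: v for k, v in record.items() if k in SHAREABLE_FIELDS}

raw = {"name": "J. Doe", "date": "2003-03-01", "location": "JFK",
       "source": "informant-17", "method": "wiretap"}
shared = wrap_record(raw)
```

This keeps the underlying database in its native format, which is why the article presents federation as answering both the compatibility problem and the sensitive-data problem.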
From Edupage, March 17, 2003
SPELLING AND GRAMMAR CHECKERS ADD ERRORS
- In a study conducted at the University of Pittsburgh, computer spelling
and grammar checkers actually increased the number of errors for most students.
The study looked at the performance of two groups of students: one with relatively
high SAT verbal scores and one with relatively lower scores. The group with
lower SAT scores made an average of 12.3 mistakes without the spelling and
grammar tools turned on and 17 mistakes with the tools. The students with
higher SAT scores made an average of 5 mistakes without the tools and an
average of 16 errors with the tools. According to Dennis Galletta, a professor
of information systems at the Katz Business School, the problem is one of
behavior rather than of technology. Some students, he said, trust the software
too much. Richard Stern, a speech-recognition technology researcher at Carnegie
Mellon University, said that when computers attempt to identify proper grammar,
the computer has to make some guesses. It becomes "a percentage game," he
said. Wired News, 14 March 2003 http://www.wired.com/news/business/0,1367,58058,00.html
DSS examples and discussions
- Cutting Through the Fog of War
- Gruden's High-Tech Advantage
An example of analysis and decision making: Trust No One at the Airport.
The Big (Data) Dig: Data mining analytics for business intelligence and decision support
From DSS News, March 16, 2003 -- Vol. 4, No. 6
Ask Dan!
by Daniel J. Power
How has Moore's Law affected, and how will it affect, computerized decision support?
- There is a certain comfort that comes from identifying predictive
"natural" laws. They simplify and make sense of otherwise complex
phenomena. Moore's Law has provided that type of comfort to many
technologists for almost 40 years. So what is Moore's Law?
- In 1965, Gordon Moore wrote a paper for Electronics magazine in a
feature "The experts look ahead" titled "Cramming more components onto
integrated circuits". He began "The future of integrated electronics is
the future of electronics itself. The advantages of integration will
bring about a proliferation of electronics, pushing this science into
many new areas. Integrated circuits will lead to such wonders as home
computers ..."
- According to the Intel web site, Moore observed an exponential growth in
the number of transistors per integrated circuit and predicted that the
trend would continue. The popularized statement of Moore's Law is that
the number of transistors in an integrated circuit doubles every 18 to
24 months. Intel expects Moore's Law will continue at least through the
end of this decade. The "mission of Intel's technology development team
is to continue to break down barriers to Moore's Law".
- Gordon Moore helped found Fairchild Semiconductor and then Intel. His
efforts and those of his colleagues made sure integrated circuit
technology evolved and improved at the predicted rate of progress.
- The evidence of the past 35 years supports the conclusion Moore reached
in 1965. Intel introduced the 4004 microprocessor in 1971 with 2,250
components. The 8008 chip introduced in 1972 had 2,500. By 1974, the
8080 chip had 5,000 components. The groundbreaking 8086 microprocessor
of 1978 had 29,000 components. In 1982, the 286 chip had 120,000; the
386™ processor in 1985 had 275,000; by 1989 the 486™ DX processor had
1,180,000 components on a small chip. Once the million barrier was
broken, the number and density of components expanded rapidly. In 1993,
the Pentium® processor had 3,100,000 components and the Pentium II
processor in 1997 had 7,500,000. In 1999, Intel introduced the Pentium
III processor with 24,000,000 components. Approximately 18 months later,
Intel announced the Pentium 4 processor with 42,000,000 components. On
March 12, 2003, Intel introduced its Centrino™ mobile technology
integrating wireless capability.
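The component counts listed above are enough to check the popularized 18-to-24-month doubling rate. Taking the 4004 (1971, 2,250 components) and the Pentium 4 (42,000,000 components, announced in 2000; the year is an approximation from the "18 months later" in the text):

```python
import math

# Doublings between the 4004 (1971, 2,250 components) and the
# Pentium 4 (2000, 42,000,000 components), per the counts above.
doublings = math.log2(42_000_000 / 2_250)

# Average months per doubling over the 29 intervening years.
months = (2000 - 1971) * 12 / doublings
```

The result, roughly fourteen doublings at about two years each, sits at the slow end of the 18-to-24-month range Moore's Law is usually quoted with.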
- The two
most important integrated circuit product categories are the microprocessor
and memory devices. These products provide the technology that enables computerized
decision support. As the technology has become more powerful and more
cost-effective, new applications have become feasible.
- Improvements in microelectronics have stimulated and enabled the
development of decision support technologies. The earliest integrated
circuits provided some limited decision support capabilities for the Apollo
space missions. The chips of the late 1970s made it possible to develop
spreadsheets and PC-based decision support applications. Specialized
chips in the early 1980s stimulated Artificial Intelligence research.
The 386™ and 486™ DX processor made client-server applications and GDSS
feasible. Improvements in memory size and speed in the early 1990s made
data warehousing feasible. Putting more components on microprocessors
miniaturized our computers and supported development of innovative input
and output technologies. Suppliers of innovative microelectronics make
innovative DSS possible.
- There seems to be a 2-3 year lag in the diffusion of improvements in
microelectronics into decision support applications. Currently, the
capability of the Pentium 4 for enhanced graphics and visualization is
reflected more in video games than in DSS. The Centrino™ mobile
innovation can potentially expand the presence of decision support in
our work and personal lives.
- Moore's Law has served as a stimulus and benchmark for developments in
microelectronics and information processing. It has become a driver of
innovation and progress in the semiconductor industry. Expectations
matter! Decision support applications need to exploit the enhanced
capabilities that result from cramming more components on integrated
circuits.
- There has been a mutually beneficial relationship between innovation in
semiconductors and end-use decision support applications. The advance of
technology lets us work to implement what we can envision to create
innovative DSS. Advanced decision support will result from technology
advances, opportunistic and fortuitous circumstances, and from the
active imaginations and dedicated actions of innovators.
References
- Moore, Gordon E., "Cramming more components onto integrated circuits",
Electronics, Vol. 38, No. 8, April 19, 1965, URL
ftp://download.intel.com/research/silicon/moorespaper.pdf
- Schaller,
Bob, "The Origin, Nature, and Implications of 'MOORE'S LAW': The Benchmark
of Progress in Semiconductor Electronics", September 26, 1996, http://mason.gmu.edu/~rschalle/moorelaw.html.
From ACM News, March 14, 2003
"Social Software and the Politics of Groups"
InternetWeek (03/10/03); Shirky, Clay
- Thanks
to the advent of the Internet and social software that facilitates group
communications, large numbers of people can now converse with each other
without being inconvenienced by conventional barriers of physical location
and time. This in turn has caused new social patterns--chatrooms, Weblogs,
mailing lists, etc.--to emerge. Social software also sets up groups as entities,
giving rise to behavior that cannot be anticipated by analyzing individuals.
In defiance of earlier projections about the Internet's social impact, many
successful online communities have limited their growth or set up size boundaries;
erected non-trivial blocks to joining or becoming a member in good standing;
and are enforcing criteria that restrict individual freedoms. The tension
between the individual and the community inherent in social interactions,
whether online or offline, must be addressed in a group-supportive system
by rules that outline the relationship between individuals and the group
and set limits on certain kinds of interactions. Designers of social software
must consider a wide range of issues, including how good group experience
can be tested, how software supports group goals, and the best barriers to
group membership. In terms of advancement, user software is ahead of social
software because developers are more familiar with single-user rather than
group experience. Another contributing factor is greater developer emphasis
on software's technical aspects instead of its social implications.
From ACM News, January 13, 2003
"Immobots Take Control"
Technology Review (01/03) Vol. 105, No. 10, P. 36; Roush, Wade
- Immobile
robots, or "immobots," use model-based reasoning to obtain a clear picture
of their internal operations and the interaction of their myriad components
so that they can reconfigure themselves for optimal performance and avoidance
of unexpected difficulties. Immobot applications range from office equipment
to vehicle diagnostics to spacecraft, while even more complex systems are
on the drawing board or undergoing testing. Several Xerox printer-copiers
feature model-based scheduling programs designed to optimize moment-to-moment
paper flow and boost productivity, while IBM is developing reconfigurable,
autonomic storage networks and Web servers. Meanwhile, several efforts are
underway in Europe to outfit passenger vehicles with diagnostic immobots
that use model-based programming to detect problems that expert mechanics
may not have considered. Such projects indicate that the initial commercial
application of immobot software will likely be in the automotive sector,
according to Louise Trave-Massuyes of France's Centre National de la Recherche
Scientifique. Model-based software is seen as far more efficient than hand-coded
heuristics software that most engineers rely on; the problem with the latter
is that reliability is often sacrificed for affordability. The model-based
approach obviates the need for engineers to anticipate every possible contingency,
and leaves that deduction to the immobot software, which builds a step-by-step
plan for solving problems or fulfilling operational parameters based on its
knowledge of its inner workings. Currently being tested in Brazil is one
of the larger immobot projects, an advisor that helps manage five metropolitan
water treatment facilities by monitoring water quality and suggesting solutions
to problems; the project is highly complicated, since it requires the immobot
to model not only system components, but the physical and biological processes
that impact water quality. http://www.technologyreview.com/articles/roush1202.asp
From Edupage, March 10, 2003:
- CITY SUPPLEMENTS ALARM WITH PC NOTICES:
The city of Lincoln, Nebraska, is about to introduce a new system that
allows the public to download a new emergency alert application from
the city's Web site. When government officials have urgent warnings
for the community, such as notices about weather or about national or
local security, computer users who have downloaded the application will
hear an alarm and then will see the warning in a pop-up box.
Information about the warning, as well as URLs for further information,
will be included. The system will work in conjunction with existing
alert systems for television and radio. The system also allows targeted
alerts to particular groups of users, such as school administrators in
the event of a school shooting. An official from the city said the
system will later be available for PDAs, cell phones, and beepers.
Federal Computer Week, 10 March 2003.
http://www.fcw.com/geb/articles/2003/0310/web-lincoln-03-10-03.asp
For those of you who want to include some kind of testing in your system, you could piggyback on an existing assessment. The test is MAPP and is at http://www.assessment.com/MAPPMembers/Welcome.asp?accnum=06-5570-000.00.
From ACM News, February 28, 2003
"Turning the Desktop Into a Meeting Place"
New York Times (02/27/03) P. E6; Boutin, Paul
- Software
engineer Robb Beal's Spring computer interface differs from traditional desktop
interfaces by using hypertext representations of people, places, and things
instead of icons for applications and Web sites, thus simplifying frequent
user activities such as Internet communication and e-shopping. Spring, which
runs on Apple Computer's OS X operating system, replaces Mail and Microsoft
Word icons with these representations; users, for example, could ask someone
to meet them at a specific place by placing a cursor over the person's representative
icon, clicking on it, and then drawing a line to the icon signifying the
place, which triggers a pop-up menu that offers a range of options, such
as emailing the recipient an invitation, sending directions to the destination,
etc. In addition, Spring might visit a related Web site so that invitations
and scheduling can be completed. A similar setup exists for facilitating
electronic transactions, such as dragging a credit card icon to the image
of a desired item. Spring allows users to deploy multiple displays, or canvases,
with a different set of object icons, such as a canvas for friends and another
canvas for business associates. Steven Johnson, author of "Interface Culture:
How New Technology Transforms the Way We Create and Communicate," praises
Spring for its ability to transform a computer into "a bridge to people,
to things you want to buy, to data you need." The development of Spring closely
follows similar developments at Apple and Microsoft with their respective
Windows XP and iLife products. Hillel Cooperman of Microsoft's Windows User
Experience team notes that "Having metaphors and iconography people could
relate to in the real world was a great bridge for bringing nontechnologists
into the world of the PC." Still, Beal and other experts believe that the
traditional desktop interface has a lot of life left in it. http://www.nytimes.com/2003/02/27/technology/circuits/27inte.html
(Access to this site is free; however, first-time visitors must register.)
From K@W Newsletter, February 26-March 11, 2003
The Mammogram Experiment: How Emotions Can Affect High-Stakes Decision-Making
- A
breast cancer scare that turns out to be a false alarm is cause for relief,
but may also trigger delays in future mammogram screenings. In a controlled
experiment that surveyed women waiting for mammograms, Wharton marketing
professors Barbara Kahn and Mary Frances Luce found that the emotional stress
of believing they may have breast cancer causes patients to indicate they
would be likely to delay future mammograms. The findings could have broad
implications for health-care providers and patients, especially as consumers
become more responsible for their own health care decisions. Read the article
From ACM News, February 26, 2003
"Software Uses In-Road Detectors to Alleviate Traffic Jams"
Newswise (02/25/03)
- An Ohio State University engineer has
developed software that could help alleviate traffic jams faster using loop
detectors that are currently used to control traffic lights and scan traffic.
In the March issue of Transportation Research, Benjamin Coifman of Ohio State
describes how he employed these detectors to precisely measure vehicles'
travel time and identify traffic jams. His work started at the University
of California, Berkeley, in 1999, when he equipped control boxes along a
three-mile stretch of road with computer network hardware; traffic data was
collated from loop detectors every third of a mile. Coifman then wrote computer
algorithms that measure vehicle travel time, and could determine the formation
of a traffic jam within three and a half minutes of the initial traffic slowdown.
Coifman also had to account for human factors--rubbernecking,
lane changes, etc.--within the software, which can pinpoint delays caused
by accidents long before slowed traffic backs up to a detector. The Ohio
State engineer is currently working with the Ohio Department of Transportation
to improve the travel time estimates gathered by loop detectors and displayed
to motorists along highways so they can avoid traffic. Coifman adds that
road design could also be enhanced with his software, which was produced
with the support of the U.S. Department of Transportation, the Federal Highway
Administration, the California Department of Transportation, and the University
of California's Partners for Advanced Highways and Transit Program. The Texas
Transportation Institute's 2002 Urban Mobility Study estimates that the average
American urban resident annually spends 62 hours stuck in traffic, while
the average city can lose $900 million a year due to traffic jams. http://www.newswise.com/articles/2003/2/TRAFFIC.OSU.html
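The core of the jam-detection idea can be caricatured in a few lines: compare measured travel times against a free-flow baseline and declare a jam after several consecutive slow readings. This is only an invented sketch of the general approach; the free-flow time, threshold, and window values are assumptions, and Coifman's published algorithms are considerably more sophisticated.

```python
# A toy illustration (not Coifman's actual algorithm) of flagging a
# traffic jam from loop-detector travel times.

FREE_FLOW_S = 20.0   # assumed free-flow travel time per 1/3-mile segment, seconds
THRESHOLD = 2.0      # declare congestion when travel time exceeds 2x free flow (assumed)
WINDOW = 3           # consecutive slow readings required before reporting a jam (assumed)

def detect_jam(travel_times):
    """Return the index of the reading at which a jam is first declared,
    or None if traffic stays free-flowing."""
    slow_run = 0
    for i, t in enumerate(travel_times):
        slow_run = slow_run + 1 if t > THRESHOLD * FREE_FLOW_S else 0
        if slow_run >= WINDOW:
            return i
    return None

readings = [21, 22, 20, 35, 48, 52, 55, 60]  # measured seconds per segment
print(detect_jam(readings))  # → 6
```

Requiring a run of slow readings, rather than reacting to a single one, is one simple way to keep isolated slow vehicles from triggering false alarms.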
From ACM News, February 14, 2003
"Professor Directs Two Tech Efforts"
Chicago Sun Times (02/13/03); Lundy, Dave
- Kris Hammond, founder and director of Northwestern University's Intelligent Information
Laboratory (InfoLab) and Information Technology Development Laboratory (DevLab),
studied and developed artificial intelligence for 12 years at the University
of Chicago, and moved to Northwestern to apply his research to real-world
problems. He says the purpose of InfoLab is "to reduce the friction that
people constantly encounter when trying to find information, both online
and offline" through a combination of artificial intelligence, information
retrieval, and cognitive science. Hammond founded DevLab as a facility to
help transfer academically-developed technologies to the commercial sector
and build thriving, Chicago-based tech businesses; both labs dovetail with
Northwestern's goal of nurturing students' programming and software engineering
skills. He says "our students would like to know how to be not just programmers,
but software engineers." Projects under development that Hammond thinks hold
great potential include Watson, an information retrieval system that employs
artificial intelligence, and a program incorporated into a TiVo box that
collates closed-caption information from TV programs and presents relevant
data in a micro-site built in real time. Hammond thinks the local business
community would benefit significantly by using the university as a resource
to solve problems, while academics would gain a better knowledge of business
problems. He says that DevLab's chief purpose right now is to build value
rather than make money, and praises Northwestern for not subscribing to the
traditional license-and-leave practices of many tech transfer programs. http://www.suntimes.com/output/hitechqa/cst-fin-lundy13web.html
"Will Computers Replace Engineers?"
Discover (02/03) Vol. 24, No. 2, P. 40; Haseltine, Eric
- A
roundtable of technology experts debated how computers are encroaching on
the engineering profession by automating engineering tasks, and what this
holds for the future. When asked what he thinks is the most significant effect
computers have had, Stevens Institute of Technology professor Lawrence Bernstein
described a revolution in structural design that has led to, among other
things, earthquake-resistant buildings in Japan; he also anticipated a bioengineering
explosion soon thanks to the use of computers in analyzing proteins, RNA,
and DNA. Meanwhile, Columbia University professor Al Aho, formerly of Bell
Laboratories, said he foresees a time when machines and human beings are
interchangeable, and believes that computers with computational power and
memory equal to that of human beings will emerge within 20 to 30 years. However,
consultant Jeff Harrow did not think computers are "anywhere close" to supplanting
engineers, because they lack creativity and are chiefly concerned with carrying
out "scut work." In his opinion, the most exciting trend in computing is
the application of computers beyond the computing field, computerized surgery
being an example. Nicholas Donofrio of IBM said computers play a key role
in designing new computers, and forecasted that they will be able to learn
more from people in the future; he was very excited that computers are replacing
physical construction and testing of products via simulation, which greatly
streamlines the development process. Harrow, Aho, Donofrio, and others maintained
that there will always be a need for engineers for a number of reasons, including
the flood of new ideas and the widening of the field's scope. Harrow said,
"[I]f we ever get to the point where there's nobody who understands what...[computers
do], we're in deep trouble, because then we'll never be able to make any
additional moves forward." He also speculated that the move to biologic computers
could spawn self-replicating machines.
From ACM News, February 21, 2003
"Has Your Computer Talked Back to You Lately?"
Newswise (02/20/03)
- Israeli professor Dov Dori at the Technion-Israel
Institute of Technology has created a software translation program that allows
users to make programming changes through speech and graphic diagrams. Though
Dori, who is also a research affiliate at MIT, wants to configure his OPCAT
program for all types of computer interfaces, he says the current focus is
on its industrial application. He likens OPCAT to CAD applications that eliminated
the need for draftsmen, or word processors that made typists obsolete. By
speaking to the computer, users can pull up a graphical representation of
programming options, which they can then implement without having to know
the back-end code. Conversely, users can manipulate graphic diagrams and
listen to the computer's audio response. Dori says this versatility allows
people to interact with their programs comfortably, no matter what their
learning style. Dori pioneered a concept called Object Process Methodology,
in which he says everything is either an object or a process that changes
an object. Based on this premise, OPCAT works from a model to generate computer
code automatically, based on users' instructions. Pratt & Whitney Canada's
principal engineering and applications architect, Mark Richer, says he used
a beta OPCAT version for analyzing aerospace concepts. He says the software
automatically generated thousands of diagrams and statements that would have
been impossible to derive using traditional methods. http://www.newswise.com/articles/2003/2/TRANSLAT.IIT.html
"Old School"
CNet (02/18/03); Kanellos, Michael
- A diversity of computer applications
and technologies, including artificial intelligence, robotics, and data searching,
are using probability theory outlined by 18th-century clergyman Thomas Bayes.
Bayesian theory dictates that the probability of future events can be determined
by calculating their frequency in the past, and its credibility has been
given a major boost in the last 10 years thanks to advances in mathematical
models and computer speed, as well as laboratory experiments. The predictions
are reinforced using real-world data, and the results are altered accordingly
when the data changes. As computers become more powerful, the number of calculations
needed to predict phenomena has been reduced thanks to the introduction of
improved Bayesian models. Microsoft's upcoming Notification Platform will
enable computers and cell phones to automatically filter messages, schedule
meetings free of human interaction, and organize approaches for contacting
other people using probability modeling. For instance, the platform's Coordinate
application collates data from personal calendars, cameras, and other sources
to build a profile of a person's lifestyle, which can be applied to the delivery
of information to application users. Meanwhile, researchers at the University
of Rochester employ Bayesian models to detect anomalies in a person's walk
using data from cameras fed into a PC. Eric Horvitz of Microsoft Research's
Adaptive Systems and Interaction Group explains that Bayesian theory was
given little credence in the computing world until it became clear that logical
systems could not predict all unforeseen variables. http://news.com.com/2009-1001-984695.html
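The updating process the article describes can be shown in a minimal sketch: a prior belief is revised as each new piece of evidence arrives. The spam-filter framing and all the probability values below are invented for illustration.

```python
# Minimal sketch of Bayesian updating: a prior belief is revised with
# each observation via Bayes' rule. Numbers are invented.

def update(prior, likelihood_if_true, likelihood_if_false):
    """One application of Bayes' rule for a binary hypothesis."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Hypothesis: "this message is spam". Each observed word nudges the belief.
belief = 0.5  # start undecided
for p_word_spam, p_word_ham in [(0.8, 0.1), (0.6, 0.4), (0.9, 0.2)]:
    belief = update(belief, p_word_spam, p_word_ham)
print(round(belief, 3))  # → 0.982
```

The same three-line update, applied to richer models, is what lets systems like the ones described revise their predictions whenever the real-world data changes.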
From ACM News, February 12, 2003
"Goodbye GUI? Ambient Orb a Computer 'Mood Ring'"
Mass High Tech (02/10/03); Miller, Jeff
- Forthcoming products developed by Ambient
Devices have the potential to dramatically change computer/person interaction,
according to Ambient executives. One of the products is a large orb that
is wirelessly linked to Internet data feeds via pager frequencies, and can
glow in virtually any color thanks to a digital LED. The device can respond
to changes in the Dow Jones average, for example, glowing green when the
market is up or red when the market is down. But users can also configure
the orb to other pre-set channels--homeland security threat levels, temperature
forecasts, etc.--using a touch-tone phone. Another product Ambient will sell
features a clock-like face with one hand and a quartet of illuminated indicators;
Ambient President David Rose says that its default channel will probably
monitor the weather, with the hand tracking temperature forecasts and the
indicators displaying weather conditions. The device will be shipped with
multiple faces so consumers can use it for different channels. Ambient uses
pager frequencies to deliver data because they can penetrate deeper into
buildings than CDMA-based networks, and because most of the United States
is blanketed by pager networks. Rose says that many of Ambient's products
have been inspired by the work of MIT's Tangible Media Group under the direction
of Hiroshi Ishii, who has been researching alternatives to the traditional
graphical user interface (GUI). Ishii says that using your hands is just
as important as using your mind when creating something. Ishii says, "In
the current interface, the eyes are in charge and the hands are underemployed."
http://www.masshightech.com/displayarticledetail.asp?art_id=61794
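The orb's behavior amounts to mapping a one-dimensional data feed onto a color. The thresholds and color names below are guesses made for illustration; the article does not document Ambient's actual channel mapping.

```python
# A sketch of the Ambient Orb idea: translate a market move into a glow
# color. Thresholds and colors are invented, not Ambient's real mapping.

def orb_color(dow_change_pct):
    """Return a glow color for a given Dow percentage change."""
    if dow_change_pct >= 0.5:
        return "bright green"
    if dow_change_pct > 0:
        return "green"
    if dow_change_pct == 0:
        return "amber"
    if dow_change_pct > -0.5:
        return "red"
    return "bright red"

print(orb_color(1.2))   # market up strongly
print(orb_color(-0.2))  # mild decline
```

The appeal of such "glanceable" displays is precisely that the mapping is this simple: the user reads the state of a channel without opening any application.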
"Falling Prey to Machines?"
Newswise (02/11/03)
- John Holland, recipient of the
first computer science Ph.D. in 1959, says artificial intelligence is possible,
but will take far more work on the conceptual side. Holland is now a computer
science and psychology professor at the University of Michigan and created
genetic algorithms in the 1960s, the basis of optimization models today that
find the most efficient way to manage energy, design engines, or operate
distribution systems. Michael Crichton used Holland's work as the scientific
basis of his recent novel Prey, in which nano-scale machines threaten humanity.
Holland says computers today cannot evolve human-like thinking because programmers
do not know how to define the parameters of their goal. This, he says, reflects
the growing gap between computing power, which doubles about every two years,
and software performance, which takes at least 20 years to double. Holland
also says that computer processors do not have the sophisticated network
of connections human brains have, called fanout. While each element in today's
high-end computers connects to about 10 other elements, elements in the human
brain are networked to about 10,000 other nodes. Holland says future computers
with far greater fanout will be capable of things today's machines are not.
Holland says true breakthroughs in artificial intelligence will come when
computers use the same processes as humans, not just reach the same conclusions
through different processes. For this reason,
a comprehensive theory is needed to guide research, but that will probably
take decades, he says. http://www.newswise.com/articles/2003/2/PREY.MIE.html
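A bare-bones genetic algorithm in the spirit of Holland's invention can be written in a few dozen lines: a population of bit strings evolves through selection, crossover, and mutation toward higher fitness. The fitness function (count of 1 bits) and all parameter values are arbitrary illustrations, not values from Holland's research.

```python
# Minimal genetic algorithm: evolve 20-bit strings toward all ones.
import random

random.seed(0)
LENGTH, POP, GENERATIONS, MUT_RATE = 20, 30, 60, 0.02

def fitness(bits):
    return sum(bits)  # toy objective: count of 1 bits

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits):
    return [1 - b if random.random() < MUT_RATE else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                      # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children
best = max(pop, key=fitness)
print(fitness(best))  # should approach the maximum of 20
```

Real applications replace the toy fitness function with an engine design score, an energy cost, or a distribution schedule, which is how these algorithms became the basis of the optimization models the article mentions.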
From ACM News, February 7, 2003
"What Are the Chances?"
New York Times (02/06/03) P. E1; Schiesel, Seth
- Evaluating the risk of "low-probability,
high-consequence events"--natural disasters, nuclear accidents, and spacecraft
catastrophes, for example--lies at the core of probabilistic risk assessment,
which is used by mathematicians, engineers, insurance executives, businesses,
and federal agencies thanks to the availability of conceptual and computing
tools, while recent gains in computing power have boosted users' confidence
in these methods. Probabilistic risk assessment relies on mathematics to
measure the odds of a specific outcome using what is known or estimated about
the myriad variables that might contribute to that outcome. A NASA consultant
used probabilistic risk assessment in 1995 to determine that the odds of
a catastrophic space shuttle failure were 1 in 145; similar techniques are
used to make nuclear labs safer, gauge the health risks posed by toxic-waste
sites, determine how safe and reliable cars and aircraft are, estimate insurance
rates, and weigh the odds of terrorist attacks. For example, insurance companies
employ probabilistic modeling to simulate how a hurricane might behave based
on historical data in which a dozen variables--frequency, size, intensity,
etc.--are involved. The potential storm patterns that emerge, which can number
from 5,000 to 10,000, are tested randomly against models of the properties insured
by a specific firm, a process known as Monte Carlo analysis. Using probabilistic
risk assessment in industrial situations is even more complicated, because
the variables can number in the thousands, tens of thousands, or even hundreds
of thousands; rather than referring to a historical database, an engineer,
for example, must use a computerized model to assess the physical and electromagnetic
traits of each component in the machine he is designing prior to probabilistic
analysis. The best application for industrial probabilistic models is in
the design phase rather than after the machine or product has been put into
operation. http://www.nytimes.com/2003/02/06/technology/circuits/06risk.html
"Pervasive Computing: You Are What You Compute"
HBS Working Knowledge (02/03/03); Silverthorne, Sean
- Panelists at the recent Cyberposium
2003 focused on pervasive computing, and took the opportunity to note their
respective companies and institutions' advances in that area. Stephen Intille
of MIT commented that researchers there are investigating how minuscule sensors
distributed throughout a house can biometrically monitor the health of its
residents, which could be a very useful--and cheaper--alternative to conventional
health care. It is estimated that there are 7.5 billion microcontrollers
worldwide; these sensor and controller chips are used for a wide array of
operations, such as heating, ventilation, and air conditioning. Ember, represented
on the panel by CTO Robert Poor, is developing wireless, self-healing networks
to interconnect these myriad chips. Axeda's Richard Barnwell talked about
how his company sells real-time performance tracking devices that help anticipate
equipment failures, maintain machines remotely, and show manufacturers how
their products are used by customers. When asked about how their data-gathering
devices could potentially affect personal privacy, Intille replied that MIT's
home sensors would be used only with user permission, and cited cell phones
with GPS tracking capabilities as a more worrisome technology. Barnwell
said that the use of monitoring devices would be strictly
regulated in certain spaces, such as the health care sector. One attendee
asked who will be responsible for replacing depleted batteries in such devices,
and MIT professor and panelist Sandy Pentland suggested batteries that draw
power from radio signals and other technologies as one solution.
"Chaos, Inc."
Red Herring (01/03) No. 121; Waldrop, M. Mitchell
- Agent-based computer simulations
based on complexity science are being used by companies to improve their
bottom lines. Complexity science promotes the theory that all complex systems
have common characteristics: They are massively parallel, consisting of adaptive,
quasi-independent "agents" that interact simultaneously in a decentralized
configuration. By following this theory, agent-based simulations can map
out system behavior that spontaneously stems from many low-level interactions.
The simulations are appealing to company executives because they are easier
to understand than the highly abstract and mathematical underpinnings of
conventional modeling programs, while Fred Siebel of BiosGroup notes that
they allow "what if" scenarios to be played out over a much larger canvas.
For instance, Southwest Airlines hired BiosGroup to model its freight delivery
operations in order to make them more efficient; agents that represented
freight handlers, packages, planes, and other interactive elements were developed
and put through their paces, after which the rules of the system were changed
and tested to find the most efficient behavioral pathway. By following the
strategy outlined by the simulation, Southwest was able to shave as much
as 85 percent off the freight transfer rate at the busiest airports, and
save $10 million over five years. Other sectors that are taking an interest
in agent-based simulation include the U.S. military, which is using it to
coordinate the flights of unmanned reconnaissance aircraft, and the insurance
industry, which wants to employ better risk management strategies. However,
the agent-based simulation industry primarily consists of a handful of struggling
startups, and one of the current drawbacks of their services is that the
technology may be too state-of-the-art for most businesses, according to
Assuratech CEO Terry Dunn. http://www.redherring.com/insider/2003/01/chaos012203.html
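A miniature agent-based model conveys the flavor of the freight example: packages and planes follow simple local rules, and the system-level transfer count emerges from their interactions. The hubs, routing rules, and numbers below are all invented; BiosGroup's actual Southwest model was far richer.

```python
# Toy agent-based simulation: compare two local routing rules for
# freight packages moving among hubs. Everything here is invented.
import random

random.seed(1)
HUBS = ["ABQ", "DAL", "HOU", "PHX"]

def run(rule, n_packages=500):
    """Return total intermediate transfers under one routing rule."""
    transfers = 0
    for _ in range(n_packages):
        origin, dest = random.sample(HUBS, 2)
        here = origin
        while here != dest:
            plane_to = random.choice([h for h in HUBS if h != here])
            if rule == "ride_first_plane":
                if plane_to != dest:
                    transfers += 1      # boarded, but must transfer again
                here = plane_to
            elif plane_to == dest:      # "wait_for_direct": board only direct flights
                here = plane_to
    return transfers

print("ride first plane:", run("ride_first_plane"))
print("wait for direct:", run("wait_for_direct"))
```

Changing one local rule and re-running the simulation, as in this sketch, is exactly the "what if" experimentation that makes such models attractive to executives.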
From ACM News, February 3, 2003
"Blogs Open Doors for Developers"
CNet (01/31/03); Becker, David
- Business
software developers have started to see the value of sharing information
online through Web logs (blogs), message boards, and other forms of communication
from the outset in order to build a base of potential customers, not to mention
fellow developers. Lotus founder Mitch Kapor explains that he started a blog
to tell users about a personal information manager upgrade so as to solicit
their ideas and get feedback while the project was in a very early developmental
stage. "It's part of a long-term process of building a user community," he
notes. Kapor keeps potential users up to date on the project's progress and
new ideas he comes up with. VisiCalc co-inventor Dan Bricklin is also soliciting
user feedback on the SMBmeta specification, and comments that such public
communication channels are a tremendous advance over traditional beta testing,
in which a small group of testers are chosen by developers to try out early
versions of a program. Being open to users throughout product development
is essential for makers of online games, which rely on a user community's
interest in their products, according to Sony Online Entertainment's Scott
McDaniel. However, to take full advantage of blogging and other forms of
communication, developers must be willing to sift through a lot of e-mail,
discussion group postings, and other submissions for good suggestions. They
must also be able to set clear limits on tired or unproductive discussion
threads. http://news.com.com/2100-1001-982854.html
"IBM: Pervasive Computing Is the Future"
ZDNet Australia (01/30/03); Pearce, James
- Pervasive computing devices, which
IBM describes as any non-PC computing device, will increase to 1 billion
in 2005, compared to 325 million in 2002, the firm predicts. Pervasive computers
can take the form of smart cards, cell phones, cameras, Web-enabled refrigerators,
and even smart houses, says Michael Karasick, IBM's director of embedded
development for the pervasive computing division. On U.S. university campuses,
IBM has launched the eSuds service, which allows students to use their cell
phones to reserve washing machines, make payments, and be informed when the
laundering is complete. Honda has also used pervasive computers in the 2002
Accord, allowing users to ask questions using normal language and displaying
the information in the car's navigation system. The computer also connects
the brake and airbag systems to the navigation system, so any damage from
an accident can quickly be assessed and repaired. Andrew
Dutton, vice-president of IBM software group, Asia-Pacific, says "That piece
of information changes the entire structure of the automotive industry."
He says access to such information lets car companies expand their services
to include towing, autobody repair, finance, and insurance. On the other
hand, many fear such abundant information will allow people's personal information
to be gathered and monitored. Karasick says customers should try to take
control of the situation by demanding that companies give them discretion
over what data is gathered and how it is used.
From ACM News, January 31, 2003
"Intelligent Storage"
Computerworld (01/27/03) Vol. 37, No. 4, P. 28; Mearian, Lucas
- Storage devices imbued with intelligence,
also known as object-based storage devices (OSDs), allow for limitless system
scalability since they assume the low-level storage management duties previously
completed by the storage server. Because low-level read/write blocks are no
longer passed through the file server, input-output paths are much more efficient
and the file server is no longer a bottleneck in the system. Scott A. Brandt,
assistant professor at the University of California, Santa Cruz's Storage
Systems Research Center, says storage devices can be added to OSD systems
just like hard drives are added to a PC. He notes that streamlined communications
between file servers and storage devices results in fewer errors as well.
The Storage Networking Industry Association and the International Committee
for Information Technology Standards have joined to form the T10 Technical
Committee, which is working out specifications for object-based storage.
Storage vendor EMC has already released what experts say is the first true
object-based storage array, called Centera. Besides more efficient networking
and greater scalability, OSD systems also offer better security because access
control is assigned to each individual object rather than to the device as a whole.
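The per-object security idea can be sketched with two small classes: each stored object carries its own access rule, and the device itself enforces it. These classes are invented to illustrate the concept and are not a real OSD interface (the actual T10 specification defines a SCSI command set, not a Python API).

```python
# Sketch of object-based storage security: the access rule travels with
# the object and is checked by the device, not by a central file server.

class StoredObject:
    def __init__(self, oid, data, allowed_users):
        self.oid, self.data = oid, data
        self.allowed_users = set(allowed_users)  # security lives with the object

class ObjectStorageDevice:
    def __init__(self):
        self.objects = {}

    def put(self, obj):
        self.objects[obj.oid] = obj

    def read(self, oid, user):
        obj = self.objects[oid]
        if user not in obj.allowed_users:        # device-side enforcement
            raise PermissionError(f"{user} may not read object {oid}")
        return obj.data

osd = ObjectStorageDevice()
osd.put(StoredObject("report-1", b"quarterly numbers", {"alice"}))
print(osd.read("report-1", "alice"))
```

Because the check happens at the device, adding more devices scales both capacity and access control together, with no file server in the read path.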
From ACM News, January 29, 2003
"Software Innovator David Gelernter Says the Desktop Is Obsolete"
Application Development Trends Online (01/28/03); Vaughan, Jack
- Yale University computer scientist and veteran developer David Gelernter says
he is now focusing on creating tools that make it easier for users to find
"stuff" on their computers and otherwise improve the end user's computer
experience. Gelernter says the mouse, icon, and windows metaphors are no
longer able to manage the flood of information on most people's PCs. He says,
"As e-mail and the Web became a big thing, it was clear that the hierarchical
file systems and tools we've inherited from the 70s would not work." To solve
this problem, his company, Mirror Worlds Technologies, has released a beta
version of Scopeware, software that runs atop normal desktop operating systems.
Scopeware, available free via download, allows users to search for standard
documents on their PC by keyword, but presents the results as a visual, time-sequenced
narrative. Gelernter says the user should determine the presentation of information,
not the machine. "I want my information management software to have the same
shape as my life, which is a series of events in time," he says. "I want
the flow to determine the shape of the picture I see on the screen." Gelernter
says that future iterations of Scopeware could allow a community of users
to share documents pertinent to them through peer-to-peer systems. Gelernter
was instrumental in devising the parallel programming techniques that allowed
for the Linda language; his work also laid the foundation for Java and distributed
memory architectures. He says it's now time to create software "for the user
as an everyday tool," not to meet the needs of code developers. http://www.adtmag.com/article.asp?id=7187
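Gelernter's time-sequenced narrative can be sketched as a "stream" of documents ordered by date, with search returning matches in that same chronological shape. The field names and sample documents below are invented; Scopeware's real data model is not described in the article.

```python
# Toy "lifestream": documents kept as one time-ordered flow; a keyword
# search returns matches in chronological order. Fields are invented.
from datetime import date

docs = [
    {"when": date(2003, 1, 5), "title": "Budget draft", "text": "q1 budget numbers"},
    {"when": date(2003, 1, 12), "title": "Trip notes", "text": "flight and hotel"},
    {"when": date(2003, 1, 20), "title": "Budget final", "text": "approved budget"},
]

def stream(query=""):
    """Return matching document titles as a time-sequenced narrative."""
    hits = [d for d in docs if query.lower() in d["text"].lower()]
    return [d["title"] for d in sorted(hits, key=lambda d: d["when"])]

print(stream("budget"))  # → ['Budget draft', 'Budget final']
```

The contrast with a hierarchical file system is that the user never chooses a folder: time, not location, is the organizing axis.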
From ACM News, January 24, 2003
"Of Pawns, Knights, Bits, Bytes"
Wired News (01/23/03); Kahney, Leander
- International chess champion Garry Kasparov will face off against a machine in a six-game
tournament beginning Jan. 26. His opponent will be Deep Junior, an aggressive
chess-playing program considered to be the best in the world, and the computer
chess champion for three years running. Deep Junior, which was developed
by Israeli programmers and a chess grandmaster, is different from the usual
computerized players because of the human way it plays, often sacrificing
pieces instead of preserving them. It also assesses the moves that have the
most potential, unlike early programs that relied on brute force searches.
Artificial intelligence expert Jonathan Schaeffer, who will act as a judge
during the tournament, believes Deep Junior evaluates chess positions by
weighting standard features such as piece mobility and king safety; the former
is weighted heavily by aggressive programs such as Deep
Junior. More sophisticated algorithms enable programs to only consider the
most promising maneuvers. Schaeffer notes that the tournament offers Kasparov
an opportunity to get some payback after his 1997 loss to IBM's Deep Blue
program. The event is also the first human/machine chess competition to be
endorsed by the World Chess Federation, a distinction that chess experts
say is a sign of respect toward computers as worthy players. http://www.wired.com/news/culture/0,1284,57345,00.html
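The kind of weighted evaluation Schaeffer describes reduces to a weighted sum over position features. The features, weights, and sample inputs below are invented for illustration; Deep Junior's real evaluation function is far more elaborate and proprietary.

```python
# Sketch of a weighted chess evaluation: score = sum of weight * feature.
# Features, weights, and inputs are invented, not Deep Junior's.

WEIGHTS = {"material": 1.0, "mobility": 0.1, "king_safety": 0.5}

def evaluate(features):
    """Score a position from the side-to-move's point of view."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

# material in pawn units, mobility = own legal moves minus opponent's,
# king_safety = pawn-shield difference (all hypothetical inputs)
position = {"material": -1.0, "mobility": 12, "king_safety": 1}
print(evaluate(position))  # a pawn down, but mobility compensates
```

An aggressive program would assign mobility a larger weight, which is why such an engine will happily trade material, as the article notes, for freer piece play.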
"Senate Votes to Curb Project to Search for Terrorists in Databases and Internet Mail"
New York Times (01/24/03) P. A12; Clymer, Adam
- The
Senate voted unanimously on Thursday to constrain the implementation of the
Pentagon's Total Information Awareness (TIA) Program, an initiative to conduct
searches for terrorists by mining Internet mail and online financial, health,
and travel records. The legislation gives a 60-day window for the Defense
Department to furnish a report detailing the program's costs, motives, its
prospective chances for successfully thwarting terrorists, and its impact
on civil liberties and privacy; failing to do so after the deadline would
result in the suspension of TIA research and development. Meanwhile, use
of the system would be restricted to legally sanctioned military and foreign
intelligence operations, barring congressional authorization to employ the
system within the United States. The restrictions were bundled into a series
of amendments to an omnibus spending bill, and authored by Sen. Ron Wyden
(D-Ore.), who attributed their swift passage to the dismay Republican senators
felt over the project's implications for surveillance on innocent U.S. citizens.
Included in his amendment was a statement that Congress should be consulted
in matters whereby TIA programs could be used to develop technologies to
monitor Americans. "I hope that today's action demonstrated Congress' willingness
to perform oversight of the executive branch and challenge attempts to undermine
constitutional liberties," declared People for the American Way leader Ralph
Neas following the vote. Sens. Charles E. Grassley (R-Iowa) and Dianne Feinstein
(D-Calif.), both sponsors of Wyden's bill, agreed that the legislation ensures
that the TIA program will balance civil liberties with efforts to protect
Americans from terrorism. http://www.nytimes.com/2003/01/24/politics/24PRIV.html
(Access to this site is free; however, first-time visitors must register.)
From ACM News, January 24, 2003
"THE Key to User-Friendly Computers?"
Business Week Online (01/22/03); Salkever, Alex
- Jef Raskin, who co-designed Apple's
trend-setting graphical user interface (GUI), is also one of its most outspoken
critics. In an effort to repair what he terms "fundamental flaws" that are
the result of "incompatibilities between the designs of both GUIs and command-line
interfaces and the way our brains are wired," Raskin and a team of volunteers
are developing "The Humane Environment" (THE), a command architecture that
integrates the GUI's advantages with more flexible command-line systems.
BusinessWeek's Alex Salkever, who tried out a THE-based freeware text editor
that runs on the Mac Classic operating system, notes that the tool's biggest
plus is visibility, in which the user sees information only when necessary.
A flashing blue block represents the cursor, while a single letter or text
command appears within the block; the advantages include being able to see
tabs using the mouse rather than switching back and forth between viewing
modes, and being able to realign sentences and eliminate shortened lines
more easily. Typed commands flash in blue text on the screen background, which
enables the user to quickly spot and undo any command executed by an accidentally
pressed key. Navigating a page using THE is done with a methodology Raskin
calls LEAP. LEAP is enabled by punching the shift key and then hitting and
releasing the space key, which causes a "Command" prompt to appear under
the text; leaping from one part of the page to another is achieved by hitting
the ">" key, then typing in a set of characters. Salkever finds that using
a mouse or scroll arrow is still more intuitive than LEAP, while overall
he prefers the traditional GUI over THE. Raskin acknowledges that a learning
curve is necessary when users transition between computer interfaces. http://www.businessweek.com/technology/content/jan2003/tc20030122_7027.htm
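LEAP's jump-to-text behavior can be approximated as an incremental forward search that wraps around the document. This is a guess at the behavior Salkever describes, not Raskin's actual implementation.

```python
# Sketch of a LEAP-style jump: search forward from the cursor for the
# typed characters, wrapping to the top if nothing is found.

def leap(text, cursor, pattern):
    """Return the index of the next occurrence of pattern after cursor."""
    i = text.find(pattern, cursor + 1)
    if i == -1:
        i = text.find(pattern)          # wrap to the top of the document
    return i if i != -1 else cursor     # no match: cursor stays put

doc = "the quick brown fox jumps over the lazy dog"
pos = leap(doc, 0, "the")               # jumps to the second "the"
print(pos, doc[pos:pos + 3])
```

Because the target is typed rather than pointed at, navigation stays on the keyboard, which is the trade-off against the mouse that Salkever found unintuitive.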
"Interfaces of the Future"
Technology & Business Magazine (01/13/03); Withers, Stephen
- Futuristic computer interfaces
such as those envisioned by science fiction writers--machines that can read
a user's gestures or facial expressions, for example--are still on the drawing
board or in the lab, and there has not been a sudden appearance of a revolutionary
mass-market technology since the introduction of the graphical user interface
in the mid 1980s. However, in Australia, some sophisticated interfaces have
been making a gradual penetration into niche IT markets, virtual reality
(VR) being one of them. Alan Ryner of SGI reports that immersive VR has become
"core infrastructure" in some sectors, including the military, manufacturing,
and mining, oil, and gas exploration. VR enables military personnel to deal
with vast amounts of data in command and control systems by representing
it visually, while the automotive industry, which started using VR to simulate
car crashes, has moved on to more advanced applications such as styling and
design. Ryner also reports that VR-based design can speed up time to market
and make Australian companies more competitive by allowing far-flung teams
to collaborate on the same data set. Oil and gas companies can use VR simulations
derived from seismic data to work out the best areas to drill. Ryner lists
"hazard perception and situation awareness" as a developing market for VR,
in which people can be trained and retrained using an interactive artificial
environment that can model people's behavior. VR can be especially useful
in analytical situations as a way to better serve people who are reluctant
to handle large volumes of data.
"Remote Monitoring Aids Data Access"
Technology Research News (01/22/03); Patch, Kimberly
- Researchers at Sandia National
Laboratories have discovered a new way to access large sets of remote data
in close to real time. Often, businesses and researchers have a hard time
visualizing and manipulating complex data over distances because of increased
lag time, which hampers usability, according to principal technical staff
member John Eldridge. Instead of sending entire chunks of data back and forth,
the scientists developed a video card system that just sends the video signals.
Tweaking advanced graphics cards originally developed for computer games,
Eldridge and his colleagues compressed the signals and sent them to a Gigabit
Ethernet interface card, which then routed the signals over the Internet.
On the receiving side, special hardware recreates the video stream from the
packets, decompresses it, and displays it on a monitor. While the process
of sending video instead of data is also bandwidth-intensive, it is faster
for the huge repositories of data today's businesses and researchers accumulate.
Doctors, for example, could use it to view magnetic resonance imaging (MRI)
files when diagnosing patients. Eldridge says a few tricks speed the system
up, including sending only the changes in the screen display and using
reprogrammable logic chips, rather than separate components, to encode and
decode the video signals. He notes that Sandia is looking for a partner
to commercialize the system, and plans to adapt it for use with multi-screen
displays.
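The "send only what changed" idea Eldridge describes can be sketched in a few lines. The toy Python sketch below (with an invented tile size and frame format, not Sandia's actual hardware design) compares the current frame against the last transmitted one and ships only the tiles that differ; the receiver patches its copy back into sync:

```python
TILE = 4  # tile edge length in pixels (hypothetical)

def changed_tiles(prev, curr):
    """Yield (row, col, tile) for every TILE x TILE block that differs."""
    for r in range(0, len(curr), TILE):
        for c in range(0, len(curr[0]), TILE):
            tile = [row[c:c + TILE] for row in curr[r:r + TILE]]
            if [row[c:c + TILE] for row in prev[r:r + TILE]] != tile:
                yield r, c, tile

def apply_tiles(frame, updates):
    """Patch the received tiles into the receiver's copy of the frame."""
    for r, c, tile in updates:
        for i, row in enumerate(tile):
            frame[r + i][c:c + len(row)] = row

prev = [[0] * 8 for _ in range(8)]     # last frame the receiver has
curr = [row[:] for row in prev]
curr[5][5] = 255                       # one pixel changes...
updates = list(changed_tiles(prev, curr))
print(len(updates))                    # ...so only one tile is sent, not 4
apply_tiles(prev, updates)             # receiver is now in sync
```

When most of the screen is static, the update stream is a small fraction of the full video signal, which is what makes the approach practical over a Gigabit Ethernet link.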
"Electric Paper"
New Scientist (01/18/03) Vol. 177, No. 2378, P. 34; Fildes, Jonathan
- Scientists are working to transform
regular paper into an electronic display for moving images, changing colors,
and text. After asking paper makers four years ago if they were interested
in having electronic circuits, sensors, and displays added to their newspaper
and packaging, Magnus Berggren and colleagues at Linkoping University and
the Advanced Center for Research in Electronics and Optics in Sweden last
year unveiled traditional paper that featured electronic displays. The display's
"active matrix" resembled a laptop's thin-film transistor screen, with semiconducting
polymers printed on paper to make its transistors and display cells. Berggren,
who envisions electronic paper being used for large, low-resolution displays,
is currently demonstrating a seven-segment display that resembles a digital
clock. The technology could lead to chameleon-like wallpaper, flashing cereal
boxes and toy packaging, poster-like displays in shops, and changing text
in magazines within five years. However, challenges remain, including concerns
about how to power electronic paper. Researchers also must find a way to
make the displays change faster, and improve their color quality.
DSS Security?
- Help Wanted: Steal This Database
- Worm exposes apathy, Microsoft flaws
- Net Worm Causes More Disruptions
- Tracking the Worm
Artificial intelligence is again in the news. Garry Kasparov will again
face an IBM chess program with artificial intelligence to see who plays
the better chess. Read the NYT article.
From DSS News, March 2, 2003:
What do I need to know about Data Warehousing/OLAP?
The answer to this question depends upon who is asking. Managers need to
be familiar with some DW/OLAP terminology (the basic what questions) and
they need to have an idea of the benefits and limitations of these
decision support components (the why questions). More technical people
in Information Systems need to know how and when to develop systems
using these components. This short DW/OLAP FAQ consolidates answers from
some recent email questions and from a number of questions previously
answered at DSSResources.COM. The bias in this FAQ is definitely towards
what managers need to know. Some more technical questions related to
DW/OLAP were answered in the Ask Dan! of February 17, 2002. That Ask
Dan! answered the following questions: Is a Data Warehouse a DSS? What
is a star schema? How does a snowflake schema differ from a star schema?
Also, for people who want definitions for technical terms like derived
data, hypercube, pivot and slice and dice the OLAP Council glossary
(1995) is online at http://dssresources.com/glossary/olaptrms.html.
Q. What is a Data Warehouse?
- A. A data warehouse is a database designed to support a broad range of
decision tasks in a specific organization. It is usually batch updated
and structured for rapid online queries and managerial summaries. Data
warehouses contain large amounts of historical data. The term data
warehousing is often used to describe the process of creating, managing
and using a data warehouse.
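As a concrete illustration of this definition, the sketch below builds a tiny warehouse-style star schema with Python's built-in sqlite3, batch-loads it, and runs a managerial summary query. All table names, columns, and figures are invented for illustration:

```python
import sqlite3

# Toy star schema: a fact table of sales surrounded by dimension tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INT, month INT);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE fact_sales  (date_key INT, product_key INT, units INT, revenue REAL);
""")

# Batch load: a warehouse is typically refreshed periodically, not row by row.
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(1, 2003, 1), (2, 2003, 2)])
cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(10, "Widget", "Hardware"), (11, "Gadget", "Software")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(1, 10, 5, 50.0), (1, 11, 2, 40.0), (2, 10, 3, 30.0)])

# A typical managerial summary: total revenue by month.
summary = list(cur.execute("""
    SELECT d.year, d.month, SUM(f.revenue)
    FROM fact_sales f JOIN dim_date d ON f.date_key = d.date_key
    GROUP BY d.year, d.month ORDER BY d.year, d.month"""))
print(summary)   # [(2003, 1, 90.0), (2003, 2, 30.0)]
```

The fact table holds the historical transactions; the dimension tables supply the descriptive attributes managers actually query by.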
Q. What is On-line Analytical Processing (OLAP)?
- A. OLAP is software for manipulating multidimensional data from a
variety of sources. The data is often stored in a data warehouse. OLAP
software helps a user create queries, views, representations and
reports. OLAP tools can provide a "front-end" for a data-driven DSS.
Q. What is the difference between data warehousing and OLAP?
- A. The terms data warehousing and OLAP are often used interchangeably.
As the definitions suggest, warehousing refers to the organization and
storage of data from a variety of sources so that it can be analyzed and
retrieved easily. OLAP deals with the software and the process of
analyzing data, managing aggregations, and partitioning information into
cubes for in-depth analysis, retrieval and visualization. Some vendors
are replacing the term OLAP with the terms analytical software and
business intelligence.
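The cube-oriented operations described above (and glossary terms such as pivot and "slice and dice") can be illustrated with a toy two-dimensional cube in plain Python. The regions, quarters, and revenue figures here are made up:

```python
from collections import defaultdict

# Hypothetical fact rows from a warehouse: dimensions (region, quarter)
# plus one measure (revenue).
rows = [
    ("East", "Q1", 100), ("East", "Q2", 120),
    ("West", "Q1", 80),  ("West", "Q2", 95),
]

# Build the cube: cube[region][quarter] -> total revenue.
cube = defaultdict(lambda: defaultdict(float))
for region, quarter, revenue in rows:
    cube[region][quarter] += revenue

# "Slice": fix one dimension (all quarters for the East region).
east = dict(cube["East"])            # {"Q1": 100.0, "Q2": 120.0}

# "Pivot": swap the axes so quarter becomes the outer dimension.
pivoted = defaultdict(dict)
for region, quarters in cube.items():
    for quarter, revenue in quarters.items():
        pivoted[quarter][region] = revenue

print(east)
print(dict(pivoted["Q1"]))           # {"East": 100.0, "West": 80.0}
```

Real OLAP tools do the same aggregation, slicing, and pivoting over many dimensions at once, with the cubes precomputed for fast retrieval.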
Q. When should a company consider implementing a data warehouse?
- A. Data warehouses or a more focused database called a data mart should
be considered when a significant number of potential users are
requesting access to a large amount of related historical information
for analysis and reporting purposes. So-called active or real-time data
warehouses can provide advanced decision support capabilities.
Q. What data is stored in a data warehouse?
- A. In general, organized data about business transactions and business
operations is stored in a data warehouse. But, any data used to manage a
business or any type of data that has value to a business should be
evaluated for storage in the warehouse. Some static data may be compiled
for initial loading into the warehouse. Any data that comes from
mainframe, client/server, or web-based systems can then be periodically
loaded into the warehouse. The idea behind a data warehouse is to
capture and maintain useful data in a central location. Once data is
organized, managers and analysts can use software tools like OLAP to
link different types of data together and potentially turn that data
into valuable information that can be used for a variety of business
decision support needs, including analysis, discovery, reporting and
planning.
Q. Database administrators (DBAs) have always said that having
non-normalized or de-normalized data is bad. Why is de-normalized data
now okay when it's used for Decision Support?
- A. Normalization of a relational database for transaction processing
avoids processing anomalies and results in the most efficient use of
database storage. A data warehouse for Decision Support is not intended
to achieve these same goals. For Data-driven Decision Support, the main
concern is to provide information to the user as fast as possible.
Because of this, storing data in a de-normalized fashion, including
storing redundant data and pre-summarizing data, provides the best
retrieval results. Also, data warehouse data is usually static, so
anomalies will not occur from operations such as adding, deleting, or
updating a record or field.
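A minimal sketch of why de-normalized, pre-summarized data retrieves faster: in the normalized layout every query repeats a join, while the de-normalized layout pays that cost once, at batch-load time. The products, keys, and revenue figures below are invented:

```python
# Normalized layout: answering "revenue by category" needs a join at query time.
products = {10: ("Widget", "Hardware"), 11: ("Gadget", "Software")}  # key -> (name, category)
sales = [(10, 50.0), (11, 40.0), (10, 30.0)]                         # (product_key, revenue)

by_category = {}
for key, revenue in sales:
    category = products[key][1]        # per-row join lookup, repeated every query
    by_category[category] = by_category.get(category, 0.0) + revenue

# De-normalized layout: the category is stored redundantly with each fact,
# and a pre-summarized table (computed once, during the batch load) answers
# the same question with a single lookup instead of a scan-and-join.
summary = {"Hardware": 80.0, "Software": 40.0}

print(by_category == summary)          # same answer, cheaper retrieval
```

Because the warehouse is batch-updated rather than transactional, the redundancy never has a chance to drift out of sync between loads.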
Q. How often should data be loaded into a data warehouse from
transaction processing and other source systems?
- A. It all depends on the needs of the users, how fast data changes and
the volume of information that is to be loaded into the data warehouse.
It is common to schedule daily, weekly or monthly dumps from operational
data stores during periods of low activity (for example, at night or on
weekends). The longer the gap between loads, the longer the processing
times for the load when it does run. A technical IS/IT staffer should
make some calculations and consult with potential users to develop a
schedule to load new data.
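The kind of back-of-envelope calculation such a staffer might make can be sketched as follows. Every figure is a hypothetical assumption, not a measurement from any real warehouse:

```python
# Hypothetical sizing assumptions for a load-window estimate.
ROWS_PER_DAY  = 2_000_000          # new transaction rows extracted per day
BYTES_PER_ROW = 200                # average extracted row size
LOAD_RATE     = 5 * 1024 * 1024    # sustained load throughput, bytes/second

def load_minutes(days_between_loads):
    """Estimated minutes a batch load runs for a given gap between loads."""
    total_bytes = ROWS_PER_DAY * BYTES_PER_ROW * days_between_loads
    return total_bytes / LOAD_RATE / 60

# The longer the gap between loads, the longer the load runs when it does:
for gap in (1, 7, 30):
    print(f"{gap:2d}-day gap -> {load_minutes(gap):5.1f} minute load")
```

Comparing these estimates against the available low-activity window (nights or weekends) is one simple way to pick a daily, weekly, or monthly schedule.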
Q. What are the benefits of data warehousing?
- A. Some of the potential benefits of putting data into a data warehouse
include: 1. improving turnaround time for data access and reporting; 2.
standardizing data across the organization so there will be one view of
the "truth"; 3. merging data from various source systems to create a
more comprehensive information source; 4. lowering costs to create and
distribute information and reports; 5. sharing data and allowing others
to access and analyze the data; 6. encouraging and improving fact-based
decision making.
Q. What are the limitations of data warehousing?
- A. The major limitations associated with data warehousing are related to
user expectations, lack of data and poor data quality. Building a data
warehouse creates some unrealistic expectations that need to be managed.
A data warehouse doesn't meet all decision support needs. If needed
data is not currently collected, transaction systems need to be altered
to collect the data. If data quality is a problem, the problem should be
corrected in the source system before the data warehouse is built.
Software can provide only limited support for cleaning and transforming
data. Missing and inaccurate data cannot be "fixed" using software.
Historical data can be collected manually, coded and "fixed", but at
some point source systems need to provide quality data that can be
loaded into the data warehouse without manual clerical intervention.
Q. How does my company get started with data warehousing?
- A. Build one! The easiest way to get started with data warehousing is to analyze some existing transaction processing systems and see what
type of historical trends and comparisons might be interesting to
examine to support decision making. See if there is a "real" user need
for integrating the data. If there is, then IS/IT staff can develop a
data model for a new schema and load it with some current data and start
creating a decision support data store using a database management
system (DBMS). Find some software for query and reporting and build a
decision support interface that's easy to use. Although the initial data
warehouse/data-driven DSS may seem to meet only limited needs, it is a
"first step". Start small and build more sophisticated systems based
upon experience and successes.
|
From DSS News, January 20, 2002:
According to Herbert Simon in a 1986 report, "There are no more
promising or important targets for basic scientific research than
understanding how human minds, with and without the help of computers,
solve problems and make decisions effectively, and improving our
problem-solving and decision-making capabilities."
Simon, Herbert A. "Decision Making and Problem Solving." Research
Briefings 1986: Report of the Research Briefing Panel on Decision Making
and Problem Solving. Washington, DC: National Academy Press, 1986
(http://dieoff.org/page163.htm).
|
From DSS News, January 6, 2002:
According to Knight and McDaniel (1979), "Basically, there are three
occasions when organizations are faced with nonroutine decision
situations and must use collegial or political structures to make
choices. The first occasion arises when the organization is faced with
scarce resources. Then the organization must answer the question 'What
are we doing that we can stop?' The second case occurs when the
organization has excess resources. The question is 'Can we do something
that we haven't done before?' The third case develops when the
organization feels the need for systems improvement. The fundamental
question in this case is 'Can we do what we are now doing better?' (p.
142)"
Knight, K.E., and R.R. McDaniel, Jr. Organizations: An Information
Systems Perspective. Belmont, CA: Wadsworth Publishing Co., 1979.
|
From DSS News, September 9, 2001:
Aaron Wildavsky studied budgeting and the use of information in
organizations. His findings emphasize the need for Decision Support
Systems and the difficulty in constructing them. In a 1983 article,
Wildavsky concludes "The very structure of organizations -- the units,
the levels, the hierarchy -- is designed to reduce data to manageable
and manipulatable proportions. ... at each level there is not only
compression of data but absorption of uncertainty. It is not the things
in themselves but data-reduction summaries that are passed up until, at
the end, executives are left with mere chains of inferences. Whichever
way they go, error is endemic: If they seek original sources, they are
easily overwhelmed; if they rely on what they get, they are easily
misled." --
from Wildavsky, A., "Information as an Organizational Problem," Journal
of Management Studies, January, 1983, p. 29.
|
From DSS News, July 1, 2001:
In his 1971 book, C. West Churchman discussed many topics related to
supporting decision makers. Early in that book he stated "Knowledge can
be considered as a collection of information, or as an activity, or as a
potential. If we think of it as a collection of information, then the
analogy of a computer's memory is helpful, for we can say that knowledge
about something is like the storage of meaningful and true strings of
symbols in a computer. ... Put otherwise, to conceive of knowledge as a
collection of information seems to rob the concept of all its life. ...
knowledge resides in the user and not in the collection. It is how the
user reacts to a collection of information that matters. ... Thus
knowledge is a potential for a certain type of action, by which we mean
that the action would occur if certain tests were run. For example, a
library plus its user has knowledge if a certain type of response will
be evoked under a given set of stipulations ... (p. 9-11)"
Churchman, C.W. The Design of Inquiring Systems, Basic Books, New York,
NY, 1971.
|
For those wondering when artificial intelligence will truly take
root, here's a bulletin: it already has. Look at this New York Times
article.
From DSS News, June 3, 2001:
According to Gordon Davis (1974), "The value of information is the value
of the change in decision behavior because of the information (less the
cost of the information). An interesting aspect of this concept is that
information has value only to those who have the background knowledge to
use it in a decision. The most qualified person generally uses
information most effectively but may need less information since
experience (frame of reference) has already reduced uncertainty when
compared with the less-experienced decision maker." (p. 180)
Davis, Gordon B., Management Information Systems: Conceptual
Foundations, Structure, and Development. New York: McGraw-Hill, 1974.
|
From DSS News, June 3, 2001:
From "Ask Dan": Is there a Theory of Decision Support Systems?
- Yes and No ... This question has not been addressed extensively in the
academic Decision Support Systems literature. I can't discuss the
answer or answers to this question adequately in this column, but I'll
try to provide a starting point for a more complete paper.
Let me begin by briefly reviewing what I consider the broadest set of
ideas or propositions that come closest to the start of a theory of
decision support or decision support systems. The propositions all come
from the work of the late Herbert Simon.
From Simon's classic Administrative Behavior (1945) ...
- Simon's Proposition 1: Information stored in computers can increase
human rationality if it is accessible when it is needed for the making of
decisions.
- Simon's Proposition 2: Specialization of decision-making functions is
largely dependent upon the possibility of developing adequate channels
of communication to and from decision centers.
- Simon's Proposition 3: Where a particular item of knowledge is needed
repeatedly in decision, the organization can anticipate this need and,
by providing the individual with this knowledge prior to decision, can
extend his area of rationality. This is particularly important when
there are time limits on decisions.
From Simon's paper on "Applying Information Technology to Organization
Design", we can identify 3 additional propositions in a Theory of DSS.
- Simon's Proposition 4: "In the post-industrial society, the central
problem is not how to organize to produce efficiently (although this
will always remain an important consideration), but how to organize to
make decisions--that is, to process information."
- Simon's Proposition 5: From the information processing point of view,
division of labor means factoring the total system of decisions that
need to be made into relatively independent subsystems, each one of
which can be designed with only minimal concern for its interactions
with the others.
- Simon's Proposition 6: The key to the successful design of information
systems lies in matching the technology to the limits of the attentional
resources... In general, an additional component (man or machine) for an
information-processing system will improve the system's performance only
if:
- 1. Its output is small in comparison with its input, so that it
conserves attention instead of making additional demands on attention;
- 2. It incorporates effective indexes of both passive and active kinds
(active indexes are processes that automatically select and filter
information for subsequent transmission);
- 3. It incorporates analytic and synthetic models that are capable not
merely of storing and retrieving information, but of solving problems,
evaluating solutions, and making decisions.
A number of other authors have discussed topics related to a theory of
DSS and perhaps in a later column I can examine ideas about when DSS are
and should be used and ideas related to the design and development of
DSS. Simon's propositions address the need for and effectiveness of
decision support systems.
Simon, Herbert A., Administrative Behavior, A study of decision-making
processes in administrative organization (3rd edition). New York: The
Free Press, 1945, 1965, 1976.
Simon, Herbert A., "Applying Information Technology to Organization
Design", Public Administration Review, Vol. 33, pp. 268-78, 1973.
|
What companies have gained a competitive advantage by building a DSS? From DSS News, May 6, 2001
The problem in answering this question is that firms want to
maintain the advantage they gain and hence they are reluctant to release
many details about strategic information systems. Also, DSS that provide
an advantage at one point in time may seem dated or ordinary after only
a few years have elapsed. The advantage can be fleeting and short-term
(cf. Feeny and Ives, 1990).
Porter and Millar (1985) provided a major theoretical perspective on how
information could provide competitive advantage. A number of other
theories related to competitive advantage suggest that deployment of
resources like innovative decision support systems can provide a
sustainable business advantage.
DSS can be important and useful and necessary and yet not provide a
competitive advantage. Many consulting firms and vendors focus on
gaining competitive advantage from a data warehouse or a business
intelligence system and that can happen. Many DSS projects don't however
deliver such results and they probably weren't intended to create
competitive advantage.
In a now classic study, Kettinger et al. (1994) identified a number of
companies that had gained an advantage from Information Systems. Some of
those systems were Decision Support Systems, but most were Transaction
Processing Systems. The following DSS examples are from their paper:
Air Products -- vehicle scheduling system
Cigna -- risk assessment system
DEC -- expert system for computer configuration
First National Bank -- asset management system
IBM -- marketing management system
McGraw Hill -- marketing system
Merrill Lynch -- cash management system
Owens-Corning -- materials selection system
Procter & Gamble -- customer response system
Time and technology have had a negative impact on how some of the above
systems are perceived. A major lesson learned is that a company needs to
continually invest in a Strategic DSS to maintain any advantage.
INTERFACES, Volume 13, No. 6, Nov-Dec 1983, contained a number of DSS
case studies that have since become classics, including the distribution
of industrial gases system at Air Products and Chemicals, the ASSESSOR
pre-test market evaluation system, and Southern Railway's computer-aided
train dispatching system.
Chapter 2 of my Hyperbook further explores the question of gaining
competitive advantage from DSS. In the chapter, examples include
decision support systems at Frito-Lay, L.L. Bean, Lockheed-Georgia,
Wal-Mart and Mrs. Field's Cookies.
If a company is trying to develop a Decision Support System that
provides a competitive advantage, managers and analysts should ask how
the proposed DSS affects company costs, customer and supplier relations
and managerial effectiveness. Managers should also attempt to assess
how the proposed strategic system will impact the structure of the
industry and the behavior of competitors.
|
DSS Wisdom from DSS News by D. J. Power, 2(8),
April 8, 2001
Alvin Toffler (1970) argued "Information must flow faster than ever
before. At the same time, rapid change, by increasing the number of
novel, unexpected problems, increases the amount of information needed.
It takes more information to cope with a novel problem than one we have
solved a dozen or a hundred times before. It is this combined demand
for more information at faster speeds that is now undermining the great
vertical hierarchies so typical of bureaucracy." (Toffler, Alvin., Future Shock, New York: Random House, 1970, p. 121)
|
Is there an "information culture" that encourages building Decision
Support Systems? This response is from DSS News by D. J. Power, 2(8),
April 8, 2001.
Let's assume there is such a phenomenon as an "information culture".
Culture refers to shared assumptions, beliefs and ideas
of a group. Information culture would then refer to shared assumptions,
beliefs and ideas about obtaining, processing, sharing and using
information in decision-making and organizational management.
Rick Tanler, who founded Information Advantage, identified
four different information cultures. In a Decision Wire column titled
"Becoming the Competitor All Others Fear", Tanler stated "The four
information cultures are Spectator (observes changes within their
market); Competitor (initiates change within their market); Predator
(attacks market principles); and Information Anarchy (the dysfunctional
information culture)."
Tanler noted in that same column that "Almost every data warehouse is
justified to senior management in terms of the competitive advantages
that will accrue to the enterprise if better information is available
to decision-makers." Tanler argued the Competitor Culture will encourage
managers to develop better information systems and that will lead to
better decisions and better corporate performance. This conclusion is
very optimistic ... and it assumes that initiating change always leads
to positive outcomes.
Also, Tanler argued many companies have a Spectator Culture and
need to move to a Competitor Culture. Tanler believed the
"difference between the Spectator Culture and the Competitor Culture
is that the former focuses on decision-support (What information do
users need?) and the latter focuses on decision-implementation (What are
users doing with the information?)."
Tanler's four culture categories create a "buzzword" approach to
organizational change. It sounds like he is really concerned about how
to design systems rather than about culture. Certainly we need to ask
what users are doing and might do with information, and how we can better
support their decision-making. Building a DSS is much more than asking
potential users what information they need. Relying on managers to
"divine" what information will be or is needed won't work; such an
approach is much too passive to succeed.
Tanler noted we need to examine the "role of information within the
context of the entire decision cycle." We need to understand what a
decision cycle is to bring about this change. In the management and
decision making literature, a decision cycle or process starts with the
identification of an opportunity or recognition of a problem. The cycle
includes analysis and formulation of decision alternatives. The cycle
also includes approval of a decision and communications and actions
needed to implement the decision and measure its impact.
Tanler argued the "objective is to compress the decision cycle". He
concluded that by "moving from a Spectator Culture to a Competitor
Culture, an organization can make smarter decisions in shorter cycle
times to ultimately become the competitor that all others fear."
Reducing cycle time is a desirable goal, but a shorter decision cycle does
not by itself improve decision making; if decision support is applied
inappropriately just to reduce cycle time, decisions can be negatively
affected and results will be worse, not better.
We need to maintain a humble attitude when our goal is to improve human
decision behavior. Decision-making is as much art as science, but we may
be able to inform it with facts and analysis.
In my opinion, a positive information culture encourages active
information use and recognizes that technology can help with a variety
of decision tasks and can speed up the clerical side of those tasks, but
that people remain the thinkers and decision makers who must assume
responsibility for organizational actions.
Businesses aren't intelligent, people are. Decision support has to focus
on helping managers make decisions.
Well ... I didn't set out to critique Tanler's ideas on information
culture and successful implementation of technologies to support
decision making, but in a general way this Ask Dan has done that. Let me
know what you think of when you hear the term information culture ... Is
there a proactive, decision support culture?
Tanler, Rick. "Becoming the Competitor All Others Fear", DecisionWire,
Vol. 1, Issue 11, January 1999.
|
DSS Wisdom from DSS News, Volume 2, Number 6, March 11, 2001.
Peter Drucker (1954) wrote in his chapter titled "Making Decisions" that
".. management is always a decision-making process." He notes in the
same chapter that in regard to the "new" decision tools from Operations
Research managers "must understand the basic method involved in making
decisions. Without such understanding he will either be unable to use
the new tools at all, or he will overemphasize their contribution and
see in them the key to problem-solving which can only result in the
substitution of gadgets for thinking, and of mechanics for judgment.
Instead of being helped by the new tools, the manager who does not
understand decision-making as a process in which he has to define, to
analyze, to judge, to take risks, and to lead to effective action, will,
like the Sorcerer's Apprentice, become the victim of his own bag of
tricks." (p. 368)
Drucker, P. The Practice of Management. New York: Harper and Brothers,
1954.
|
"Moody Computers," Interactive Week (02/26/01) Vol. 8, No. 8, P. 47; Steinert-Threlkeld, Tom (as seen in Tech News, Volume 3, Issue 172: Monday, March 5, 2001):
- Marvin Minsky, co-founder of the Artificial Intelligence Laboratory at the Massachusetts Institute
of Technology and author of "Society of Mind" and its forthcoming sequel, "The Emotion
Machine," argues that emotions are merely another way in which human beings think, rather
than a process independent of or antithetical to thinking. His central idea, he says, "is that each
of the major emotions is quite different. They have different management organizations for how
you are thinking you will proceed." Minsky contends that common-sense reasoning is what
allows us to handle and manipulate these different emotions, to choose which emotion is best
for handling which situation, even though we are not aware when each type of thinking is
occurring. This is also, he says, what separates machine thinking from human thinking.
Machines are not able to see the same piece of knowledge represented in multiple ways. Minsky
says, "You have to build a system that looks at two representations, two expressions or two
data structures, and quickly says in what ways are they similar and what ways are they different.
Then another knowledge base says which kinds of differences are important for which kind of
preference." He contends that such ways of thinking could, for example, benefit search engines,
allowing software to consider how to organize and execute a search based on what human users
might want rather than relying on keywords and algorithms. The ability to approach a problem
from many different ways and then solve it is how Minsky defines intelligence and is what he
means by an intelligent, emotional machine. He dismisses the fear that "emotional" machines
could somehow become irrational, as an emotional human being can become irrational and
commit an act that may endanger or harm others, because that again reflects the human bias
that emotions and thinking are two entirely different things.
http://www.zdnet.com/intweek/stories/news/0,4164,2690670,00.html
More on Professor Herbert Simon can be found at Center for Economic Policy Analysis. This includes pdf versions
of some of his papers including:
- "Rational Decision Making in Business Organizations", American Economic Review, 1979.
- "Decision Making and Problem Solving",
1986
James March wrote in 1978, "Prescriptive theories of choice are
dedicated to perfecting the intelligence of human action by imagining
that action stems from reason and by improving the technology of
decision. Descriptive theories of choice are dedicated to perfecting the
understanding of human action by imagining that action makes sense. Not
all behavior makes sense; some of it is unreasonable. Not all decision
technology is intelligent; some of it is foolish." (p. 604) --
from March, J. G. "Bounded Rationality, Ambiguity, and the Engineering
of Choice", Bell Journal of Economics, Vol. 9, 1978, pp. 587-608.
"New business procedures would then be analogous to new mutations in nature. Of a number of procedures, none of which can be
shown either at the time or subsequently to be truly rational, some may supplant others because they do in fact lead to better results.
Thus while they may have originated by accident, it would not be by accident that they are still used. For this reason, if an economist
finds a procedure widely established in fact, he ought to regard it with more respect than he would be inclined to give in the light of his
own analytic method." (Roy F. Harrod, 1939, Oxford EP) .... from The Maximization Debates.
As seen in Edupage (2/22/2001) -- SOFTWARE TRIES 'CONCEPT MAPPING':
New concept mapping software is available for free download from
the University of Florida's Institute for Human and Machine
Cognition. Researchers at the institute are working to make
computers easier to use, exactly the theory behind concept
mapping, which links information in a direct and understandable
way. The researchers expect that concept maps, or Cmaps, will
help change the information navigation on Web sites by providing
a graphical depiction of how that information is linked and
organized rather than by following the traditional method of
organizing information page by page. Funded by NASA and the Navy
as part of a larger project to create similar learning tools, the
software is among the best Cmap programs available, claims Barry
Brosch of Cincom, a commercial firm negotiating a software
license from the University of Florida. Having already made the
software available for nonprofit use, the institute is in the
process of determining how it will offer the software for
commercial application. (Associated Press, 19 February 2001)
Read about Professor Herbert
Simon
"As Easy As Breathing"
Boston Globe (02/04/01) P. H1; Weisman, Robert
- Michael Dertouzos, director of MIT's Laboratory for Computer Science, is pioneering the Oxygen research
project, an initiative to develop what Dertouzos calls "human-centric
computing." Human-centric computing revolves around highly intuitive
technology so pervasive as to be invisible, Dertouzos explains. "From now on,
computer systems should focus on our needs and capabilities, instead of
forcing us to bow down to their complex, incomprehensible, and mechanistic
details," Dertouzos writes in his upcoming book, "The Unfinished Revolution:
Human-Centered Computers and What They Can Do For Us." Private industry and
the Pentagon are underwriting the MIT Lab's research, a five-year, $50 million
project involving 150 to 200 researchers. The Oxygen Alliance includes such
industry leaders as Hewlett-Packard, Philips Research, and Nokia Research
Center. At a feedback session in mid-January, Fred Kitson of HP Labs advised
Dertouzos to concentrate on creating a "pervasive computing ecosystem" to
narrow the gap between slow idea development and commercialization. "Initially
it will be difficult because it requires taking a customer-centric rather than
a technology-centric point of view," explains Adrian J. Slywotzky of Mercer
Management Consulting. In his book, Dertouzos describes three primary
technologies the Oxygen project is exploring. The Handy 21 would be a handheld
device that incorporates the functions of most palm-sized products currently
on the market. The Enviro 21 would be a computing environment the size of a
room or office capable of speech recognition, face recognition, motion
detection, and wall-mounted displays. The third type of technology, the N21
Network, would link the Handy and the Enviro together. Dertouzos expects
human-centric computing to be realized in the next 10 to 20 years.
This page was last modified on: 01/12/2004 06:26:05
URL: https://www.umsl.edu/~sauterv/DSS/DSS_Foundations.html
Page Owner: Professor Sauter
(Vicki.Sauter@umsl.edu)
© Vicki L. Sauter. All rights reserved.