These distributed architectures would require fast low-latency networks of the type discussed elsewhere in this document.
There are many occasions on which the computations required to support the VE cannot be done to full accuracy within the VE's speed constraints. The trade-off between accuracy and speed is a common theme in the design of VE systems. There are occasions in which faster, less accurate computational algorithms are preferable to slower, more accurate ones.
It is not known at this time how to design these trade-offs into a system in a way that can anticipate all possibilities. Research into how these trade-offs are made is therefore needed. A current strategy is to give users full control over these trade-offs.
A related issue is that of time-critical computing, in which a computation returns within a guaranteed time. Designing time-critical computational architectures is an active area of research and is critical to the successful design of VE applications. Extrapolating current trends, we expect that VE applications will saturate available computing power and data management capabilities for the indefinite future.
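To make the notion of time-critical computation concrete, the sketch below shows one simple pattern: a computation that refines its answer only for as long as a time budget allows and returns whatever accuracy it has achieved when the deadline arrives. This is a minimal illustration of the accuracy-versus-speed trade-off discussed above, not a production scheduling scheme; the function names and the per-frame budget are assumptions made for the example.

```python
import time

def compute_with_deadline(refine, initial, budget_s=0.016):
    """Return the best result obtainable within a fixed time budget.

    `refine` is assumed to take the current result and return a more
    accurate one; `budget_s` is the per-frame budget (roughly one 60 Hz
    frame here, chosen purely for illustration).
    """
    deadline = time.monotonic() + budget_s
    result = initial
    while time.monotonic() < deadline:
        result = refine(result)   # each pass improves accuracy
    return result                  # best result available when the budget expires

# Example: progressively refine an estimate of pi with more series terms.
def more_pi_terms(state):
    estimate, k = state
    return (estimate + (-1) ** k * 4.0 / (2 * k + 1), k + 1)

approx_pi, terms = compute_with_deadline(more_pi_terms, (0.0, 0))
print(f"pi ~ {approx_pi} after {terms} terms")
```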
Dataset size will be the dominant problem for an important class of applications in VE. In the near term, an effective VE platform would include the following: multiple fast processors in an integrated unit; several graphics pipelines integrated with the processors; very large shared physical memory; very fast access to mass storage; operating systems that support shared-memory, multiprocessor architectures; and very high-speed, low-latency networks.
Small VE systems have been successfully built around high-end personal computers (PCs) with special-purpose graphics boards. One such system is capable of rendering several hundred polygons at about 15 Hz and is used extensively in the Virtuality video arcade VE games. The Virtuality systems are networked and allow a few participants to play together in the same environment. Another common example is the use of an IBM-compatible personal computer with the Intel DVI graphics board, which is capable of rendering a few hundred textured polygons at interactive rates.
PC-based systems will provide the public with a taste of virtual reality that will eventually lead to demand for more capable computational and graphics platforms.
It is anticipated that systems similar to the entry-level Indy machines from Silicon Graphics will replace the PC-based platforms as the total price of the PC system becomes comparable to that of the Indy.
There are many components to the software required for the real-time generation of VEs. These include interaction software, navigation software, polygon flow minimization to the graphics pipeline software, world modeling software (geometric, physical, and behavioral), and hypermedia integration software.
Each of these components is large in its own right, and all of them must act in concert and in real time to create VEs. The goal of the interconnectedness of these components is a fully detailed, fully interactive, seamless VE. Seamless means that we can drive a vehicle across a terrain, stop in front of a building, get out of the vehicle, enter the building on foot, go up the stairs, enter a room and interact with items on a desktop, all without delay or hesitation in the system.
To build seamless systems, substantial progress in software development is required. The following sections describe the software being constructed in support of virtual worlds. Interaction software provides the mechanism to construct a dialogue from various control devices. The first part of this software involves taking raw inputs from a control device and interpreting them.
Several libraries are available to perform this function. Examples of commercial libraries include World ToolKit by Sense8. Shareware libraries are available from the University of Alberta and other universities. These libraries range in sophistication from serial drivers for obtaining the raw output from the interface devices to routines that include predictive tracking and gesture recognition.
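As a rough illustration of what the simpler end of such libraries does, the following sketch shows a bare-bones predictive tracker that linearly extrapolates the last two position samples to compensate for display latency. It is not drawn from any particular library; the data layout and latency value are assumptions made for illustration.

```python
def predict_position(samples, latency_s):
    """Linearly extrapolate the most recent tracker samples forward in time.

    `samples` is a list of (timestamp_s, (x, y, z)) tuples, oldest first.
    A real library would also filter noise and predict orientation; this
    sketch only illustrates compensating for pipeline latency.
    """
    (t0, p0), (t1, p1) = samples[-2], samples[-1]
    dt = t1 - t0
    if dt <= 0.0:
        return p1
    velocity = tuple((b - a) / dt for a, b in zip(p0, p1))
    return tuple(p + v * latency_s for p, v in zip(p1, velocity))

# Two samples 10 ms apart; predict 30 ms ahead to hide display latency.
samples = [(0.000, (0.0, 1.5, 0.0)), (0.010, (0.01, 1.5, 0.0))]
print(predict_position(samples, 0.030))   # -> (0.04, 1.5, 0.0)
```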
The second part of building interaction software involves turning the information about a system's state from a control device into a dialogue that is meaningful to the system or application, while filtering out erroneous or unlikely portions of the dialogue that might be generated by faulty data from the input device. This dialogue is then delivered to the virtual world system to execute some application-meaningful operation. Interaction is a critical component of VE systems that involves both hardware and software.
Interface hardware in VEs provides the positions or states of various parts of the body. This information is typically used to map user actions to changes in the environment. The user's intent must be inferred from the output of the hardware as read by the computer system. This inference may be complicated by inaccuracies in the hardware providing the signal. Although there are several paradigms for interaction in VEs, including direct manipulation, indirect manipulation, logical commands, and data input, the problem of realistic, real-time interaction is still comparatively unexplored.
Generally, tasks in VEs are performed by a combination of these paradigms. Other paradigms will certainly need to be developed to realize the potential of a natural interface. Below we provide an overview of some existing technologies. With direct manipulation, the position and orientation of a part of the user's body, usually the hand, is mapped continuously to some aspect of the environment.
Typically, the position and orientation of an object in the VE is controlled via direct manipulation. Pointing in order to move is another example of direct manipulation in which orientation information is used to determine a direction in the VE. Analogs of manual tasks such as picking and placing require display of forces as well and therefore are well suited to direct manipulation, though more abstract aspects of the environment, such as background lighting, can also be controlled in this way.
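The following sketch illustrates the essence of direct manipulation of object position: while an object is grabbed, its position follows the tracked hand, preserving the offset measured at grab time. The class and data layout are hypothetical, and a real system would carry orientation (e.g., as quaternions) as well as position.

```python
class DirectManipulation:
    """Minimal sketch: while an object is grabbed, its position follows the
    tracked hand, preserving the offset measured at grab time."""

    def __init__(self):
        self.grabbed = None
        self.offset = (0.0, 0.0, 0.0)

    def grab(self, obj_pos, hand_pos):
        self.grabbed = obj_pos
        self.offset = tuple(o - h for o, h in zip(obj_pos, hand_pos))

    def update(self, hand_pos):
        if self.grabbed is None:
            return None
        # New object position = current hand position + original offset.
        self.grabbed = tuple(h + d for h, d in zip(hand_pos, self.offset))
        return self.grabbed

    def release(self):
        self.grabbed = None

manip = DirectManipulation()
manip.grab(obj_pos=(1.0, 0.0, 0.0), hand_pos=(0.8, 0.0, 0.0))
print(manip.update(hand_pos=(0.9, 0.1, 0.0)))   # -> (1.1, 0.1, 0.0)
```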
When indirect manipulation is employed, the user performs direct manipulation on an object in the VE, which in turn controls some other aspect of the environment. This is an extension to VE of the concept of a widget, that is, a two-dimensional interface control used in graphics interface design. Thus one may directly manipulate a slider that controls the background color, while direct manipulation of another slider may control the volume of sound output.
The term employed for such controls is three-dimensional widget. Creators of three-dimensional widgets go beyond the typical sliders and checkboxes of traditional two-dimensional interfaces and attempt to provide task-specific widgets, such as the Computational Fluid Dynamics (CFD) widgets used in the virtual wind tunnel and surface modeling widgets (Bryson, a).
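As a minimal illustration of indirect manipulation through a widget, the sketch below shows a slider whose handle is directly manipulated by the hand while the resulting value drives another property of the environment, here a background gray level. The names and data model are assumptions for the example, not those of any cited system.

```python
class SliderWidget:
    """Sketch of indirect manipulation: the user directly manipulates the
    slider handle, and the resulting 0..1 value drives some other property
    of the environment (here, a background gray level)."""

    def __init__(self, low_end, high_end, on_change):
        self.low_end = low_end        # 3D position of one end of the track
        self.high_end = high_end      # 3D position of the other end
        self.on_change = on_change    # callback applied to the 0..1 value

    def drag_handle(self, hand_pos):
        # Project the hand position onto the slider track and clamp to [0, 1].
        track = [b - a for a, b in zip(self.low_end, self.high_end)]
        rel = [h - a for a, h in zip(self.low_end, hand_pos)]
        length_sq = sum(t * t for t in track) or 1.0
        t = max(0.0, min(1.0, sum(r * s for r, s in zip(rel, track)) / length_sq))
        self.on_change(t)
        return t

background = {"gray": 0.0}
slider = SliderWidget((0, 1, 0), (1, 1, 0), lambda v: background.update(gray=v))
slider.drag_handle((0.25, 1.1, 0.0))
print(background)   # -> {'gray': 0.25}
```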
Indirect manipulation provides the opportunity to carry out many actions by using relatively few direct manipulation capabilities. Logical commands detect the state of the user, which is then mapped to initiate some action by the environment. Logical commands are discrete events. The user's state that triggers the command may be detected via buttons, gestures as measured by haptic devices, voice commands, etc.
The particular command triggered by a user state may depend on the state of the environment or on the location of parts of the user's body.
For example, a point gesture may do different things depending on which virtual object happens to be coincident with the position of the user's hand. Logical commands can also be triggered via indirect manipulation using menus or speech recognizers.
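A simple way to picture logical-command dispatch is a table keyed on both the detected gesture and the current context, as in the sketch below. The gesture names, object model, and commands are purely illustrative assumptions.

```python
def dispatch_gesture(gesture, hand_pos, objects, commands):
    """Sketch of logical-command dispatch: the same gesture triggers
    different commands depending on which object (if any) currently
    coincides with the hand position."""
    def near(a, b, radius=0.1):
        return sum((x - y) ** 2 for x, y in zip(a, b)) <= radius ** 2

    target = next((o for o in objects if near(hand_pos, o["pos"])), None)
    kind = target["kind"] if target else "empty space"
    action = commands.get((gesture, kind), commands.get((gesture, None)))
    return action(target) if action else None

commands = {
    ("point", "door"): lambda obj: f"open {obj['name']}",
    ("point", "lamp"): lambda obj: f"toggle {obj['name']}",
    ("point", None):   lambda obj: "fly toward pointed direction",
}
objects = [{"name": "lab door", "kind": "door", "pos": (1.0, 1.0, 2.0)}]
print(dispatch_gesture("point", (1.02, 1.0, 2.0), objects, commands))  # open lab door
print(dispatch_gesture("point", (5.0, 0.0, 0.0), objects, commands))   # fly toward ...
```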
Data or text input can be provided by conventional keyboard methods external to the VE. Within the environment, speech recognition may be used for both text and numerical input, and indirect manipulation of widgets provides limited numerical input. There are high-level interfaces that should be explored. Research must be performed to explore how to use data measuring the positions of the user's body to interact with a VE in a way that truly provides the richness of real-world interaction.
As an example, obvious methods of manipulating a virtual surface via a DataGlove have proven to be difficult to implement (Bryson, b; Snibbe et al.). This example demonstrates that research is needed to determine how user tracking data are to be applied as well as how the objects in the VE are to be defined to provide natural interaction.
In addition, research is needed on the problem of mapping continuous input (body movement) to discrete commands. There are significant difficulties in performing this decoding reliably. Since such decoding is application-dependent, the VE user interface cannot easily be separated from the application in the way that it can be with current two-dimensional WIMP (windows, icons, mouse, pointer) interfaces. A crucial decision in designing the interaction is the choice of conceptual approach. Specifically, should researchers focus on ways in which the existing two-dimensional technology might be enriched, or should the starting point be the special attributes and challenges of three-dimensional immersive environments?
Some researchers are recreating the two-dimensional graphical user interface (GUI) desktop metaphor in three dimensions by placing buttons and scroll bars in the environment along with the user. While we believe that there is great promise in examining the very successful two-dimensional desktop metaphor as a source for ideas, we also believe that there are risks because of the different sets of problems in the two environments. Relying solely on extensions of our experience with two dimensions would not provide adequate solutions to three-dimensional interaction needs, such as flying and navigation, or to issues related to body-centered coordinate systems and lines of sight.
Two of the more important issues associated with interacting in a three-dimensional environment are line of sight and acting at a distance. With regard to line of sight, VE applications have to contend with the fact that some useful information might be obscured or distorted due to an unfortunate choice of user viewpoint or object placement.
In some cases, the result can lead to misinformation, confusion, and misunderstanding. Common pitfalls include obscuration and unfortunate coincidences.
Obscuration: At times, a user must interact with an object that is currently out of sight, hidden behind other objects. How does dealing with this special case change the general form of any user interface techniques we might devise?
Unfortunate Coincidences: The archetypal example of this phenomenon is the famous optical illusion in which a person stands on a distant hill while a friend stands near the camera, aligning his hand so that it appears as if the distant friend is a small person standing in the palm of his hand. Such devices, while amusing in some contexts, could under other circumstances, such as air traffic control, prove quite dangerous.
Perhaps we should consider alternative methods for warning the user when such coincidences are occurring or for ensuring that the user has enough depth information via parallax to perceive this. When the user is immersed in a three-dimensional environment, he or she is interacting with objects at a distance.
Some are directly within arm's reach, others are not. In each case, there is a question of how to specify the arguments to a particular command: how does a user select and manipulate objects out of the reach envelope and at different distances from the user (that is, in the same room, the same city, across the country)? Will the procedures for distant objects be different from those used in selecting and manipulating nearby objects?
Some solutions to the selection problem involve ray casting or voice input, but this leaves open the question of specifying actions and parameters by means of direct manipulation. Some solutions emphasize a body-centric approach, which relies solely on the user's proprioceptive abilities to specify actions in space. Under this scheme, there is no action at a distance, only operations on objects in close proximity to the user.
This approach requires one of three solutions: translate the user's viewpoint to within arm's reach of the object(s) in question, scale the user so that everything of interest is within arm's reach, or scale the entire environment so that everything is within arm's reach. The first solution has several drawbacks. First, by moving the user over significant distances, problems in orientation could result. Next, moving objects quickly over great distances can be difficult (moving an object from Los Angeles to New York would require that the user fly this distance or that the user have a point-and-click, put-me-there interface with a global map).
Finally, moving close to an object can destroy the spatial context in which that move operation is taking place. The second and third solutions are completely equivalent except when other participants or spectators are also in the environment. Perhaps the most basic interaction technique in any application is object selection. Object selection can be implicit, as happens with many direct manipulation techniques on the desktop.
It is interesting to note that most two-dimensional user interface designers use the phrase "highlight the selected object" to mean "draw a marker, such as selection handles" on the selected object. With VE systems, we have the ability to literally highlight the selected object.
Most examples thus far have used three-dimensional extensions of two-dimensional highlighting techniques, rather than simply doing what the term implies: applying special lighting to the selected object. The following list offers some potentially useful selection techniques for use in three-dimensional computer-generated environments. Pointing and ray casting: this allows selection of objects in clear view, but not those inside or behind other objects.
This is analogous to "swipe select" in traditional GUIs. Selections can be made on the picture plane with a rectangle or in an arbitrary space with a volume by "lassoing." Carrying this idea over to three dimensions requires a three-dimensional input device and perhaps a volume selector instead of a two-dimensional lasso. Voice input for selection techniques is particularly important in three-dimensional environments.
The question of how to manage naming is extremely important and difficult. It forms a subset of the more general problem of naming objects by generalized attributes. Naming attributes: specifying a selection set by a common attribute or set of attributes ("all red chairs with arms") is a technique that should be exploited.
Since some attributes are spatial in nature, it is easy to see how these might be specified with a gesture as well as with voice, offering a fluid and powerful multimodal selection technique: all red chairs, shorter than this [user gestures with two hands] in that room [user looks over shoulder into adjoining room]. For more complex attribute specification, one can imagine attribute editors and sophisticated three-dimensional widgets for specifying attribute values and ranges for the selection set.
Selection by example is another possibility: "select all of these [grabbing a chair]." It is important to provide the user with an opportunity to express "but not that one" as a qualification in any selection task. Of course, excluding objects is itself a selection task.
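To illustrate attribute-based selection together with the "but not that one" qualification, the following sketch filters a hypothetical object list by required attribute values and then removes explicitly excluded objects. The data model is an assumption made for the example.

```python
def select(objects, include, exclude=()):
    """Sketch of attribute-based selection: `include` gives required attribute
    values (e.g., color and type), and `exclude` lists specific objects to
    drop ("but not that one")."""
    matches = [o for o in objects
               if all(o.get(k) == v for k, v in include.items())]
    return [o for o in matches if o["id"] not in exclude]

chairs = [
    {"id": 1, "type": "chair", "color": "red",  "arms": True},
    {"id": 2, "type": "chair", "color": "red",  "arms": True},
    {"id": 3, "type": "chair", "color": "blue", "arms": True},
]
# "All red chairs with arms" ... "but not that one" (the user indicates id 2).
picked = select(chairs, {"type": "chair", "color": "red", "arms": True}, exclude={2})
print([o["id"] for o in picked])   # -> [1]
```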
An important aspect of the selection process is the provision of feedback to the user confirming the action that has been taken. This is a more difficult problem in three dimensions, where we are faced with the graphic arts question of how to depict a selected object so that it appears unambiguously selected from an arbitrary viewing angle, under any lighting circumstances, regardless of the rendering of the object. Another issue is that of extending the software to deal with two-handed input.
Although manipulations with two hands are most natural for many tasks, adding a second pointing device into the programming loop significantly complicates the programmer's model of interaction and object behavior and so has been rarely seen in two-dimensional systems other than research prototypes.
In three-dimensional immersive environments, however, two-handed input becomes even more important. If an interface is poorly designed, it can lull the user into thinking that options are available when in fact they are not. For example, current immersive three-dimensional systems often depict models of human hands in the scene when the user's hands are being tracked.
Given the many kinds of actions that human hands are capable of, depicting human hands at all times might suggest to users that they are free to perform any action they wish—yet many of these actions may exceed the capabilities of the current system.
One solution to this problem is to limit the operations that are possible with bare hands and to require tools for more sophisticated operations. A thoughtful design would depict tools that suggest their purpose, so that, like a carpenter with a toolbox, the user has an array of virtual tools with physical attributes that suggest certain uses.
Cutting tools might look like saws or knives, while attachment tools might look like staplers. This paradigm melds together issues of modality with voice, context, and command. Interaction techniques and dialogue design have been extremely important research foci in the development of effective two-dimensional interfaces. Until recently, the VE community has been occupied with getting any input to work, but it is now maturing to the point that finding common techniques across applications is appropriate.
These common techniques are points of leverage: by encapsulating them in reusable software components, we can hope to build VE tools similar to the WIMP (windows, icons, mouse, pointer) application builders that are now widely in use for two-dimensional interfaces. It should also be noted that progress made in three-dimensional systems should feed back into two-dimensional systems.
Visual scene navigation software provides the means for moving the user through the three-dimensional virtual world. There are many component parts to this software, including control device gesture interpretation (gesture message from the input subsystem to movement processing), virtual camera viewpoint and view volume control, and hierarchical data structures for polygon flow minimization to the graphics pipeline.
In navigation, all of these components act together in real time to produce the next frame in a continuous series of frames of coherent motion through the virtual world. The sections below provide a survey of currently developed navigation software and a discussion of special hierarchical data structures for polygon flow. Navigation is the problem of controlling the point and direction of view in the VE (Robinett and Holloway). Using conventional computer graphics techniques, navigation can be reduced to the problem of determining a position and orientation transformation matrix in homogeneous graphics coordinates for the rendering of an object.
This transformation matrix can be usefully decomposed into the transformation due to the user's head motion and the transformation due to motions over long distance travel in a virtual vehicle. There may also be several virtual vehicles concatenated together. The first layer of virtual world navigation is the most specific: the individual's viewpoint. One locally controls one's position and direction of view via a head tracking device, which provides the computer with the position and orientation of the user's head.
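This decomposition can be expressed as a simple composition of homogeneous transformation matrices, as in the sketch below, where a head-to-vehicle transform (from the head tracker) is concatenated with a vehicle-to-world transform (from the virtual vehicle). The numbers are illustrative, and a renderer would normally use the inverse of the resulting matrix as its viewing transform.

```python
def matmul(a, b):
    """Multiply two 4x4 matrices given as nested lists (row-major)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

# Viewing transform decomposed into vehicle motion and head motion.
# A real system would build these from tracker data and vehicle controls,
# and might chain several vehicle transforms; the values are illustrative.
vehicle_to_world = translation(100.0, 0.0, 0.0)   # vehicle far from the origin
head_to_vehicle = translation(0.0, 1.7, 0.0)      # head-tracker offset in the vehicle
head_to_world = matmul(vehicle_to_world, head_to_vehicle)

# The renderer typically uses the inverse of head_to_world as the view matrix.
print([row[3] for row in head_to_world])   # eye position column -> [100.0, 1.7, 0.0, 1]
```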
The next layer of navigation uses the metaphor of a virtual vehicle, which allows movement over distances in the VE greater than those distances allowed by the head-tracker alone. The position and orientation of the virtual vehicle can be controlled in a variety of ways. In simulation applications, the vehicle is controlled in the same way that an actual simulated vehicle would be controlled.
Examples that have been implemented include treadmills, bicycles, and joysticks for flight or vehicle simulators. For more abstract applications, there have been several experimental approaches to controlling the vehicle.
The most common is the point and fly technique, wherein the vehicle is controlled via a direct manipulation interface. The user points a three-dimensional position and orientation tracker in the desired direction of flight and commands the environment to fly the user vehicle in that direction.
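A minimal sketch of the point-and-fly technique is given below: each frame, the virtual vehicle is translated along the direction reported by the hand tracker, scaled by a chosen speed and the frame time. The direction, speed, and update rate shown are assumptions for illustration.

```python
import math

def fly_step(vehicle_pos, pointing_dir, speed, dt):
    """One frame of a point-and-fly interface: translate the virtual vehicle
    along the direction the hand tracker is pointing."""
    norm = math.sqrt(sum(c * c for c in pointing_dir)) or 1.0
    return tuple(p + (c / norm) * speed * dt
                 for p, c in zip(vehicle_pos, pointing_dir))

pos = (0.0, 0.0, 0.0)
for _ in range(30):                      # ~half a second at 60 Hz
    pos = fly_step(pos, (0.0, 0.0, -1.0), speed=5.0, dt=1.0 / 60.0)
print(pos)   # drifted ~2.5 m along -z
```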
Other methods of controlling the vehicle are based on the observation that in a VE one need not get from here to there through the intervening space. Teleportation is one obvious example, which often has the user specify a desired destination and then "teleports" the user there. Solutions have included portals that have fixed entry and exit locations, explicit specification of destination through numerical or label input, and the use of small three-dimensional maps of the environment to point at the desired destination.
Another method of controlling the vehicle is dynamic scaling, wherein the entire environment is scaled down so that the user can reach the desired destination, and then scaled up again around the destination indicated by the user. All of these methods have disadvantages, including difficulty of control and orientation problems. There is a hierarchy of objects in the VE that may behave differently during navigation.
Some objects are fixed in the environment and are acted on by both the user and the vehicle. Other objects, usually associated with the virtual vehicle, move with the vehicle and are acted on only by the user. Still other objects, such as data displays, are always desired within the user's field of view and are not acted on by either the user or the vehicle. These objects have been called variously world stable, vehicle stable, and head stable (Fisher et al.).
Although most of the fundamental mathematics of navigation software are known, experimentation remains to be done. Hierarchical data structures for the minimization of polygon flow to the graphics pipeline are the back end of visual scene navigation.
When we have generated a matrix representing the chosen view, we then need to send the scene description transformed by that matrix to the visual display. One key method to get the visual scene updated in real time at interactive update rates is to minimize the total number of polygons sent to the graphics pipeline.
Hierarchical data structures for polygon flow minimization are probably the least well understood aspect of graphics development. It is a very common misconception that this problem can simply be left to ever-faster graphics hardware. Visual reality has been said to consist of 80 million polygons per picture (Catmull et al.), far more than near-term hardware can render at interactive rates. The alternatives are to live with worlds of reduced complexity or to off-load some of the graphics work done in the pipeline onto the multiple CPUs of workstations.
All polygon reduction must be accomplished in less time than it takes just to send the polygons through the pipeline. The difficulty of polygon flow minimization depends on the composition of the virtual world. This problem has historically been approached on an application-specific basis, and there is as yet no general solution.
Current solutions usually involve partitioning the polygon-defined world into volumes that can readily be checked for visibility from the current viewpoint. There are many partitioning schemes, some of which work only if the world description does not change dynamically (Airey et al.). A second component of the polygon flow minimization effort is the pixel coverage of the object modeled. Once an object has been determined to be in view, the secondary question is how many pixels that object will cover.
If the number of pixels covered by an object is small, then a reduced-polygon-count (low-resolution) version of that object can be rendered. This results in additional software complexity, again software that must run in real time. Because the level-of-detail models are precomputed, the issue is greater dataset size rather than level selection, which is nearly trivial. The current speed of z-buffers alone means we must carefully limit the polygons sent through the graphics pipeline.
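The level selection itself can indeed be nearly trivial, as the following sketch suggests: it estimates the pixel coverage of an object's bounding sphere from its distance and angular size and picks one of several precomputed models. The thresholds and the perspective approximation are illustrative, not taken from any particular system.

```python
import math

def choose_level_of_detail(distance_m, radius_m, fov_deg=60.0, screen_px=1024):
    """Pick a precomputed level of detail from an estimate of how many
    pixels the object covers on screen."""
    # Approximate projected diameter in pixels for a bounding sphere.
    pixels_per_radian = screen_px / math.radians(fov_deg)
    projected_px = (2.0 * radius_m / max(distance_m, 1e-6)) * pixels_per_radian
    if projected_px < 5:
        return "billboard"      # a textured quad, a few polygons
    elif projected_px < 50:
        return "low"            # reduced polygon count model
    elif projected_px < 300:
        return "medium"
    return "high"               # full-resolution model

print(choose_level_of_detail(distance_m=200.0, radius_m=1.0))   # -> 'low'
print(choose_level_of_detail(distance_m=3.0, radius_m=1.0))     # -> 'high'
```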
Other techniques that use the CPUs to minimize polygon flow to the pipeline are known for specific applications, but those techniques do not solve the problem in general. In a classic paper, Clark presents a general approach for solving the polygon flow minimization problem by stressing the construction of a hierarchical data structure for the virtual world. The approach is to envision a world database for which a bounding volume is known for each drawn object.
The bounding volumes are organized hierarchically, in a tree that is used to rapidly discard large numbers of polygons.
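A minimal sketch of such a traversal is given below: each node of the hierarchy carries a bounding sphere, and any subtree whose sphere lies entirely outside the viewing volume is discarded without examining its polygons. The node layout is hypothetical, and a single clipping plane stands in for a full six-plane frustum.

```python
def cull(node, frustum_planes, visible):
    """Traverse a bounding-volume hierarchy, discarding whole subtrees whose
    bounding sphere lies outside the viewing volume."""
    cx, cy, cz = node["center"]
    for nx, ny, nz, d in frustum_planes:          # planes face inward
        if nx * cx + ny * cy + nz * cz + d < -node["radius"]:
            return                                 # entirely outside: prune subtree
    visible.extend(node.get("polygons", []))
    for child in node.get("children", []):
        cull(child, frustum_planes, visible)

# A single plane (x >= 0) stands in for a full six-plane frustum here.
planes = [(1.0, 0.0, 0.0, 0.0)]
world = {"center": (0, 0, 0), "radius": 100, "children": [
    {"center": (10, 0, 0), "radius": 1, "polygons": ["visible geometry"]},
    {"center": (-50, 0, 0), "radius": 1, "polygons": ["culled geometry"]},
]}
seen = []
cull(world, planes, seen)
print(seen)   # -> ['visible geometry']
```

In a full system, the same traversal would also drive level-of-detail selection for the nodes that survive culling.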