Many guidelines exist in the categories listed in Table 2. Broadly, we might divide the guidelines into two categories: (a) domain specific (e.g., for e-commerce applications) and (b) general.

Note that these guidelines can be relevant and common across the different categories shown in Table 2. For example, guidelines for an e-commerce application might also address general HCI design issues such as display layout, how to solicit input, how to promote vendor-specific styles, and how to target a particular user group. Even though guidelines are much more specific than the principles, it is still not very clear how to reflect them in the HCI design in a concrete and consistent manner.

In this regard, Tidwell has compiled many user interface (UI) design patterns in the form of guidelines [2]. Each guideline illustrates specific UI examples with exact descriptions of what it is, what it does, and why and when it should be used. Such design patterns are of great help during actual HCI design.

It is not possible to list and explain all the guidelines that exist for all the various areas, so here we present a few examples. One recurring problem concerns organizing and allotting relevant information (both the content and the UI elements) in one visible screen or scrollable page.

Generally, the display layout should be organized according to the information content. Thus, structuring the information and making it easy to move or navigate among the various items becomes a very important issue for high usability.

Structuring information content and controlling the interface for the purpose of HCI is closely related to the principle of understanding the task (Section 1). By understanding the task, we identify the sequence of subtasks and actions, and each task will be associated with information, either as input or as the resulting output. The task structure, action sequence, and associated content organization will dictate the interaction flow and its fluidity. In this way, only the right amount of information or control will be available at the right time.

Aside from such internal structure, it is also important to provide external means and the right UI for fast and easy navigation, that is, enabling the user to find the needed action or content quickly. Here, we introduce a summarized guideline for the design of an easily navigated interface from Leavitt and Shneiderman [3] (Figure 2., reproduced with permission).

A navigation page is used primarily to help users locate and link to destination pages. This means that, when possible, designers should keep navigation-only pages short. To facilitate navigation, designers should differentiate and group navigation elements and use appropriate menu types. In well-designed sites, users do not get trapped in dead-end pages.

As a more concrete example, we illustrate two design patterns from Tidwell [2]. Note that, as design patterns, very specific uses of UI elements are suggested to address the issue concerned (Figures 2.). What: Put two side-by-side panels on the interface. In the first, show a set of items that the user can select at will; in the other, show the content of the selected item.

Use when: You want the user to see the overall structure of the list, and the display you work with is physically large enough to show two separate panels at once (adapted from Tidwell, J.). Use when: Your application consists of many pages or panels of content for the user to navigate through, for example on a device with tight space restrictions. Your users [also] may not be habitual computer users; having many application windows open at once may confuse them.

Modern interfaces employ graphical user interface (GUI) elements (from Smith, S.). It is up to the UI designer to compose these input methods for the best performance with respect to the design constraints. Consistency of data-entry transactions: similar sequences of actions should be used under all conditions (similar delimiters, abbreviations, etc.). Minimal input actions by the user: fewer input actions mean greater operator productivity.

Prefer selection from a list where possible, and avoid switching between the keyboard and the mouse. Use default values. Compatibility of data entry with data display: the format of data-entry information should be linked closely to the format of displayed information. Clear and effective labeling of buttons and data-entry fields: use consistent labeling, distinguish between required and optional data entry, and place labels close to the data-entry field.

Match and place the sequence of data-entry and selection fields in a natural scanning and hand-movement direction; an unnatural placement is likely to produce frequent erroneous input. Design of forms and dialog boxes: most visual-display layout guidelines also apply to the design of forms and dialog boxes. Situations become more complicated when other forms of input are also used, such as touch, gesture, three-dimensional (3-D) selection, and voice.
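To make the required-versus-optional labeling guideline concrete, here is a minimal sketch of a form handler; the field names and validation policy are hypothetical, not from any cited guideline:

```python
def validate_entry(values, required=("name", "email"), optional=("phone",)):
    """Return the required fields still missing from `values`. Optional
    fields never block submission. Field names are purely illustrative."""
    return [field for field in required
            if not str(values.get(field, "")).strip()]

print(validate_entry({"name": "Ada"}))                    # 'email' still required
print(validate_entry({"name": "Ada", "email": "a@b.c"}))  # nothing missing
```

Keeping the distinction explicit in code mirrors the guideline of keeping it explicit in the UI labels.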

There are separate guidelines for incorporating such input modalities. One prominent set is the Web Content Accessibility Guidelines (WCAG), which explains how to make web content more accessible to people with disabilities. Web content generally refers to the information in a web page or web application, including text, images, forms, sounds, and such (Figure 2.). The following is a summary of the guidelines:

1. Perceivable
A. Provide text alternatives for nontext content.
B. Provide captions and other alternatives for multimedia.
C. Create content that can be presented in different ways, including by assistive technologies, without losing meaning.
D. Make it easier for users to see and hear content.

For example, the colors of the background and foreground text can be changed.

2. Operable
A. Make all functionality available from a keyboard.
B. Give users enough time to read and use content.
C. Do not use content that causes seizures.
D. Help users navigate and find content.

3. Understandable
A. Make text readable and understandable.
B. Make content appear and operate in predictable ways.
C. Help users avoid and correct mistakes.

4. Robust
A. Maximize compatibility with current and future user tools.

Many conventional principles equally apply to mobile networked devices (Figure 2.). Mobile-specific guidelines include the following:
1. Provide fast status information, especially with regard to network connection and services.
2. Minimize typing and leverage the varied input hardware.
3. Provide large hit targets for easy and correct selection and manipulation.
4. Enable shortcuts.
5. Keep the user informed of his or her actions.

Another guideline concerns the limited and differing screen sizes across a family of handheld devices: make sure that your app consistently provides a balanced and aesthetically pleasing layout by adjusting its content to varying screen sizes and orientations. Panels are a great way for your app to achieve this. They allow you to combine multiple views into one compound view when a lot of horizontal screen real estate is available and to split them up when less space is available.
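The pane-combining advice can be sketched as a simple layout decision; the pixel threshold below is an assumed, illustrative value, not a published figure:

```python
def choose_layout(screen_width_px, min_pane_px=400):
    """Combine panes side by side when there is enough horizontal room,
    otherwise split them into separately navigable views.
    The 400 px minimum pane width is an illustrative assumption."""
    if screen_width_px >= 2 * min_pane_px:
        return "combined"  # compound view: list and detail together
    return "split"         # one pane at a time, navigate between them

print(choose_layout(1280))  # wide landscape screen
print(choose_layout(480))   # narrow phone screen
```

Real toolkits express the same decision declaratively (e.g., via size classes or breakpoints), but the underlying rule is this width comparison.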

For instance, Apple has published a design guideline document [8] that details how application icons should be designed and stylized:
1. Investigate how your choice of image and color might be interpreted by people from different cultures.
2. Create different sizes of your app icon for different devices.
When iOS displays the app icon on the home screen of a device, it automatically adds the following visual effects: (a) rounded corners, (b) drop shadow, and (c) reflective shine. Such guidelines promote organizational styling and identity and, ultimately, consistency in user interfaces.

Franklin Gothic is used only for text above a certain point size; it is used for headers and should never be used for body text. Tahoma should be used at 8-, 9-, or 10-point sizes. Trebuchet MS (bold, 10 point) is used only for the title bars of windows (Figure 2.).

Similar to visual icons, which must capture the underlying meaning of whatever they are trying to represent and draw attention for easy recognition, earcons should be designed to be intuitive. Blattner et al. suggest three types of earcons, namely, (a) symbolic, (b) nomic, and (c) metaphoric. Symbolic earcons rely on social convention, such as applause for approval; nomic ones are physical, such as a door slam; and metaphoric ones are based on capturing similarities, such as a falling pitch for a falling object [10].

We take a more in-depth look at the aural modality in Chapter 3. The categories include design guidelines for manual control, spoken input and output, visual and auditory display, navigation guidance, and cell phone considerations, to name just a few (Figure 2.). The use of send to make a connection and power to turn a phone on and off are notable inconsistencies.

Voice dialog: Verbal commands and button labels should use the same terms. Commands of interest include dial, store, recall, and clear. This is an instance of the consistency principle. Manual dialing: The store and recall buttons, used for similar functions, should be adjacent to each other.

This is an instance of the grouping principle (source: Green, P.). The following is a guideline under the checkout-process section, concerning the steps of that subtask: checkout should start at the shopping cart, followed by the gift options or shipping method, the shipping address, the billing address, payment information, order review, and finally an order summary.
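A minimal sketch of that linear checkout sequence as an ordered flow (the step names paraphrase the guideline; the helper function is illustrative):

```python
# The linear checkout sequence described above, as an ordered list of steps.
CHECKOUT_STEPS = [
    "shopping cart",
    "gift options / shipping method",
    "shipping address",
    "billing address",
    "payment information",
    "order review",
    "order summary",
]

def next_step(current):
    """Return the step after `current`, or None at the end of the flow."""
    i = CHECKOUT_STEPS.index(current)
    return CHECKOUT_STEPS[i + 1] if i + 1 < len(CHECKOUT_STEPS) else None

print(next_step("shopping cart"))
print(next_step("order summary"))  # None: no step follows the summary
```

Encoding the flow as a single list makes the linearity of the process explicit: there is exactly one successor per step and no branching.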

The checkout process is thus linear. Still, many guidelines remain at quite a high level, similar to the HCI principles, and leave the developer wondering how to actually apply them in practice. Another difficulty is that there are just too many different aspects to consider, especially for a large-scale system. Sometimes the guidelines can even be in conflict with each other, which requires prioritizing on the part of the designer. For instance, it can be difficult to give contrast to an item to highlight its importance when one is restricted to using certain colors.

Another example might be an attempt to introduce a new interface technology. While the new interface may have been proven effective in the laboratory, it may still require significant familiarization and training on the part of the user. It is often the case that external constraints, such as monetary and human resources, restrict sound HCI practice. One must realize that all designs involve compromises and tradeoffs. Experienced designers understand the ultimate benefits and costs of practicing sound HCI design.

In Chapter 3, we will study cognitive and ergonomic knowledge (more theoretical), which, along with the principles and guidelines we have learned so far (more experiential), will be applied to HCI design.

ISO.
Tidwell, Jennifer. Designing interfaces.
Leavitt, Michael O., and Ben Shneiderman. Research-based web design and usability guidelines.
Smith, Sidney L. Guidelines for designing user interface software. Bedford, MA: Mitre Corporation.
Reid, and Gregg Vanderheiden, eds. Web content accessibility guidelines (WCAG) 2.
Guidelines for mobile interface design.
Multi-pane layouts.
Windows XP visual guidelines. Microsoft Corporation.
Blattner, Meera M., Denise Sumikawa, and Robert M. Greenberg. Earcons and icons: Their structure and common design principles. Human–Computer Interaction 4(1): 11–44.
Green, P. Suggested human factors design guidelines for driver information systems.
Kalsbeek, Maarten. Interface and interaction design patterns for e-commerce checkouts.

We will look at the computer aspects of HCI design in the second part of this book.

In this chapter, we take a brief look at some of the basic human factors that constrain the extent of this interaction. In Chapters 1 and 2, we studied two bodies of knowledge for HCI design, namely (a) high-level and abstract principles and (b) specific HCI guidelines. To practice user-centered design by following these principles and guidelines, the interface requirements must often be investigated, solicited, derived, and understood directly from the target users through focus interviews and surveys.

However, it is also possible to obtain a fairly good understanding of the target user from knowledge of human factors. Human-factors knowledge helps us design HCI in several ways: in particular, it lets us evaluate interaction models and interface implementations and explain or predict their performance and usability.

For instance, a goal of a word-processing session might be to produce a nice-looking document as easily as possible. This problem-solving process epitomizes the overall information-processing model. As a lower-level part of the information-processing chain (more ergonomic), we take a closer look at these aspects and how they relate to HCI in Section 3. A hierarchical plan (Figure 3.) is then formulated: a number of actions or subtasks are identified in the hope of solving the individual subgoals, considering the external situation.

By enacting this series of subtasks to solve the subgoals, the top goal is eventually accomplished. Note that enacting the subtasks does not guarantee their successful completion. Thus, the whole process is repeated by observing the resulting situation and revising and restoring the plan.

Note that a specific interface may be chosen to accomplish the subtasks at the bottom, and that in a general hierarchical task model certain subtasks need to be applied in series while some may need to be applied concurrently. One can readily appreciate this from the simple example in Figure 3. The interaction model must represent as much as possible what the user has in mind, especially what the user expects must be done (the mental model) in order to accomplish the overall task.

The interface selection should be based on ergonomics, user preference, and other requirements or constraints. Finally, the subtask structure can lend itself to the menu structure, and the actions and the objects to which the actions apply can serve as the basis for an object-class diagram for an object-oriented interactive software implementation. In the remainder of this section and in Section 3., we continue with the cognitive aspects; ergonomic aspects are discussed in Section 3. Such mismatches would be the result of an interface based on an ill-modeled interaction.

Memory capacity also greatly influences interactive performance. As shown in Figure 3., short-term memory is also sometimes known as working memory, in the sense that it contains changing memory elements meaningful for the task at hand, or chunks. Humans are known to remember only about eight chunks, and only for a very short amount of time [2].

Imagine an interface with a large number of options or menu items. The user would have to rescan the available options a number of times to make the final selection. In an online purchasing system, the user might not be able to remember all of the relevant information such as items purchased, delivery options, credit card chosen, billing address, usage of discount cards, etc.

Retrieving information from long-term memory is a difficult and relatively time-consuming task. Therefore, an interactive system should minimize the need for the user to recall information from long-term memory. Memory-related performance issues are also important in multitasking, and many modern computing settings offer multitasking environments.

This process can bring about overall degradation in task performance in many respects [3]. Based on these figures and a task-sequence model, one might be able to quantitatively estimate the time taken to complete a given task and, therefore, make an evaluation with regard to the original performance requirements.

Tables 3. (adapted from Boff, Kauffman, et al.) list nominal execution times for the elementary operators. Table 3. compares two methods for the same task, with each step mapped to a Keystroke-Level Model operator:

Method 1 (menu selection):
1. Point to file icon (P)
2. Click mouse button (BB)
3. Point to file menu (P)
4. Press and hold mouse button (B)
...
6. Release mouse button (B)

Method 2 (keyboard shortcut):
1. Point to file icon (P)
2. Click mouse button (BB)
3. Move hand to keyboard (H)
4. Hit command key: command-T (KK)
5. Move hand back to mouse (H)

The GOMS evaluation methodology starts with the same hierarchical task modeling described in Section 3. Once a sequence of subtasks is derived, one might map a specific operator from the table to each subtask.

With the pre-established performance measures (Table 3.), the total time for the task can then be estimated, and different operator mappings can be tried and compared in terms of their predicted performance. Even though this model was created nearly 30 years ago, the figures are still amazingly valid. GOMS models for other computing environments have been proposed as well [8]. GOMS is quite simple in that it can only evaluate task performance, while there are many other criteria by which an HCI design should be evaluated.
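As a sketch of how such a prediction works, the commonly cited Keystroke-Level Model operator times can simply be summed over an operator sequence. The times below are the widely quoted averages from Card, Moran, and Newell, and the example sequence is hypothetical:

```python
# Commonly cited Keystroke-Level Model operator times in seconds
# (Card, Moran, and Newell); exact values vary with user skill.
KLM_SECONDS = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # press or release the mouse button
    "H": 0.40,  # home the hand between mouse and keyboard
    "M": 1.35,  # mental preparation
}

def predict_seconds(operator_sequence):
    """Sum per-operator times for a sequence such as 'PBBHKKH'."""
    return sum(KLM_SECONDS[op] for op in operator_sequence)

# A hypothetical keyboard-shortcut method: point, click (press + release),
# home to the keyboard, two keystrokes, home back to the mouse.
print(round(predict_seconds("PBBHKKH"), 2))
```

Comparing the totals of two operator sequences for the same task is exactly the kind of comparative evaluation described above.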

Obviously, some inaccuracies can be introduced in the use of the mental operators during the interaction modeling process. We now shift our focus to raw information processing. First we look at the input side (i.e., human sensation). Humans are known to have at least five senses. Among them, those that are relevant to HCI (at least for now) are the visual, aural, haptic (force feedback), and tactile modalities.

Taking external stimulation or raw sensory information (sometimes computer generated) and then processing it for perception is the first part of any human–computer interaction. Another aspect of sensation and perception is attention, that is, how to make the user selectively (consciously or otherwise) tune in to a particular part of the information or stimulation. Note that attention must occur and be modulated within awareness of the larger task(s).

While we might tune in to certain important information, we often still need an understanding, albeit approximate, of the other activities or concurrent tasks, as in multitasking or parallel processing of information. In the following discussion, we examine the processes of sensation and perception in the four major modalities and the associated human capabilities.

Just as cognitive science was useful in interaction and task modeling, this knowledge is essential in sound interface selection and design. As already mentioned, the parameters of the visual interface design and display system will have to conform to the capacity and characteristics of the human visual system.

In this section, we review some of the important properties of the human visual system and their implications for interface design.

First we take a look at a typical visual interaction situation, as shown in Figure 3. Viewing distance (the dotted line in Figure 3.) varies by user and situation; however, one might be able to define a nominal and typical viewing distance for a given task or operating environment. The shaded area illustrates the horizontal field of view (drawn much smaller than the actual for illustration purposes), while the dashed line indicates the field of view offered by the display. The display offers different fields of view depending on the viewing distance. The oval shape in the display represents the approximate area for which high details are perceived through the corresponding foveal area in the user's eyes.

Visual acuity, synonymous with the power of sight, differs across people and age groups. Note that the display FOV is more important than the absolute size of the display: a distant large display can have the same display FOV as a close small display, even though the two may give different viewing experiences.
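The display-FOV relationship can be sketched with basic trigonometry; the units are arbitrary as long as the width and the viewing distance use the same ones:

```python
import math

def display_fov_deg(display_width, viewing_distance):
    """Horizontal field of view (degrees) subtended by a flat display,
    assuming the viewer faces its center. Width and distance share units."""
    return math.degrees(2 * math.atan((display_width / 2) / viewing_distance))

# A large display far away subtends the same FOV as a small one up close:
print(round(display_fov_deg(120, 120), 1))  # 120 cm wide at 120 cm
print(round(display_fov_deg(40, 40), 1))    # 40 cm wide at 40 cm: same angle
```

This is why FOV, rather than physical size alone, is the quantity to match against the task requirements.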

If possible, it is desirable to choose the most economical display, not necessarily the biggest or the one with the highest resolution, with respect to the requirements of the task and the typical user characteristics.

The oval region in Figure 3. corresponds to the fovea, the central area of the retina where the cones are densely packed and fine details are perceived. On the other hand, the rods are distributed mainly in the periphery of the retina and are responsible for motion detection and less detailed peripheral vision.

While details may not be sensed there, the rods contribute to our awareness of the surrounding environment. Unlike human perception, most displays have uniform resolution. However, if the object details can be adjusted depending on where the user is looking, or based on what the user may be interested in (Figure 3.), display resources can be used more economically.

We may assess the utility of a large, very-high-resolution display system, such as the one shown in Figure 3. (from Ni, T.): is it really worth the cost? Consequently, it can be argued that it is more economical to use a smaller high-resolution display placed at a close distance.

Interestingly, Microsoft Research recently introduced a display system called the IllumiRoom [9], in which a high-resolution display is used in the middle, and a wide low-resolution projection serving as a peripheral display provides high immersion (Figure 3.).

A color can be specified by the composition of the amounts contributed by the three fundamental colors, or alternatively by hue (the dominant wavelength), saturation (the energy of the dominant wavelength relative to the rest of the light), and brightness or value (the total amount of light energy) (Figure 3.).
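For illustration, Python's standard colorsys module converts between an RGB specification and the hue/saturation/value (brightness) specification just described; the sample color is arbitrary:

```python
import colorsys

# Convert an RGB color (components in 0..1) to hue, saturation, and value;
# "value" here corresponds to the brightness dimension described above.
h, s, v = colorsys.rgb_to_hsv(0.2, 0.4, 0.8)
print(round(h, 3), round(s, 3), round(v, 3))
```

The same triple of numbers thus describes one color in two different coordinate systems, which is why UI toolkits let designers pick colors in either form.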

Contrast in brightness is measured in terms of the difference or ratio of the light energies between two or more objects, and a sufficiently high ratio of foreground to background brightness is recommended. Color contrast is defined in terms of differences or ratios in the dimensions of hue and saturation. It is said that brightness contrast is more effective for detail perception than color contrast (Figure 3., which illustrates hue as the dominant wavelength, saturation in terms of the energy of the dominant wavelength relative to that of white light, and brightness as the total light energy). Before all these low-level features are finally assembled into conscious perception, they are first processed pre-attentively.
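One widely used, concrete definition of brightness contrast is the contrast ratio from WCAG 2.x. The sketch below follows that published formula; note this particular formula is an example and not necessarily the one the text's recommendation refers to:

```python
def relative_luminance(r, g, b):
    """Relative luminance of an sRGB color (channels in 0..1),
    linearized per the WCAG 2.x definition."""
    def lin(c):
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def contrast_ratio(fg, bg):
    """WCAG contrast ratio (L1 + 0.05) / (L2 + 0.05), lighter color on top."""
    l1, l2 = sorted((relative_luminance(*fg), relative_luminance(*bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (1, 1, 1)), 1))  # black on white: 21.0
```

Ratios like this give designers a testable target instead of a qualitative "make it contrasty" instruction.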

(The figure is from Hemer, M.) Pre-attentive features are composite, primitive, and intermediate visual elements that are automatically recognized before entering our consciousness, typically within 10 ms after entering the sensory system [12]. These features may rely on relative differences in color, size, shape, orientation, depth, texture, motion, etc.

At a more conscious level, humans may universally recognize certain high-level complex geometric shapes and properties as a whole and understand the underlying concepts (from Ware, C.). The actual form of sound feedback can be roughly divided into three types, including (a) simple beep-like sounds and (b) short symbolic sound bytes known as earcons. As we did for the visual modality, we will first go over some important parameters of the human aural capacity and the corresponding aural display parameters. It is instructive to know the decibel levels of different sounds as a guideline in setting the nominal volume for the sound feedback (Table 3.).

The dominant frequency components determine various characteristics of sound, such as the pitch. Humans can hear sound waves with frequencies between about 20 and 20,000 Hz [13]. Phase differences occur, for example, because our left and right ears may be at slightly different distances from the sound source; as such, phase differences are also known to contribute to the perception of spatialized sound, such as stereo.

When using aural feedback, it is important for the designer to set these fundamental parameters properly. A general recommendation is that the sound signal should be above roughly 50 Hz and composed of at least four prominent harmonic frequency components (frequencies that are integer multiples of one another) [14]. Aural feedback is most commonly used in intermittent alarms. However, overly loud sounds should be avoided. Instead, other techniques can be used to attract attention and convey urgency, such as repetition, variations in frequency and volume, and gradual aural contrast to the background ambient sound.
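The harmonic-components recommendation can be sketched as a simple additive synthesizer; the fundamental frequency, sample rate, and duration are illustrative choices, not values from the guideline:

```python
import math

def harmonic_tone(f0=440.0, n_harmonics=4, sample_rate=8000, duration=0.25):
    """Generate samples of a tone built from a fundamental plus
    integer-multiple harmonics, normalized to the -1..1 range."""
    n = int(sample_rate * duration)
    samples = []
    for i in range(n):
        t = i / sample_rate
        # equal-amplitude partials at f0, 2*f0, 3*f0, 4*f0
        s = sum(math.sin(2 * math.pi * f0 * (k + 1) * t)
                for k in range(n_harmonics))
        samples.append(s / n_harmonics)  # keep amplitude within [-1, 1]
    return samples

tone = harmonic_tone()
print(len(tone))
```

Writing these samples to an audio buffer (or a WAV file) would yield a richer, more recognizable earcon than a single pure sine tone.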

First, sound is effectively omnidirectional. However, as already mentioned, it can also be a nuisance as a task interrupter. Making use of contrast is possible with sound as well: for instance, auditory feedback would require roughly a 15 dB difference from the ambient noise to be heard effectively. Differentiated frequency components can be used to convey certain information. Continuous sound is somewhat more subject to becoming habituated.

In general, only one aural aspect can be interpreted at a time, although humans do possess an ability to tune in to a particular part of a sound mixture. As for using sound actively as a means of input to interactive systems, the two major methods are (a) keyword recognition and (b) natural-language understanding.

Isolated-word-recognition technology for enacting simple commands has become very robust lately. In most cases, however, it still requires speaker-specific training or a relatively quiet background. As such, many voice input systems operate in an explicit mode or state, and the need to switch to the voice-command mode is still quite a nuisance to the ordinary user.

Thus, voice input is much more effective in situations where, for example, the hands are totally occupied, or where modes are not necessary because there is very little background noise or no mixture of conversation with the voice commands. Machine understanding of long sentences and natural-language-based commands is still computationally difficult and demanding. With the spread of smart-media client devices that may be computationally light yet equipped with a slew of sensors, such cloud-based natural-language interaction combined with intelligence will revolutionize the way we interact with computers in the near future.

(Figure 3. illustrates this: the smart-media client devices send the captured sentence, in voice or text, and a correct and intelligent response is given back in real time.) To be precise, the term haptic refers to both the sensation of force feedback and that of touch (tactile).

For convenience, we will use the term haptic to refer to the modality for sensing force and kinesthetic feedback through our joints and muscles (even though any force feedback practically requires contact through the skin), and the term tactile for sensing different types of touch (e.g., pressure or vibration). The fingertip is one of the most sensitive areas and is frequently used for HCI purposes. A vibration frequency of a few hundred hertz is said to be optimal for comfortable perception [16]; for a fingertip, this corresponds to a sub-millimeter displacement amplitude.

As mentioned previously, there are many types of tactile stimulation, such as texture, pressure, vibration, and even temperature. For the purposes of HCI, the following parameters are deemed important, and the same applies to the display system providing the tactile feedback.

Physical tactile sensation is felt by a combination of skin cells and nerves tuned for particular types of stimulation (e.g., pressure, vibration, or temperature). While there are many research prototypes and commercial tactile display devices, the most practical is the vibration motor, mostly applied in a single-actuator configuration. Most vibration motors do not offer separate controllability of amplitude and frequency.

In addition, most vibrators are not in direct contact with the stimulation target (e.g., the skin); thus, additional care is needed to set the right parameter values for the best effect under the circumstances.

Another way to realize a vibratory tactile display is to use thin and light piezoelectric materials that vibrate according to the amount of electric potential supplied. Due to their flat form factor, such materials can be embedded, for instance, into flat touch screens. Sometimes sound speakers can be used to generate indirect vibratory feedback with high controllability, responding to wide ranges of amplitude and frequency signals (Figure 3.).

(Figure 3., right, shows a tactile array with multiple actuators.) Note that haptic devices are both input and output devices at the same time. We briefly discuss this issue of haptic input in the next section in the context of human body ergonomics.

The simplest form of haptic device is a simple electromagnetic latch, often used in game controllers. It generates a sudden inertial movement and slowly repositions itself for repeated usage. Normally, the user holds the device, and inertial forces are delivered in a direction relative to the game controller. Such a device is not appropriate for fast-occurring interactions. More complicated haptic devices take the form of a robotic kinematic chain, either fixed on the ground or worn on the body.

As a kine- matic chain, such devices offer higher degrees of freedom and finer force control Figure 3. For the grounded device, the user interacts with the tip of the robotic chain through which a force feedback is delivered. The sensors in the joints of the device make it possible to track the tip interaction point within the three-dimensional 3-D operating space. Using a similar control structure, body-worn devices transfer force with its mechanism directly attached to the body.

Important haptic display parameters are (a) the degrees of freedom (the number of directions in which force or torque can be displayed), (b) the force range, and (c) stability.

Stability is in fact a by-product of a proper sampling rate: the servo loop senses the current amount of force at the interaction point, determines whether the target value has been reached, and reinforces it, a process that repeats until a target equilibrium force is reached at the interaction point. The ideal sampling rate is about 1000 Hz (1 kHz), and when the rate falls under a certain value, the robotic mechanism exhibits instability (e.g., oscillation). The dilemma is that providing a high sampling rate requires a heavy computation load, not only in updating the output force, but also in the physical simulation.
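A toy simulation illustrates how such a fast servo loop with spring-damper control settles to a stable equilibrium; all constants here are illustrative, not taken from any particular device:

```python
# A toy 1 kHz impedance-control loop: a virtual spring-damper pushes the
# interaction point back toward a virtual wall surface at x = 0.
dt = 0.001                 # 1 ms update period (i.e., a 1 kHz servo rate)
k, b, m = 100.0, 2.0, 0.1  # stiffness (N/m), damping (N*s/m), mass (kg)
x, v = 0.02, 0.0           # start 2 cm inside the wall, at rest

for _ in range(2000):       # simulate 2 s of servo updates
    force = -k * x - b * v  # spring-damper restoring force
    v += (force / m) * dt   # semi-implicit Euler: update velocity first...
    x += v * dt             # ...then position, which keeps the loop stable

print(abs(x) < 1e-3)  # the interaction point has settled at the wall
```

Lengthening dt (i.e., lowering the servo rate) while keeping the stiffness high is exactly what drives such a loop toward the oscillatory instability mentioned above.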

Such haptic devices tend to be heavy, clunky, potentially dangerous, and bulky. The cost is very high, often with only a small operating range, a limited force range, or limited degrees of freedom. In many cases, simpler devices, such as one-directional latches or vibrators, are used in combination with visual and aural feedback to enrich the user experience.

For various reasons, multimodal interfaces are gaining popularity with the ubiquity of multimedia devices. By employing more than one modality, interfaces can become more effective in a number of ways, depending on how they are configured [22]. Here are some representative examples. For instance, the ring of a phone call can be made simultaneously aural and tactile to strengthen the pick-up probability. For multimodal interfaces to be effective, each feedback channel must be properly synchronized and consistent in its representation.

The representation must be coordinated between the two: in the previous example, if there is one highlighting, then there should also be one corresponding beep. When the modalities are inconsistent, the interpretation of the feedback can be confusing, or only the dominant modality will be recognized.

In this section, we briefly look at ergonomics aspects. To be precise, ergonomics is a discipline focused on making products and interfaces comfortable and efficient.

Thus, broadly speaking, it encompasses mental and perceptual issues, although in this book we restrict the term to mean ways to design interfaces or interaction devices for comfort and high performance according to the physical mechanics of the human body. For HCI, we focus on the human motor capabilities that are used to make input interaction. From the main equation in Figure 3., to reiterate, ID represents an abstract notion of the difficulty of the task, while MT is an actual prediction value for a particular task.
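The ID/MT relationship can be written in MacKenzie's Shannon formulation of Fitts' law, MT = a + b · log2(A/W + 1), with ID = log2(A/W + 1). A minimal sketch follows; the regression constants a and b below are made-up placeholders, since in practice they are fitted to data for a specific device and user population.

```python
# Fitts' law in the Shannon formulation (MacKenzie):
#   ID = log2(A/W + 1)   index of difficulty, in bits
#   MT = a + b * ID      predicted movement time
import math

def index_of_difficulty(A: float, W: float) -> float:
    """A: distance (amplitude) to the target; W: target width (same units)."""
    return math.log2(A / W + 1)

def movement_time(A: float, W: float, a: float = 0.1, b: float = 0.15) -> float:
    """Predicted time in seconds; a and b are assumed, not fitted, values."""
    return a + b * index_of_difficulty(A, W)

# Doubling the distance (or halving the width) raises difficulty only
# logarithmically:
print(index_of_difficulty(160, 10))   # log2(17), about 4.09 bits
print(movement_time(160, 10))         # about 0.71 s with the assumed constants
```

This separation mirrors the text: `index_of_difficulty` is the abstract task property (ID), while `movement_time` turns it into a concrete prediction (MT) once device-specific constants are known.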

For instance, as shown in Figure 3. (from MacKenzie, I.; Berard et al.), in addition to discrete-event input methods, there are devices for continuous motor control. Obviously, humans will exhibit different motor-control performances with different devices, as already demonstrated with the two types of device mentioned previously. The mouse and the 3-D stylus, for instance, belong to what are called isotonic devices, where the movement of the device translates directly to movement in the display or virtual space. Isometric devices are those that control the movement in the display with something else, such as force, and thus possibly with no movement input at all.

Control accuracy for touch interfaces presents a different problem. Despite our fine motor-control capability of submillimeter performance, and with recent touch screens offering resolutions of hundreds of dpi, it is the size of the fingertip contact area (unless a stylus pen is used) that limits touch accuracy.
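As a rough illustration of this "fat finger" constraint, the sketch below converts a physical touch-target size into pixels for a given screen density. The 7 mm target size and 300 dpi figures are assumed values for illustration, not taken from the text.

```python
# Hedged sketch: how many pixels a physical touch target spans at a
# given screen density. Target size and dpi below are assumptions.

MM_PER_INCH = 25.4

def target_pixels(target_mm: float, dpi: float) -> int:
    """Smallest on-screen extent (in pixels) for a physical target size."""
    return round(target_mm / MM_PER_INCH * dpi)

# On a 300 dpi screen, even a modest 7 mm target spans dozens of pixels,
# far coarser than the single-pixel precision the display itself offers.
print(target_pixels(7, 300))  # 83
```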

Even larger objects, once selected, are not easy to control if the touch screen is held by the other hand or arm. We can also readily see that many of the HCI principles discussed previously in this book naturally derive from these underlying theories.

References

- User centered system design: New perspectives on human-computer interaction.
- Psychological Review 63(2).
- Marois, Rene, and Jason Ivanoff. Capacity limits of information processing in the brain. Trends in Cognitive Sciences 9(6).
- Anderson, J., Bothell, M. Byrne, S. Douglass, C. Lebiere, and Y. An integrated theory of the mind. Psychological Review (4).
- Polk, T. Cognitive modeling.
- Salvucci, D. Threaded cognition: An integrated theory of concurrent multitasking. Psychological Review (1).
- Card, Stuart K., Moran, and Allen Newell. The model human processor: An engineering model of human performance. In Handbook of Human Perception, ed. Thomas. New York: John Wiley and Sons.
- Schulz, Trenton. Using the keystroke-level model to evaluate mobile phones.
- Microsoft Research. CHI. An immersive event: Illusions create an immersive experience.
- Ni, Tao, Greg S. Schmidt, Oliver G. Staadt, Mark A. Livingston, Robert Ball, and Richard May. A survey of large high-resolution display technologies, techniques, and applications.
- Hemer, Mark A. Projected changes in wave climate from a multi-model ensemble. Nature Climate Change.
- Ware, C. Information Visualization: Perception for Design. Waltham, MA: Morgan Kaufmann.
- Olson, Harry Ferdinand. Music, Physics and Engineering. Mineola, NY: Dover Publications.
- Bregman, Albert S. Auditory Scene Analysis: The Perceptual Organization of Sound.
- Ferrucci, David. Building Watson. IBM Research.
- Patel, Prachi. Synthetic skin sensitive to the lightest touch.
- Jones, Lynette A. Kinesthetic sensing.
- KU Leuven. Tactile feedback.
- Reeves, L., Lai, J., Larson, S., Oviatt, T., Balaji, S., Buisine, P., Collings, et al. Guidelines for multimodal user interface design. Communications of the ACM 47(1): 57.
- Fitts, Paul M. The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology 47(6).
- MacKenzie, I. Movement time prediction in human-computer interfaces. San Francisco: Morgan Kaufmann.
- Human-Computer Interaction 7(1): 91.

In this book, HCI design is an integral part of a larger software design and its architectural development, and it is defined as the process of establishing the basic framework for user interaction (UI), which includes the following iterative steps and activities.

HCI design includes all of the preparatory activities required to develop an interactive software product that will provide a high level of usability and a good user experience when it is actually implemented. We illustrate these four iterative steps using a concrete example, after a short explanation of the respective steps (Figure 4.).

For interactive software with a focus on the user experience, we take a particular look at functions that are to be activated directly by the user through interaction (functional-task requirements) and functions that are important in realizing certain aspects of the user experience (functional-UI requirements), even though the latter may not be directly activated by the user. One such example is an automatic functional feature that adjusts the display resolution of a streamed video based on the network traffic.

It is not always possible to computationally separate major functions from those for the user interface.

That is, certain functions actually have direct UI objectives. Finally, we identify nonfunctional UI requirements, which are UI features, rather than computational functions, that are not directly related to accomplishing the main application task.

For instance, requiring a certain font size or type according to a corporate guideline may not be a critical functional requirement, but a purely HCI requirement. The results of the user analysis will be reflected back into the requirements, and this could identify additional UI requirements (functional or nonfunctional).
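The three requirement categories just described (functional-task, functional-UI, and nonfunctional UI) can be sketched as a simple classification. The entries below are invented for illustration, loosely echoing the examples in the text.

```python
# Hypothetical sketch: tagging requirements with the three categories
# from the text. Entry ids and wording are invented for illustration.

requirements = [
    {"id": "R1", "kind": "functional-task",
     "text": "User selects a song to play"},
    {"id": "R2", "kind": "functional-UI",
     "text": "Auto-adjust streamed-video resolution to network traffic"},
    {"id": "R3", "kind": "nonfunctional-UI",
     "text": "Use the corporate font size and type"},
]

def of_kind(reqs, kind):
    """Return the ids of all requirements in a given category."""
    return [r["id"] for r in reqs if r["kind"] == kind]

print(of_kind(requirements, "functional-UI"))  # ['R2']
```

Keeping the category explicit like this makes it easy to revisit the nonfunctional-UI entries when user analysis later adds or reprioritizes requirements.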

It is simply a process to reinforce the original requirements analysis to accommodate the potential users in a more complete way. For instance, a particular age group might necessitate certain interaction features, such as a large font size and high contrast, or there might be a need for a functional UI feature to adjust the scrolling speed. This is the crux of interaction modeling: identifying the application task structure and the sequential relationships between the different elements.

With a crude task model, we can also start to draw a more detailed scenario or storyboard to envision how the system would be used and to assess both the appropriateness of the task model and the feasibility of the given requirements. Again, one can regard this simply as an iterative process to refine the original rough requirements. Through the process of storyboarding, a rough visual profile of the interface can be sketched.

It will also serve as a starting point for drawing the object-class diagram, message diagrams, and the use cases for preliminary implementation and programming. The chosen individual interface components need to be consolidated into a practical package, because not all of these interface components may be available on a working platform.

Certain choices will have to be retracted in the interest of employing a particular interaction platform. For instance, for a particular subtask and application context, the designer might have chosen voice recognition as the most fitting interaction technique.

However, if the required platform does not support a voice sensor or network access to a remote recognition server, an alternative will have to be devised. Such concessions can be made for many reasons besides platform requirements, such as constraints in budget, time, or personnel.

Before we go through a concrete example of HCI design, we first review representative interfaces (hardware and software) to choose from in the following section. We take a look at the hardware options in terms of the larger computing platforms, which are composed of the usual devices. Suited for: special tasks and situations where interaction and computations are needed on the spot. There are many such custom-designed interfaces, such as those shown in Figure 4.

For a single application, a number of subtasks may be needed concurrently and thus must be interfaced through multiple windows. For relatively large displays, overlapping windows may be used.

However, this becomes difficult as the display size decreases. Even on the desktop, the Metro style presents individual applications on the full screen without marked borders, but instead offers new convenient means for sharing data with other applications and switching between the applications or tasks (Figure 4.). Other important detailed considerations for a window supporting interaction for a subtask might be its size, interior layout, and management method (Figure 4.).

Clickable icons are simple and intuitive (Figure 4.). As a compact representation designed to facilitate interaction, icons must be designed to be as informative and distinctive as possible despite their small size and compactness. The recent Windows Metro-style interface has introduced a new type of icon, called a tile, that can dynamically change its look with useful information associated with what the icon represents [5].

For instance, the e-mail application icon dynamically shows the number of new unread e-mails (Figure 4.). Typical menus are shown in Figure 4. Selection of a menu item involves three subtasks: (a) activating the menu and laying out the items (if not already activated by default), (b) visually scanning and moving through the items (and scrolling if the display space is not sufficient to contain and show the whole menu of items at once), and (c) choosing the wanted item.
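These three subtasks can be sketched in code. The menu content and the alphabetical layout rule below are invented for illustration, not taken from the text.

```python
# Sketch of a small hierarchical menu with selection broken into the
# three subtasks from the text: activate, scan, choose.
# The menu entries are invented for illustration.

MENU = {
    "File": ["Close", "New", "Open", "Save"],
    "Edit": ["Copy", "Cut", "Paste"],
}

def activate(menu, top_item):
    """Subtask (a): activate a top-level menu and lay out its items
    in a systematic (here, alphabetical) order."""
    return sorted(menu[top_item])

def choose(items, wanted):
    """Subtasks (b) and (c): scan through the laid-out items in order
    and choose the wanted one, returning its position."""
    for index, item in enumerate(items):
        if item == wanted:
            return index
    raise KeyError(wanted)

items = activate(MENU, "File")
print(choose(items, "Open"))  # scanned past 'Close' and 'New' -> 2
```

The linear scan in `choose` mirrors the visual scanning cost: items placed deeper in a long, unstructured menu take longer to reach, which is why the text recommends hierarchical organization and systematic layout.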

All of these subtasks are realized by making discrete inputs. Menus come in many forms; some of the most popular ones are shown in Figure 4. and Table 4. (from Petzold, C.). In either case, it is clear that the menu must be organized, categorized, and structured (typically hierarchically) according to the task and the associated objects. If long menus are inescapable, the items should at least be laid out in a systematic manner. Before the mouse era, HCI was mostly in the form of keyboard inputting of text commands.

However, WIMP interfaces have greatly contributed to the mass proliferation of computer technologies. In Chapter 5, we will take a more systematic look at the GUI components as part of implementation knowledge. For now, in considering interface options, it suffices to understand the following representative GUI components, aside from those for discrete selection (WIMP), for soliciting input from a user in a convenient way (Figure 4.).

However, 2-D control in a 3-D application is often not sufficient. The mismatch in the degrees of freedom brings about fatigue and inconvenience (Figure 4.). Aside from tasks such as 3-D games and navigation, it is also possible to organize the 2-D-operated GUI elements in a 3-D virtual space.

It is not clear whether such an interface brings about any particular advantages because, despite the added dimension, the occlusion due to overlap will remain, as the interface is viewed from only one direction into the screen. The WIMP interface has been a huge success since its introduction in the early 1980s, when it revolutionized computer operations. Thanks to continuing advances in interface technologies, non-WIMP interfaces are now becoming practical as well.

In addition, the cloud-computing environment has enabled running computationally expensive interface algorithms, which non-WIMP interfaces often require, over less powerful client devices.

Chapters 7-9 in this book take a look at some basic implementation issues for these new non-WIMP interfaces. Wire-framing originated from making rough specifications for website page design and resembles scenarios or storyboards.

Designing the content of a screen (left) and the overall interaction behavior (right). A wireframe depicts the page layout or arrangement of the UI objects and how they respond to each other. Wireframes can be pencil drawings or sketches on a whiteboard, or they can be produced by means of a broad array of free or commercial software applications. Wireframes produced by these tools can be simulated to show interface behavior, and depending on the tool, the interface logic can sometimes be exported for actual code implementation (but usually not).

Note that there are tools that allow the user to visually specify UI elements and their configuration and then automatically generate code.

Regardless of which type of tool is used, it is important that the design and implementation stages be separated. Through wire-framing, the developer can specify and flesh out the kinds of information displayed, the range of functions available, and their priorities, alternatives, and interaction flow. An initial requirements list may look something like the one in Table 4.

No more flying pages; no more awkward flipping and page searching. Eliminate the need to carry and manage physical sheet music. Store music transcription files using a simple file format.

Help the user effectively accompany the music through timed and effective presentation of musical information. Help the user effectively practice the accompaniment and sing along through flexible control. Help the user sing along by showing the lyrics and beats in a timed fashion. Here, we focus more on the HCI-related requirements for the sake of brevity. There does not seem to be a particular consideration for a particular age group or gender.

Note that, for now, most of the requirements or choices are rather arbitrary, without clear justifications. Each task is to be activated directly by the user through an interface. The top-level application has six subtasks (select song, select tempo, etc.). Through such a perspective, one can identify the precedence relationships among the subtasks.
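One way to make such precedence relationships concrete is a small dependency table. The subtask names and orderings below are assumptions based on the description, not the actual No Sheets task model.

```python
# Hypothetical sketch of a task-model precedence table: each subtask
# lists the subtasks that must be completed before it can be activated.
# Names and dependencies are assumed for illustration.

PRECEDES = {
    "select_song": [],                 # always available
    "select_tempo": ["select_song"],   # assumed ordering
    "play": ["select_song"],           # play requires a chosen song
}

def can_activate(task: str, completed: set) -> bool:
    """A subtask is available once all of its prerequisites are done."""
    return all(dep in completed for dep in PRECEDES[task])

print(can_activate("play", set()))            # False: no song chosen yet
print(can_activate("play", {"select_song"}))  # True
```

A table like this can double as an executable check while prototyping: the UI simply disables any subtask for which `can_activate` returns False.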

The user is also able to play and view the timed display of the musical information, but only after a song has been chosen (indicated by the dashed arrow). Such a model can serve as a rough starting point for defining the overall software architecture for No Sheets. A storyboard is then drawn based on the task model to further envision its usage and possible interface choices. A storyboard consists of graphic illustrations organized in sequence and is often used to previsualize motion pictures, animations, and interactive experiences.

There is no fixed format, but each illustration usually includes a depiction of the important steps in the interaction, annotated with a description of important aspects (Figures 4.). It is very important that we try to adhere to the HCI principles, guidelines, and theories to justify and prioritize our decisions.

Note that we have started with the requirement that the application is to be deployed on a smartphone (the interface platform). Left: icons and GUI elements in the menu on the left can be dragged onto the right to design the interface layer. Right: navigation among the design layers can be defined as well (indicated by the arrows).

This will become more apparent as we evaluate the initial prototype and revise our requirements and design for No Sheets 2. The discussion started with a requirements analysis and its continued refinement through user research and application-task modeling.

Then, we drew up a storyboard and carefully considered different options for particular interfaces by applying any relevant HCI principles, guidelines, and theories. The overall process was illustrated with a specific example: the design process for a simple application.

It roughly followed the aforementioned process, but it did so purposefully in a hurried and simplistic fashion, leaving much potential for later improvement. Nevertheless, this exercise emphasizes that the design process is going to be unavoidably iterative, because it is not usually possible to have provisions for all usage possibilities.

This is why an evaluation is another necessary step in a sound HCI design cycle, even if a significant effort is thought to have gone into the initial design and prototyping. In the next chapters, we first look at issues involved with taking the design into actual implementation.

The implemented prototype or final version must then be evaluated in real situations for future continued iterative improvement, extension, and refinement.

 

