Real Time Obstacle Avoidance


Introduction

Real-time obstacle avoidance is one of the key issues for the successful application of mobile robot systems. All mobile robots feature some kind of collision avoidance, ranging from primitive algorithms that detect an obstacle and stop the robot short of it in order to avoid a collision, to sophisticated algorithms that enable the robot to detour around obstacles. The latter algorithms are much more complex, since they involve not only the detection of an obstacle but also some kind of quantitative measurement of the obstacle's dimensions. Once these have been determined, the obstacle avoidance algorithm needs to steer the robot around the obstacle and resume motion toward the original target. Autonomous navigation represents a higher level of performance, since it applies obstacle avoidance simultaneously with steering the robot toward a given target.

A more general and commonly employed method for obstacle avoidance is based on edge detection. In this method, the algorithm tries to determine the position of the vertical edges of the obstacle and consequently attempts to steer the robot around either edge. The line connecting the two edges is considered to represent one of the obstacle's boundaries. This method was used in our own previous research, as well as in several other research projects. A disadvantage of obstacle avoidance based on edge detection is the need for the robot to stop in front of an obstacle in order to allow for a more accurate measurement.

Speed Control

The intuitive way to control the speed of a mobile robot in the VFF environment is to set it proportional to the magnitude of the sum of all forces.


Thus, if the path were clear, the robot would be subjected only to the target force and would move toward the target at its maximum speed. Repulsive forces from obstacles, naturally opposed to the direction of Ft (disregarding the damping effect discussed above), would reduce the magnitude of the resultant R, thereby effectively reducing the robot's speed in the presence of obstacles.
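This speed law can be sketched in a few lines. The force vectors, the gain K_SPEED, and the limit V_MAX below are illustrative assumptions, not values from the original system:

```python
import math

# Hypothetical gains and limits, chosen only for illustration.
V_MAX = 0.8          # maximum robot speed, m/s
K_SPEED = 0.5        # gain relating |R| to the commanded speed

def resultant_force(target_force, repulsive_forces):
    """Sum the attractive target force Ft and all repulsive obstacle forces."""
    rx = target_force[0] + sum(f[0] for f in repulsive_forces)
    ry = target_force[1] + sum(f[1] for f in repulsive_forces)
    return rx, ry

def commanded_speed(target_force, repulsive_forces):
    """Speed proportional to |R|, clamped to the maximum speed."""
    rx, ry = resultant_force(target_force, repulsive_forces)
    return min(V_MAX, K_SPEED * math.hypot(rx, ry))

# Clear path: only the target force acts, so the robot runs at full speed.
print(commanded_speed((2.0, 0.0), []))             # clamped to V_MAX = 0.8
# An opposing repulsive force shrinks |R| and hence the speed.
print(commanded_speed((2.0, 0.0), [(-1.5, 0.0)]))  # 0.25
```

The clamp is what makes the robot travel at maximum speed in open space while slowing smoothly near obstacles.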

Recovery From "Local Minimum Traps"

One problem inherent to the basic VFF method is the possibility for the robot to get "trapped." This situation may occur when the robot runs into a dead end (e.g., inside a U-shaped obstacle). Traps can be created by a variety of different obstacle configurations, and different types of traps can be distinguished. This section presents a comprehensive set of heuristic rules to recover from different trap conditions. Chattergy [10] presented some heuristic local path planning solutions for various obstacle configurations (and trap conditions), based on distance measurements to the obstacle.

Trap-state Detection

In an ideal, non-inertial system, trap states could be detected simply by monitoring the speed of the robot. If caught in a trap, the robot's speed will become zero as the robot converges to the equilibrium position with R = 0. In a dynamic system, however, the robot overshoots the equilibrium position and will either oscillate or run in a closed loop, as shown in Fig. 3a for an actual run. Therefore, it is impractical to monitor the magnitude of the resultant force |R| for trap-state detection.
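One practical alternative, sketched below under the assumption of simple 2-D force vectors, is to flag a trap whenever the resultant force points more than 90 degrees away from the direction to the target. The threshold and the vector representation are illustrative choices, not taken from the paper:

```python
import math

TRAP_ANGLE = math.pi / 2   # threshold angle; an assumption for this sketch

def angle_between(v1, v2):
    """Unsigned angle between two 2-D vectors, in radians."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def in_trap(resultant, to_target):
    """Flag a likely trap when the resultant force points away from the target.

    In a clear field R is roughly target-directed; inside a U-shaped
    obstacle the repulsive forces rotate R by more than 90 degrees.
    """
    return angle_between(resultant, to_target) > TRAP_ANGLE

print(in_trap((1.0, 0.2), (1.0, 0.0)))   # roughly target-directed -> False
print(in_trap((-0.5, 0.8), (1.0, 0.0)))  # R points away from target -> True
```

Unlike monitoring |R|, this criterion fires even while the robot is oscillating, because the direction of R, not its magnitude, betrays the trap.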

Abstract

A new real-time obstacle avoidance approach for mobile robots has been developed and implemented. This approach permits the detection of unknown obstacles simultaneously with the steering of the mobile robot to avoid collisions and advance toward the target. The novelty of this approach, entitled the Virtual Force Field, lies in the integration of two known concepts: Certainty Grids for obstacle representation, and Potential Fields for navigation. This combination is especially suitable for the accommodation of inaccurate sensor data (such as that produced by ultrasonic sensors) as well as for sensor fusion, and enables continuous motion of the robot without stopping in front of obstacles.

Conclusions

A comprehensive obstacle avoidance approach for fast-running mobile robots, denoted as the VFF method, has been developed and tested on our experimental mobile robot CARMEL. The VFF method is based on the following principles:

1.     A Certainty Grid for representation of (inaccurate) sensory data about obstacles provides a robust real-time world model.




Quantum Dot Lasers


Quantum Dots

        Quantum dots (QDs) are small conductive regions in a semiconductor containing a variable number of charge carriers (from one to thousands) that occupy well-defined, discrete quantum states. They have typical dimensions ranging from nanometers to a few microns. When the space around a material shrinks to about 100 Å on any side, quantization of the energy levels along the reduced dimension occurs. In quantum dots, electrons are confined in all directions to a volume in space with dimensions on the order of their de Broglie wavelength. As a result, their kinetic energy is quantized and they occupy spectrally sharp energy levels like those found in atoms.
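The size dependence of this quantization can be illustrated with the textbook particle-in-a-box model (an idealization of a real quantum dot): energy levels scale as 1/L², so shrinking the box from microns toward the 100 Å regime opens level spacings comparable to thermal and optical energies.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E  = 9.1093837015e-31  # free-electron mass, kg
EV   = 1.602176634e-19   # J per eV

def box_level(L, nx=1, ny=1, nz=1, m=M_E):
    """Energy of state (nx, ny, nz) for a particle in a cubic box of side L (m)."""
    return (HBAR**2 * math.pi**2 / (2 * m * L**2)) * (nx**2 + ny**2 + nz**2)

# Spacing between the ground state and first excited state as the box shrinks.
for L_nm in (1000, 100, 10):
    L = L_nm * 1e-9
    gap = (box_level(L, 2, 1, 1) - box_level(L)) / EV
    print(f"L = {L_nm:5d} nm: ground -> first-excited gap = {gap:.3g} eV")
```

At micron sizes the spacing is negligible next to thermal energy (~0.025 eV at room temperature); only at the tens-of-nanometers scale and below do the atom-like discrete levels become physically significant.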

 Introduction

        The past decade has seen a tremendous amount of research into the fabrication of semiconductor structures, stimulated by the drive toward increasing miniaturization and performance of solid-state devices. One major step in these developments has been the emergence of low-dimensional devices.


Bandwidth  Limits

        Before discussing in detail how the dynamics of QDs affect the performance of QD devices, in particular directly modulated lasers, it is important to mention briefly what generally limits the bandwidth of semiconductor lasers, and the typical methodology for analyzing semiconductor laser performance. Typically, high-speed lasers are analyzed using a three-rate-equation model, in which the numbers of photons, carriers in the active region, and carriers in the core are modeled by three coupled equations.
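A minimal numerical sketch of such a three-rate-equation model is shown below. The parameter values and the linear gain term are illustrative assumptions chosen only to expose the structure of the coupled equations (core carriers feed the active region, which feeds the photon field), not a fit to any real device:

```python
# Illustrative parameters only; not fitted to any real laser.
I_INJ   = 2.0e17   # carrier injection rate into the core (1/s)
TAU_CAP = 1e-12    # capture time from core into the active region (s)
TAU_N   = 1e-9     # carrier lifetime in the active region (s)
TAU_P   = 2e-12    # photon lifetime in the cavity (s)
G0      = 1e4      # assumed linear gain coefficient (1/s per carrier)
BETA    = 1e-4     # spontaneous-emission coupling factor

def step(n_core, n_act, s, dt=1e-14):
    """One explicit-Euler step of the three coupled rate equations."""
    gain = G0 * n_act * s                          # stimulated emission
    d_core = I_INJ - n_core / TAU_CAP              # injection vs. capture
    d_act  = n_core / TAU_CAP - n_act / TAU_N - gain
    d_s    = gain + BETA * n_act / TAU_N - s / TAU_P
    return n_core + dt * d_core, n_act + dt * d_act, s + dt * d_s

# Integrate 2 ns of turn-on transient; the system relaxes to steady state.
n_core, n_act, s = 0.0, 0.0, 0.0
for _ in range(200000):
    n_core, n_act, s = step(n_core, n_act, s)
print(n_core, n_act, s)
```

A high-speed analysis would then linearize these equations around the steady state; the resulting relaxation-oscillation frequency and damping are what set the modulation bandwidth the text refers to.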

Fabrication Of Dots

        The unique advantages of QD structures can be realized only if the dots are as uniform as possible in shape and size. Conventional semiconductor-processing techniques based on lithography and etching face inherent problems such as limited resolution and the introduction of surface defects during production. As a result, several research groups have started working on the direct synthesis of quantum nanostructures, for example by combining epitaxial growth techniques (MBE or MOCVD) with photolithography.

Quantum Dot VCSELs

        Much of the present focus on quantum dots is driven by the promise of inexpensive lasers and detectors for the telecommunications wavelengths, utilizing the zero-dispersion window of an optical fiber. There has been an additional incentive to develop lasers grown on GaAs substrates, for easy integration of optical devices with the relatively mature GaAs electronic device technology, moving toward the development of high-speed optical communication systems.

Abstract

Quantum dots (QDs) are small conductive regions in a semiconductor containing a number of charge carriers (from one to thousands) that occupy well-defined, discrete quantum states. They have typical dimensions ranging from nanometers to a few microns. When the space around a material shrinks to about 100 Å on any side, quantization of the energy levels along the reduced dimension occurs. In quantum dots, electrons are confined in all directions to a volume in space with dimensions on the order of their de Broglie wavelength, i.e., their kinetic energy is quantized and, as a result, they occupy spectrally sharp energy levels like those found in atoms.

 Conclusion


                Though quantum dot lasers show immense potential for superior device performance, there are still significant problems associated with control of the emission wavelength, reproducibility of the dots, high-temperature reliability, and long-term stability of the dots. The current challenge is to match and surpass the performance of quantum well lasers. There is still a need for the development of a quantum dot structure lasing around 1.55 micrometers, which is a principal wavelength in fiber-optic communications. This would give QD lasers a chance to move into applications such as ultrafast optical data transfer. A key aspect of the quantum-dot production challenge will be to improve our control over the dot distribution produced in the self-assembly process.


Pervasive Computing


Abstract

Pervasive computing refers to embedding computers and communication in our environment. Pervasive computing provides an attractive vision for the future of computing. The idea behind pervasive computing is to make the computing power disappear into the environment while remaining available whenever needed; in other words, it means availability and invisibility. These invisible computers won't have keyboards or screens, but will watch us, listen to us and interact with us.

Introduction

Pervasive computing environments involve the interaction, coordination, and cooperation of numerous, casually accessible, and often invisible computing devices. These devices will connect via wired and wireless links to one another as well as to the global networking infrastructure to provide more relevant information and integrated services. Existing approaches to building distributed applications, including client/server computing, are ill suited to meet this challenge.


Mobile computing and communication are major parts of the pervasive computing system. Here, data and computing resources are shared among the various devices. The coordination between these devices is maintained through communication, which may be wired or wireless. With the advent of Bluetooth and ad hoc networking technologies, wireless communication has overtaken its wired counterpart.

Implementation

There are many middleware technologies that provide a set of application programming interfaces (APIs) as well as network protocols that can meet the network requirements. It establishes a software platform enabling all devices that form the network to talk to each other, irrespective of their operating systems or interface constraints. In these environments, each device provides a service to other devices in the network. Each device publishes its own interfaces, which other devices can use to communicate with it and thereby access its particular service. This approach ensures compatibility and standardized access among all devices.
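The publish/discover pattern described here can be sketched in a few lines. The `ServiceRegistry` class and its `publish`/`discover` methods are hypothetical names invented for illustration, not the API of any particular middleware standard:

```python
# Minimal sketch of the publish/discover pattern: each device publishes an
# interface; other devices discover and invoke it without knowing anything
# about the provider's operating system or hardware.

class ServiceRegistry:
    """In-process stand-in for the network-wide service directory."""

    def __init__(self):
        self._services = {}

    def publish(self, device_id, interface, handler):
        """A device publishes an interface that other devices can call."""
        self._services.setdefault(interface, {})[device_id] = handler

    def discover(self, interface):
        """Return all devices currently offering the given interface."""
        return dict(self._services.get(interface, {}))

registry = ServiceRegistry()
registry.publish("lamp-1", "light.switch", lambda on: f"lamp-1 {'on' if on else 'off'}")
registry.publish("lamp-2", "light.switch", lambda on: f"lamp-2 {'on' if on else 'off'}")

# A controller discovers every published light switch and drives them
# uniformly through the shared interface.
for device, handler in registry.discover("light.switch").items():
    print(handler(True))
```

In a real middleware the registry would live on the network and the handlers would be remote invocations, but the compatibility guarantee comes from exactly this indirection: devices agree on interfaces, not on implementations.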

Adaptation

Adaptation is required in order to overcome the intrinsically dynamic nature of pervasive computing. Mobility of users, devices and software components can occur, leading to changes in the physical and virtual environments of these entities. Moreover, applications can be highly dynamic, with users requiring support for novel tasks and demanding the ability to change requirements on the fly.

Security Policy

A security policy is a set of rules for authorization, access control, and trust in a certain domain; it can also contain information about users' roles and the abilities associated with those roles. Theft of service is the number-one security problem in cellular networks, and a similar problem exists with computer network services, so solutions devised for cellular telephony can be applied. Control of access to services relies on a form of identification: either a user or a device may be identified.
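A minimal sketch of such a role-based policy is shown below; the roles, services, and rule format are invented for illustration:

```python
# Hypothetical role-based policy: each role maps to the set of
# (service, action) pairs it is allowed to perform.
POLICY = {
    "guest":    {("printer", "status")},
    "resident": {("printer", "status"), ("printer", "print"), ("door", "unlock")},
}

# Identification step: known users and their assigned roles.
USER_ROLES = {"alice": "resident", "bob": "guest"}

def is_authorized(user, service, action):
    """Grant access only if the identified user's role permits the request."""
    role = USER_ROLES.get(user)
    if role is None:          # unidentified principals get nothing,
        return False          # guarding against theft of service
    return (service, action) in POLICY.get(role, set())

print(is_authorized("alice", "door", "unlock"))       # True
print(is_authorized("bob", "door", "unlock"))         # False
print(is_authorized("mallory", "printer", "status"))  # False: unknown identity
```

The default-deny rule for unknown identities is the direct analogue of the cellular-network defense against theft of service: no identification, no service.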

HAVi - An Implementation in a Consumer Appliance Environment

HAVi is a standard for home appliances consisting of a set of APIs, services, and a standard for communication. HAVi's primary goal is to provide a dynamic service environment in which software components can discover and interact with each other. It provides mechanisms for devices to discover, query, and control other appliances on the home network, and supplies system services such as messaging and event notification.

Conclusion

The trends in pervasive computing are increasing the diversity and heterogeneity of networks and their constituent devices. Pervasive computing is expected to bring an explosion in the number of devices in our local environments. This paper presents a vision of a future computing landscape characterized by the ubiquity of computing devices and the autonomy, dynamicity, and heterogeneity of system components.


Optical Coherence Tomography


Abstract

This paper explains state-of-the-art optical coherence tomography as an efficient diagnostic imaging tool for biomedical applications. It reviews the basic theory and modes of operation together with applications and limitations. It also examines the various kinds of instruments employed in the whole apparatus, in addition to a discussion of hardware and software methods to combat the sources of error.

Types of OCT

There are also other types of OCT which likewise utilize light to produce images of tissue; however, they tend to include or vary the components of the system to extract more information from scans based on something specific to the sample. One of these ideas is Doppler OCT, which looks for frequency shifts in the interference patterns that reveal moving objects in the sample, such as blood cells. This is particularly interesting to ophthalmologists, since variations in blood flow accompany causes of blindness including diabetic retinopathy and macular degeneration. In addition to Doppler OCT, researchers are also looking into polarization-sensitive OCT, which measures the polarization of the returning light and interference fringes, since this might be a way to image damage to tissue such as nerve fibers, skin, and other connective tissues.


Light Source

The light source itself should satisfy three basic requirements: emission in the near-infrared spectral region, short coherence length, and high irradiance. Because of the short mean scattering length at higher frequencies (blue and above), longer wavelengths are highly desirable. On the other hand, water absorbs strongly at still longer wavelengths, which limits how far into the infrared the source can usefully go.
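The short-coherence-length requirement is what sets the axial resolution. For a source with a Gaussian spectrum, the standard expression is l_c = (2 ln 2/π)·λ₀²/Δλ, which the snippet below evaluates for typical superluminescent-diode values (the 820 nm / 20 nm figures are illustrative):

```python
import math

def coherence_length(center_wavelength_nm, bandwidth_nm):
    """Coherence length (axial resolution) for a Gaussian-spectrum source:
    l_c = (2 ln 2 / pi) * lambda0^2 / delta_lambda, returned in nm."""
    return (2 * math.log(2) / math.pi) * center_wavelength_nm**2 / bandwidth_nm

# A superluminescent diode at 820 nm with 20 nm bandwidth:
print(f"{coherence_length(820, 20) / 1000:.1f} um axial resolution")
# A broader 50 nm source at the same center wavelength resolves finer depth:
print(f"{coherence_length(820, 50) / 1000:.1f} um axial resolution")
```

The inverse dependence on bandwidth is why "short coherence length" and "broad spectrum" are the same requirement: doubling Δλ halves the smallest depth interval the interferometer can separate.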

Speed

As important as the resolution is the speed of an OCT system. The frame rate is basically determined by the speed with which the path length can be swept in order to obtain a complete cross-correlation function. A new technique has also been introduced recently employing a grating-based phase-control delay line. The Fourier transform is generated on the grating upon incidence of the reference beam. The scattered wave is then directed to a linear, wavelength-dependent phase ramp, and since a linear phase ramp in the frequency domain corresponds to a group delay in the time domain, when the signal is incident on the grating for the second time, the inverse Fourier transform is generated.
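The frequency-domain identity invoked above (a linear phase ramp equals a group delay) can be checked numerically; the pulse shape and delay below are arbitrary illustrative values:

```python
import numpy as np

# A test pulse centred at sample 60.
n = 256
t = np.arange(n)
pulse = np.exp(-0.5 * ((t - 60) / 4.0) ** 2)

# Apply a linear phase ramp exp(-2*pi*i*f*delay) in the frequency domain.
delay = 30                                  # desired group delay, in samples
freqs = np.fft.fftfreq(n)                   # frequencies in cycles/sample
spectrum = np.fft.fft(pulse)
ramp = np.exp(-2j * np.pi * freqs * delay)
delayed = np.fft.ifft(spectrum * ramp).real

print(np.argmax(pulse))    # peak at sample 60
print(np.argmax(delayed))  # peak at sample 90: delayed by exactly 30 samples
```

This is the principle the grating-based delay line exploits: steering the slope of the phase ramp scans the reference-arm group delay without any mechanically swept mirror, which is what raises the achievable frame rate.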

Introduction

OCT has been mainly used for biomedical applications, where many factors affect the feasibility and effectiveness of any imaging technique. Highly scattering and absorbing living tissues greatly limit the application of optical imaging modalities. Other imaging methods, such as ultrasound, have been used for a long time, yet each of them has certain problems and limitations. For instance, although the ultrasonic technique can provide information from depths far beyond the capability of OCT, in many applications its resolution is not sufficient to yield any useful information.

Conclusions

Optical coherence tomography (OCT) has emerged as a novel diagnostic tool for biomedical applications, especially in situations where conventional imaging methods are either hazardous or yield little valuable information.




Nanotechnology


What Is Nanotechnology?

Computers reproduce information at almost no cost. By treating atoms discretely, the way computers treat bits of information, nanotechnology would allow automatic construction of consumer goods without traditional labor, just as a Xerox machine produces unlimited copies without a human retyping the original information. Electronics is fueled by miniaturization. Working smaller has led to tools capable of manipulating the atoms of soil, air and water to make copies of goods.

The shotgun marriage of chemistry and engineering called "nanotechnology" is ushering in the era of self-replicating machinery and self-assembling consumer goods made from cheap raw atoms (Drexler, Merkle paraphrased).

Self-Replication And Nanotechnology

A central objective of nanotechnology is the ability to make products inexpensively. While the ability to make a few very small, very precise molecular machines very expensively would clearly be a major scientific achievement, it would not fundamentally change how we make most products.



Fortunately, we are surrounded and inspired by products that are marvelously complex and yet very inexpensive. Watching birds soar effortlessly through the air, we can take inspiration from nature as we develop molecular manufacturing systems. Airplanes are very different from birds: a 747 bears only the smallest resemblance to a duck, even though both fly. The artificial self-replicating systems that have been envisioned for molecular manufacturing bear about the same degree of similarity to their biological counterparts as a car might bear to a horse.

 Abstract

      Miniaturization. It’s a word we’ve become accustomed to over the last few decades. We’ve heard that computers that took up whole rooms half a century ago can now easily fit onto a microchip that sits on the tip of your finger. But what if we could go even much, much smaller than that? There are researchers who are trying to do just this. Their field is called nanotechnology, a name derived from the word nanometer. Nanotechnology is a broad term that describes many approaches to measurement tools, production methods, and devices that operate on the scale of one billionth of a meter. A nanometer is one billionth of a meter; that’s a thousand million times smaller than a meter.

Conclusion

Our modern technology builds on an ancient tradition. Thirty thousand years ago, chipping flint was the high technology of the day. Our ancestors grasped stones containing trillions of atoms and removed chips containing billions of trillions of atoms to make their ax heads, producing fine work with skills difficult to imitate today. Nanotechnology builds chips at the molecular level, instead of burning the features onto silicon chips. We call the products “chips” and we consider them exquisitely small, at least in comparison to ax heads.