
Tuesday, January 11, 2011

Aviation

Aviation history refers to the history of the development of mechanical flight, from the earliest attempts with kites and gliders to powered heavier-than-air, supersonic, and space flight.

Speech recognition (also known as automatic speech recognition or computer speech recognition) converts spoken words to text. The term "voice recognition" is sometimes used to refer to recognition systems that must be trained to a particular speaker—as is the case for most desktop recognition software. Recognizing the speaker can simplify the task of translating speech.
Speech recognition is a broader term that refers to technology that can recognize speech without being targeted at a single speaker—such as a call center system that can recognize arbitrary voices.
Speech recognition applications include voice user interfaces such as voice dialing (e.g., "Call home"), call routing (e.g., "I would like to make a collect call"), domotic appliance control, search (e.g., find a podcast where particular words were spoken), simple data entry (e.g., entering a credit card number), preparation of structured documents (e.g., a radiology report), speech-to-text processing (e.g., word processors or emails), and aircraft (usually termed Direct Voice Input).

Training air traffic controllers

Training for air traffic controllers (ATC) represents an excellent application for speech recognition systems. Many ATC training systems currently require a person to act as a "pseudo-pilot", engaging in a voice dialog with the trainee controller, which simulates the dialog which the controller would have to conduct with pilots in a real ATC situation. Speech recognition and synthesis techniques offer the potential to eliminate the need for a person to act as pseudo-pilot, thus reducing training and support personnel. In theory, air traffic controller tasks are also characterized by highly structured speech as the primary output of the controller, which should reduce the difficulty of the speech recognition task. In practice this is rarely the case. The FAA document 7110.65 details the phrases that should be used by air traffic controllers. While this document gives fewer than 150 examples of such phrases, the number of phrases supported by one simulation vendor's speech recognition system is in excess of 500,000.
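To illustrate how constrained phraseology simplifies the recognition task, the following Python sketch matches a recognized utterance against a tiny set of phrase templates in the spirit of FAA Order 7110.65. The templates, call signs, and intents are invented for illustration; a real training simulator supports orders of magnitude more phrases.

import re

# A hypothetical, much-reduced set of ATC phrase templates in the spirit of
# FAA Order 7110.65 phraseology. A real training simulator supports far more.
PHRASE_TEMPLATES = {
    "turn":    re.compile(r"^(?P<callsign>\w+) turn (?P<dir>left|right) heading (?P<hdg>\d{3})$"),
    "climb":   re.compile(r"^(?P<callsign>\w+) climb and maintain (?P<alt>\d+(?:,\d{3})?)$"),
    "contact": re.compile(r"^(?P<callsign>\w+) contact (?P<facility>\w+) (?P<freq>\d{3}\.\d{1,2})$"),
}

def parse_instruction(utterance):
    """Return (intent, slots) for a recognized utterance, or None if it
    falls outside the supported phraseology."""
    text = utterance.strip().lower()
    for intent, pattern in PHRASE_TEMPLATES.items():
        match = pattern.match(text)
        if match:
            return intent, match.groupdict()
    return None

if __name__ == "__main__":
    print(parse_instruction("Delta123 turn left heading 270"))
    # -> ('turn', {'callsign': 'delta123', 'dir': 'left', 'hdg': '270'})

Because the controller's output is so structured, the recognizer only has to decide which template was spoken and fill in its slots, rather than transcribe unrestricted speech.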

The USAF, USMC, US Army, US Navy, and FAA, as well as a number of international ATC training organizations such as the Royal Australian Air Force and the Civil Aviation Authorities of Italy, Brazil, and Canada, are currently using ATC simulators with speech recognition from a number of different vendors.

Helicopters

The problems of achieving high recognition accuracy under stress and noise pertain strongly to the helicopter environment as well as to the fighter environment. The acoustic noise problem is actually more severe in the helicopter environment, not only because of the high noise levels but also because the helicopter pilot generally does not wear a facemask, which would reduce acoustic noise in the microphone. Substantial test and evaluation programs have been carried out in the past decade on applications of speech recognition systems in helicopters, notably by the U.S. Army Avionics Research and Development Activity (AVRADA) and by the Royal Aerospace Establishment (RAE) in the UK. Work in France has included speech recognition in the Puma helicopter. There has also been much useful work in Canada. Results have been encouraging, and voice applications have included: control of communication radios; setting of navigation systems; and control of an automated target handover system.
As in fighter applications, the overriding issue for voice in helicopters is the impact on pilot effectiveness. Encouraging results are reported for the AVRADA tests, although these represent only a feasibility demonstration in a test environment. Much remains to be done both in speech recognition and in overall speech technology in order to consistently achieve performance improvements in operational settings.
The Intelligent Flight Control System (IFCS) is a next-generation flight control system designed to provide increased safety for the crew and passengers of aircraft as well as to optimize the aircraft performance under normal conditions.[1] The main benefit of this system is that it will allow a pilot to control an aircraft even under failure conditions that would normally cause it to crash. The IFCS is being developed under the direction of the NASA Dryden Flight Research Center with the collaboration of the NASA Ames Research Center, Boeing Phantom Works, the Institute for Scientific Research at West Virginia University, and the Georgia Institute of Technology.

Objectives of IFCS

The main purpose of the IFCS project is to create a system for use in civilian and military aircraft that is both adaptive and fault tolerant.[1] This is accomplished through upgrades to the flight control software that incorporate self-learning neural network technology. The goals of the IFCS neural network project are:[2]
1. To develop a flight control system that can identify aircraft characteristics through the use of neural network technology in order to optimize aircraft performance.
2. To develop a neural network that can train itself to analyze the flight properties of the aircraft.
3. To demonstrate the aforementioned properties during flight on a modified F-15 ACTIVE aircraft, which is the testbed for the IFCS project.

Theory of Operation

The neural network of the IFCS learns flight characteristics in real time through the aircraft’s sensors and from error corrections from the primary flight computer, and then uses this information to create different flight characteristic models for the aircraft.[3] The neural network only learns when the aircraft is in a stable flight condition, and will discard any characteristics that would cause the aircraft to go into a failure condition. If the aircraft’s condition changes from stable to failure, for example when one of the control surfaces becomes damaged and unresponsive, the IFCS can detect the fault and switch the flight characteristic model for the aircraft. The neural network then works to drive the error between the reference model and the actual aircraft state to zero.
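The following Python sketch illustrates the "drive the error to zero" idea in a highly simplified form. It is a hypothetical toy model, not NASA's implementation: a one-dimensional "damaged" aircraft, a first-order reference model, and a linear adaptive element whose weights are updated online from the model-following error.

import numpy as np

# Simplified, hypothetical sketch of a direct adaptive augmentation loop.
# A reference model says how the aircraft *should* respond; the adaptive
# element learns a correction that drives the model-following error toward zero.

dt = 0.02                      # 50 Hz control loop
weights = np.zeros(3)          # one weight per feature in the adaptive element

def reference_model(cmd, x_ref):
    """First-order reference response toward the pilot command."""
    return x_ref + dt * 2.0 * (cmd - x_ref)

def plant(x, u):
    """Toy 'damaged aircraft' dynamics with unknown gain and bias."""
    return x + dt * (0.8 * u - 0.3 * x + 0.1)

x = x_ref = 0.0
for step in range(2000):
    cmd = 1.0                                  # pilot commands a unit rate
    x_ref = reference_model(cmd, x_ref)
    features = np.array([x, cmd, 1.0])         # simple basis for the adaptive element
    u_adaptive = float(weights @ features)     # adaptive correction
    x = plant(x, cmd + u_adaptive)             # baseline command + correction
    error = x_ref - x                          # model-following error
    weights += 0.5 * dt * error * features     # gradient-style online update

print(f"final model-following error: {x_ref - x:.4f}")

As the weights adapt, they absorb the unknown bias and off-nominal gain of the toy plant, so the measured response should track the reference model even though the controller was never given the damaged dynamics explicitly.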

Project History

Generation 1

Generation 1 IFCS flight tests were conducted in 2003 to test the outputs of the neural network.[1] In this phase, the neural network was pre-trained using flight characteristics obtained for the F-15S/MTD in a wind tunnel test and did not actually provide any control adjustments in flight.[2] The outputs of the neural network were run directly to instrumentation for data collection purposes only.

Generation 2

Generation 2 IFCS tests were conducted in 2005 and used a fully integrated neural network as described in the theory of operation.[3] It is a direct adaptive system that continuously provides error corrections and then measures the effects of these corrections in order to learn new flight models or adjust existing ones.[1] To measure the aircraft state, the neural network takes 31 inputs from the roll, pitch, and yaw axes and the control surfaces.[3] If there is a difference between the aircraft state and the model, the neural network adjusts the outputs of the primary flight computer through a dynamic inversion controller to bring the difference to zero before the corrected commands are sent to the actuator control electronics, which move the control surfaces.
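As a structural illustration only, the sketch below mirrors the signal flow described above: a 31-element measurement vector feeds an adaptive element, whose correction is added to the pilot command before a dynamic inversion step computes the surface commands. The model functions, matrices, and everything beyond the 31-input count are invented for illustration and are not the flight hardware's.

import numpy as np

# Hypothetical sketch of the Generation 2 signal flow described above:
# 31 sensor inputs -> adaptive correction -> dynamic inversion -> actuator commands.

N_INPUTS = 31                      # roll, pitch, yaw axes plus control-surface measurements

def f(rates):
    """Invented onboard model of the uncommanded angular accelerations."""
    return -0.5 * rates

def g_inv(rates):
    """Invented inverse control-effectiveness matrix (3 rates x 3 surfaces)."""
    return np.linalg.inv(np.array([[2.0, 0.1, 0.0],
                                   [0.0, 1.5, 0.1],
                                   [0.1, 0.0, 1.0]]))

def adaptive_correction(sensors, weights):
    """Neural-network stand-in: a linear map from the 31 measurements."""
    return weights @ sensors

def control_step(sensors, pilot_cmd, weights):
    rates = sensors[:3]                                # body-axis angular rates
    desired = pilot_cmd + adaptive_correction(sensors, weights)
    # Dynamic inversion: surface commands that yield the desired accelerations.
    return g_inv(rates) @ (desired - f(rates))

sensors = np.zeros(N_INPUTS)
weights = np.zeros((3, N_INPUTS))                      # untrained adaptive element
print(control_step(sensors, pilot_cmd=np.array([0.2, 0.0, 0.0]), weights=weights))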

Description of the AIDA project

Introduction

AIDA is a tool which supports the designer in the first phase of the aircraft design process: the conceptual design phase. There are already many design support tools available, but they focus on the subsequent design phases, which are more numerically oriented. ADAS (Bil, 1988) is such a tool, developed at the Faculty of Aerospace Engineering of Delft University of Technology. It supports the preliminary design of aircraft by optimising the concept of an aircraft and adding details.
The support tool described in this paper focuses on the conceptual design phase, which requires more creativity and qualitative reasoning. Artificial Intelligence (AI) techniques are applied to the more qualitative aspects of the conceptual design. The project is therefore called AIDA: Artificial Intelligence supported Design of Aircraft. With the assistance of AIDA, the designer should be able to give more attention to the creative issues, rather than spending much of his time keeping track of the design process and managing software tools.
Within the conceptual design process several tasks can be distinguished. To perform these tasks, various AI techniques are implemented in the AIDA project, such as Constraint-Based Reasoning, Case-Based Reasoning, and Rule-Based Reasoning. The intention is that the general concept of AIDA can also be used in other domains, such as ship or car design.
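As a minimal sketch of what rule- and constraint-based reasoning over a conceptual design might look like, the following Python fragment checks a candidate concept against hard constraints and fires suggestion rules. The concept fields, thresholds, and rules are invented for illustration and are not taken from the AIDA project; case-based reasoning (retrieving and adapting previous designs) is omitted entirely.

# Hypothetical sketch of rule- and constraint-based reasoning over a
# conceptual aircraft design. The rules and numbers are invented.

concept = {
    "passengers": 150,
    "range_km": 5500,
    "engines": 2,
    "wing_loading_kg_m2": 600,
}

# Constraint-based reasoning: every constraint must hold for a feasible concept.
constraints = [
    ("at least two engines for long-range operation",
     lambda c: c["range_km"] < 3000 or c["engines"] >= 2),
    ("wing loading within a plausible band",
     lambda c: 300 <= c["wing_loading_kg_m2"] <= 750),
]

# Rule-based reasoning: rules suggest design decisions rather than reject them.
rules = [
    ("consider a twin-aisle cabin", lambda c: c["passengers"] > 250),
    ("consider winglets for cruise efficiency", lambda c: c["range_km"] > 4000),
]

violations = [name for name, ok in constraints if not ok(concept)]
suggestions = [name for name, fires in rules if fires(concept)]

print("violated constraints:", violations or "none")
print("design suggestions:  ", suggestions or "none")

The split reflects the two roles mentioned above: constraints prune infeasible concepts, while rules steer the designer's attention during the qualitative, creative part of the process.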
