Friday, April 20, 2012

Supervised vs. Unsupervised learning


Machine learning algorithms are described as either 'supervised' or 'unsupervised'. The distinction is drawn from how the learner classifies data. In supervised algorithms, the classes are predetermined. These classes can be conceived of as a finite set, previously arrived at by a human. In practice, a certain segment of data will be labelled with these classifications. The machine learner's task is to search for patterns and construct mathematical models. These models are then evaluated on the basis of their predictive capacity in relation to measures of variance in the data itself. Many of the methods referenced in the documentation (decision tree induction, naive Bayes, etc.) are examples of supervised learning techniques.
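As a toy illustration of this supervised setting, here is a minimal nearest-centroid classifier in Python. It is not one of the methods from the documentation, and the labelled samples and class names are invented for the example: each predetermined class is summarised by the mean of its labelled samples, and new data is assigned to the closest class.

```python
# A minimal supervised learner: classes are predetermined and a labelled
# segment of data is provided. All data and class names here are invented.

def train(samples, labels):
    """Summarise each predetermined class by the mean of its labelled samples."""
    groups = {}
    for x, label in zip(samples, labels):
        groups.setdefault(label, []).append(x)
    return {label: tuple(sum(col) / len(xs) for col in zip(*xs))
            for label, xs in groups.items()}

def predict(model, x):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    return min(model, key=lambda label: sum((a - b) ** 2
                                            for a, b in zip(model[label], x)))

# A labelled segment of data: the classes were arrived at by a human.
samples = [(1.0, 1.2), (0.8, 1.0), (4.0, 4.1), (4.2, 3.9)]
labels = ["short", "short", "long", "long"]

model = train(samples, labels)
```

Calling `predict(model, (4.1, 4.0))` returns "long", since that point lies nearest the centroid of the 'long' samples.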
Unsupervised learners are not provided with classifications. In fact, the basic task of unsupervised learning is to develop classification labels automatically. Unsupervised algorithms seek out similarity between pieces of data in order to determine whether they can be characterized as forming a group. These groups are termed clusters, and there are a whole family of clustering machine learning techniques.
In unsupervised classification, often known as 'cluster analysis', the machine is not told how the texts are grouped. Its task is to arrive at some grouping of the data on its own. In a very common form of cluster analysis (K-means), the machine is told in advance how many clusters it should form -- a potentially difficult and arbitrary decision to make.
It is apparent from this minimal account that the machine has much less to go on in unsupervised classification. It has to start somewhere, and its algorithms try in iterative ways to reach a stable configuration that makes sense. The results vary widely and may be completely off if the first steps are wrong. On the other hand, cluster analysis has a much greater potential for surprising you. And it has considerable corroborative power if its internal comparisons of low-level linguistic phenomena lead to groupings that make sense at a higher interpretative level or that you had suspected but deliberately withheld from the machine. Thus cluster analysis is a very promising tool for the exploration of relationships among many texts.
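The K-means procedure sketched above can be written in a few lines of plain Python. This is a simplified sketch with invented data: the initial centres are simply the first K points (real implementations seed them randomly or with smarter schemes), and K must be supplied up front, which is exactly the arbitrary decision noted above.

```python
# A bare-bones K-means sketch. K must be chosen in advance -- the potentially
# arbitrary decision noted above. For simplicity the initial centres are the
# first K points; real implementations seed them randomly.

def kmeans(points, k, iters=20):
    centers = list(points[:k])
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Update step: each centre moves to the mean of its cluster.
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

points = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
centers, clusters = kmeans(points, k=2)
```

Even on this tiny example the iterative reassign-and-update loop settles into a stable configuration, grouping the two low points together and the two high points together.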
[Figure 2: supunsupcausa.eps]

Figure 2 illustrates the difference in the causal structure of supervised and unsupervised learning. It is also possible to have a mixture of the two, where both input observations and latent variables are assumed to have caused the output observations.
With unsupervised learning it is possible to learn larger and more complex models than with supervised learning. This is because in supervised learning one is trying to find the connection between two sets of observations. The difficulty of the learning task increases exponentially in the number of steps between the two sets and that is why supervised learning cannot, in practice, learn models with deep hierarchies.
In unsupervised learning, the learning can proceed hierarchically from the observations into ever more abstract levels of representation. Each additional hierarchy needs to learn only one step and therefore the learning time increases (approximately) linearly in the number of levels in the model hierarchy.
If the causal relation between the input and output observations is complex -- in a sense there is a large causal gap -- it is often easier to bridge the gap using unsupervised learning instead of supervised learning. This is depicted in figure 3. Instead of finding the causal pathway from inputs to outputs, one starts building the model upwards from both sets of observations in the hope that in higher levels of abstraction the gap is easier to bridge. Notice also that the input and output observations are in symmetrical positions in the model.


  
Figure 3: Unsupervised learning can be used for bridging the causal gap between input and output observations. The latent variables in the higher levels of abstraction are the causes for both sets of observations and mediate the dependence between inputs and outputs.
[Figure 3: supunsupgap.eps]

Neural network

Neural networks: A requirement for intelligent systems

Over the years, computational advances have given rise to new technologies. Such is the case with artificial neural networks, which have provided a variety of solutions to industry.
Designing and implementing intelligent systems has become a crucial factor for innovation and for developing better products for society. Examples include the implementation of artificial life and the answering of questions that linear systems are not able to resolve.
A neural network is a parallel system, capable of resolving paradigms that linear computing cannot. Let me cite a particular case. During the summer of 2006, the government required an intelligent crop protection system, one that would protect crop fields from seasonal plagues. The system consisted of a flying vehicle that would inspect crop fields by flying over them.
Now, imagine how difficult this was. Anyone who understood such a task would assume the project had been assigned to a multimillion-dollar enterprise capable of developing such technology. Nevertheless, it wasn't like that. The selected company was a small group of recently graduated engineers. Regardless of their lack of experience, the team was qualified. The team was divided into four sections, each assigned to develop a specific sub-system. The leader, an electronics specialist, developed the electronic system. Another member, a mechanics and hydraulics specialist, developed the drive system. The third member was a systems engineer who developed all the software and the communication system. The last member was assigned everything related to avionics and artificial intelligence.
Everything was going fine. When the time came to put the pieces together, everything fitted perfectly, until they found out the robot had no knowledge of its task. What happened? The one assigned to develop the artificial intelligence had forgotten to "teach the system". The solution should have been easy; however, training a neural network required additional tools, and the engineer assigned to the intelligent system had overlooked this inconvenience.
It was an outsider who suggested the best solution: acquiring neural network software. For an affordable price, the team bought the software, and with its help they designed and trained the system without a problem.
The story ended satisfactorily, but only in some parts of the design. The drive system was working perfectly, as were the software and the communication device. The intelligent system was doing its job. Nonetheless, the project was a complete failure. Why? They never taught it how to fly.

Designing a neural network efficiently

From experience, I know it is not necessary to be a programmer, nor to have deep knowledge of complex neural network algorithms, in order to design a neural network. There is a wide range of neural network software out there, and most of it is of good quality. My suggestion for those looking for an answer on neural network design is to acquire all the required tools. Good software will save you thousands of hours of programming, as well as of learning complex algorithms.

Concluding...

To end this preface I just really hope you find what you are looking for.

A personal recommendation (not AI related)

I have been running this website for more than three years, and I thought you might like to visit some other sites from my network. About MIDI: a site dedicated to giving useful information about the MIDI protocol, as well as recording and other musical tips. The Senior Business Advisor: this one is brand new. It is a website intended as a guide to business, containing useful reviews along with tips and articles.
Why am I putting this here? I just wanted you to know this is not my only website. I started this network not long ago, and I hesitated to put these links here because I did not want to interrupt your search into artificial intelligence technologies.

By the way, Happy New Year from NeuroAI.

Neural network introduction

This site is intended to be a guide to neural network technologies, technologies I believe are an essential basis for what awaits us in the future. The site is divided into three sections. The first contains technical information about the known neural network architectures; this section is merely theoretical. The second is a set of topics related to neural networks, such as artificial intelligence, genetic algorithms, and DSPs, among others.
The third section is the site blog, where I present personal projects related to neural networks and artificial intelligence, and where certain theoretical dilemmas can be understood with the aid of source code programs. The site is constantly updated with new content on artificial intelligence technologies.

Introduction

What is an artificial neural network?
An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system. Why would the implementation of artificial neural networks be necessary? Although computing these days is truly advanced, there are certain tasks that a program written for a common microprocessor is unable to perform. A software implementation of a neural network can be made, with its own advantages and disadvantages.
Advantages:
  • A neural network can perform tasks that a linear program cannot.
  • When an element of the neural network fails, the network can continue without any problem thanks to its parallel nature.
  • A neural network learns and does not need to be reprogrammed.
  • It can be implemented in any application.
  • It can be implemented without much difficulty.

Disadvantages:
  • The neural network needs training to operate.
  • The architecture of a neural network is different from the architecture of microprocessors, and therefore needs to be emulated.
  • Requires high processing time for large neural networks.
Another aspect of artificial neural networks is that there are different architectures, which consequently require different types of algorithms; but despite being an apparently complex system, a neural network is relatively simple. Artificial neural networks are among the newest signal processing technologies. The field is very interdisciplinary, but the explanation given here is restricted to an engineering perspective.
In the world of engineering, neural networks have two main functions: as pattern classifiers and as non-linear adaptive filters. Like its biological predecessor, an artificial neural network is an adaptive system. By adaptive, we mean that each parameter is changed during operation as the network is deployed to solve the problem at hand. This is called the training phase.
An artificial neural network is developed with a systematic step-by-step procedure which optimizes a criterion commonly known as the learning rule. The input/output training data is fundamental for these networks, as it conveys the information necessary to discover the optimal operating point. In addition, their non-linear nature makes neural network processing elements a very flexible system.

Basically, an artificial neural network is a system: a structure that receives an input, processes the data, and provides an output. Commonly, the input consists of a data array, which can be anything that can be represented in an array, such as data from an image file or a WAVE sound. Once an input is presented to the neural network and a corresponding desired or target response is set at the output, an error is computed from the difference between the desired response and the real system output.
The error information is fed back to the system, which adjusts its parameters in a systematic fashion (according to the learning rule). This process is repeated until the output is acceptable. It is important to note that performance hinges heavily on the data; hence, the data should be pre-processed with third-party algorithms such as DSP algorithms.
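The loop just described (present an input, set the desired response, form the error, feed it back to adjust the parameters, repeat) can be sketched for a single linear processing element. This is a minimal illustration with invented data; the adjustment used here is plain gradient descent, not a full neural network learning rule.

```python
# A toy version of the error-feedback loop: present an input, compare the
# output with the desired response, feed the error back to adjust the
# parameters, and repeat until the output is acceptable. The single linear
# unit and the data are invented for illustration.

inputs = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
targets = [0.0, 0.0, 0.0, 1.0]        # desired (target) responses

w = [0.0, 0.0]                        # adjustable parameters: weights
b = 0.0                               # and a bias term
rate = 0.1                            # learning rate

for epoch in range(200):
    for x, d in zip(inputs, targets):
        y = sum(wi * xi for wi, xi in zip(w, x)) + b   # real system output
        error = d - y                                  # desired minus actual
        # Feed the error back: each parameter moves in proportion to it.
        w = [wi + rate * error * xi for wi, xi in zip(w, x)]
        b += rate * error

final = [sum(wi * xi for wi, xi in zip(w, x)) + b for x in inputs]
```

After training, the output for the input (1.0, 1.0) is the largest of the four, showing that the repeated error feedback has shaped the parameters toward the desired responses.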
In neural network design, the engineer or designer chooses the network topology, the trigger (or performance) function, the learning rule, and the criteria for stopping the training phase. It is therefore quite difficult to determine the size and parameters of the network, as there is no rule or formula for doing so. The best we can do to succeed with our design is to experiment with it. The problem with this method is that when the system does not work properly, it is hard to refine the solution. Despite this issue, neural-network-based solutions are very efficient in terms of development time and resources. From experience, I can tell that artificial neural networks provide real solutions that are difficult to match with other technologies.
Fifteen years ago, Denker said: "artificial neural networks are the second best way to implement a solution", motivated by their simplicity, design and universality. Nowadays, neural network technologies are emerging as the technology of choice for many applications, such as pattern recognition, prediction, system identification and control.



The Biological Model

Artificial neural networks were born after McCulloch and Pitts introduced a set of simplified neurons in 1943. These neurons were represented as models of biological networks, turned into conceptual components for circuits that could perform computational tasks. The basic model of the artificial neuron is founded upon the functionality of the biological neuron. By definition, "neurons are basic signaling units of the nervous system of a living being in which each neuron is a discrete cell whose several processes arise from its cell body".
[Figure: a biological neural network]
The biological neuron has four main regions to its structure. The cell body, or soma, has two kinds of offshoots: the dendrites, and the axon, which ends in pre-synaptic terminals. The cell body is the heart of the cell; it contains the nucleus and maintains protein synthesis. A neuron has many dendrites, which look like a tree structure and receive signals from other neurons.
A single neuron usually has one axon, which expands off from a part of the cell body called the axon hillock. The axon's main purpose is to conduct electrical signals generated at the axon hillock down its length. These signals are called action potentials.
The other end of the axon may split into several branches, which end in a pre-synaptic terminal. The electrical signals (action potential) that the neurons use to convey the information of the brain are all identical. The brain can determine which type of information is being received based on the path of the signal.
The brain analyzes all the patterns of signals sent, and from that information it interprets the type of information received. Myelin is a fatty tissue that insulates the axon. The non-insulated parts of the axon are called the Nodes of Ranvier. At these nodes, the signal traveling down the axon is regenerated, ensuring that it travels down the axon fast and at constant strength.
The synapse is the area of contact between two neurons. The neurons do not physically touch; they are separated by a cleft, and the electric signals are sent through chemical interaction. The neuron sending the signal is called the pre-synaptic cell and the neuron receiving it the post-synaptic cell.
The electrical signals are generated by the membrane potential, which is based on the differences in concentration of sodium and potassium ions inside and outside the cell membrane.
Biological neurons can be classified by their function or by the quantity of processes they carry out. When they are classified by processes, they fall into three categories: Unipolar neurons, bipolar neurons and multipolar neurons.
Unipolar neurons have a single process. Their dendrites and axon are located on the same stem. These neurons are found in invertebrates.
Bipolar neurons have two processes: their dendrites and axon arise from two separate stems.
Multipolar neurons: these are commonly found in mammals. Some examples are spinal motor neurons, pyramidal cells and Purkinje cells.
When biological neurons are classified by function, they fall into three categories. The first group is sensory neurons, which provide all the information for perception and motor coordination. The second group provides information to muscles and glands; these are called motor neurons. The last group, the interneurons, contains all other neurons and has two subclasses. One group, called relay or projection interneurons, is usually found in the brain and connects different parts of it. The other group, called local interneurons, is only used in local circuits.

The Mathematical Model

When modeling an artificial functional model of the biological neuron, we must take into account three basic components. First, the synapses of the biological neuron are modeled as weights. Remember that the synapse of the biological neuron is what interconnects the neural network and gives the strength of the connection. For an artificial neuron, the weight is a number that represents the synapse. A negative weight reflects an inhibitory connection, while positive values designate excitatory connections. The next component of the model represents the actual activity of the neuron cell: all inputs are summed together, each modified by its weight. This activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output: an acceptable range of output is usually between 0 and 1, or between -1 and 1.
Mathematically, this process is described in the figure

From this model, the internal activity of the neuron can be shown to be:

vk = Σj wkj xj

that is, the weighted sum over all inputs xj, each multiplied by its synaptic weight wkj.

The output of the neuron, yk, would therefore be the outcome of some activation function applied to the value of vk: yk = Φ(vk).
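As a sketch, the two steps (the linear combination vk and the activation applied to it) look like this in Python. A sigmoid is used here as an example activation function, and the weights and bias are arbitrary example values.

```python
# The model written out: vk is the linear combination of the inputs and
# weights plus the bias, and yk is an activation function applied to vk.
# A sigmoid is used as an example activation; the numbers are arbitrary.
import math

def neuron(inputs, weights, bias):
    vk = sum(w * x for w, x in zip(weights, inputs)) + bias  # linear combination
    yk = 1.0 / (1.0 + math.exp(-vk))                         # activation applied to vk
    return yk

y = neuron([1.0, 0.5], [0.4, -0.2], bias=0.1)   # output lies in (0, 1)
```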

Activation functions

As mentioned previously, the activation function acts as a squashing function, such that the output of a neuron in a neural network is between certain values (usually 0 and 1, or -1 and 1). In general, there are three types of activation functions, denoted by Φ(.). First, there is the Threshold Function, which takes on a value of 0 if the summed input v is less than a certain threshold value, and a value of 1 if the summed input is greater than or equal to the threshold.

Secondly, there is the Piecewise-Linear function. This function again can take on the values of 0 or 1, but can also take on values between that depending on the amplification factor in a certain region of linear operation.

Thirdly, there is the sigmoid function. This function can range between 0 and 1, but it is also sometimes useful to use the -1 to 1 range. An example of the sigmoid function is the hyperbolic tangent function.
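For concreteness, the three activation functions can be sketched as follows. The threshold value, the amplification factor and the clipping range are illustrative choices, and the hyperbolic tangent stands in for the -1 to 1 sigmoid variant.

```python
# The three activation functions described above, side by side. The threshold
# value, amplification factor and output ranges are example choices.
import math

def threshold(v, theta=0.0):
    # 0 below the threshold, 1 at or above it.
    return 1.0 if v >= theta else 0.0

def piecewise_linear(v, gain=1.0):
    # Linear around zero with slope `gain`, clipped to the [0, 1] range.
    return min(1.0, max(0.0, gain * v + 0.5))

def sigmoid(v):
    # Smooth squashing into (0, 1).
    return 1.0 / (1.0 + math.exp(-v))

def tanh_act(v):
    # The -1 to 1 variant: the hyperbolic tangent.
    return math.tanh(v)
```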


The artificial neural networks which we describe are all variations on the parallel distributed processing (PDP) idea. The architecture of each neural network is based on very similar building blocks which perform the processing. In this chapter we first discuss these processing units and different neural network topologies, and then consider learning strategies as the basis for an adaptive system.

A framework for distributed representation

An artificial neural network consists of a pool of simple processing units which communicate by sending signals to each other over a large number of weighted connections. A number of major aspects of a parallel distributed model can be distinguished:
  • a set of processing units ('neurons,' 'cells');
  • a state of activation yk for every unit, which is equivalent to the output of the unit;
  • connections between the units; generally each connection is defined by a weight wjk which determines the effect which the signal of unit j has on unit k;
  • a propagation rule, which determines the effective input sk of a unit from its external inputs;
  • an activation function Fk, which determines the new level of activation based on the effective input sk(t) and the current activation yk(t) (i.e., the update);
  • an external input (also called bias or offset) θk for each unit;
  • a method for information gathering (the learning rule);
  • an environment within which the system must operate, providing input signals and, if necessary, error signals.

Processing units

Each unit performs a relatively simple job: receive input from neighbours or external sources and use this to compute an output signal which is propagated to other units. Apart from this processing, a second task is the adjustment of the weights. The system is inherently parallel in the sense that many units can carry out their computations at the same time. Within neural systems it is useful to distinguish three types of units: input units (indicated by an index i) which receive data from outside the neural network, output units (indicated by an index o) which send data out of the neural network, and hidden units (indicated by an index h) whose input and output signals remain within the neural network. During operation, units can be updated either synchronously or asynchronously. With synchronous updating, all units update their activation simultaneously; with asynchronous updating, each unit has a (usually fixed) probability of updating its activation at a time t, and usually only one unit will be able to do this at a time. In some cases the latter model has some advantages.

 

Neural Network topologies

In the previous section we discussed the properties of the basic processing unit in an artificial neural network. This section focuses on the pattern of connections between the units and the propagation of data. As for this pattern of connections, the main distinction we can make is between:
  • Feed-forward neural networks, where the data flow from input to output units is strictly feedforward. The data processing can extend over multiple (layers of) units, but no feedback connections are present, that is, connections extending from outputs of units to inputs of units in the same layer or previous layers.
  • Recurrent neural networks that do contain feedback connections. Contrary to feed-forward networks, the dynamical properties of the network are important. In some cases, the activation values of the units undergo a relaxation process such that the neural network will evolve to a stable state in which these activations do not change anymore. In other applications, the changes of the activation values of the output neurons are significant, such that the dynamical behaviour constitutes the output of the neural network (Pearlmutter, 1990).
Classical examples of feed-forward neural networks are the Perceptron and the Adaline. Examples of recurrent networks have been presented by Anderson (Anderson, 1977), Kohonen (Kohonen, 1977), and Hopfield (Hopfield, 1982).
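A feed-forward pass through one hidden layer can be sketched as below: data flows strictly from the input units through the hidden units to the output unit, with no feedback connections. All weights and biases are arbitrary illustrative values, not a trained network.

```python
# A minimal feed-forward pass: input units -> hidden units -> output unit,
# with no feedback connections. Weights and biases are arbitrary examples.
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def layer(inputs, weights, biases):
    # Each unit forms its weighted sum plus bias and squashes the result.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def feed_forward(x, hidden_w, hidden_b, out_w, out_b):
    hidden = layer(x, hidden_w, hidden_b)   # input units -> hidden units
    return layer(hidden, out_w, out_b)      # hidden units -> output units

y = feed_forward([0.5, -0.3],
                 hidden_w=[[0.1, 0.8], [-0.4, 0.2]], hidden_b=[0.0, 0.1],
                 out_w=[[0.3, -0.6]], out_b=[0.05])
```

A recurrent network, by contrast, would feed some of these outputs back into earlier units, so its behaviour over time, not a single pass, would constitute the result.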

Training of artificial neural networks

A neural network has to be configured such that the application of a set of inputs produces (either 'direct' or via a relaxation process) the desired set of outputs. Various methods to set the strengths of the connections exist. One way is to set the weights explicitly, using a priori knowledge. Another way is to 'train' the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule.
We can categorise the learning situations in two distinct sorts. These are:
  • Supervised learning or Associative learning in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system which contains the neural network (self-supervised).
  • Unsupervised learning or Self-organisation in which an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather the system must develop its own representation of the input stimuli.
  • Reinforcement Learning This type of learning may be considered as an intermediate form of the above two types of learning. Here the learning machine performs some action on the environment and gets a feedback response from the environment. The learning system grades its action as good (rewarding) or bad (punishable) based on the environmental response and accordingly adjusts its parameters. Generally, parameter adjustment is continued until an equilibrium state occurs, following which there will be no more changes in its parameters. Self-organizing neural learning may be categorized under this type of learning.

Modifying patterns of connectivity of Neural Networks

Both learning paradigms, supervised and unsupervised learning, result in an adjustment of the weights of the connections between units, according to some modification rule. Virtually all learning rules for models of this type can be considered as a variant of the Hebbian learning rule suggested by Hebb in his classic book Organization of Behaviour (1949) (Hebb, 1949). The basic idea is that if two units j and k are active simultaneously, their interconnection must be strengthened. If j receives input from k, the simplest version of Hebbian learning prescribes to modify the weight wjk with
Δwjk = γ yj yk
where γ is a positive constant of proportionality representing the learning rate. Another common rule uses not the actual activation of unit k but the difference between the actual and desired activation for adjusting the weights:

Δwjk = γ yj (dk − yk)

in which dk is the desired activation provided by a teacher. This is often called the Widrow-Hoff rule or the delta rule, and will be discussed in the next chapter. Many variants (often very exotic ones) have been published in the last few years.
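The two update rules can be written side by side; the symbols follow the text (yj, yk, dk, and the learning rate γ, here spelled gamma), and the starting weight and activation values are only examples.

```python
# One weight update under each rule: yj and yk are the activations of units
# j and k, dk the desired activation, gamma the learning rate. The starting
# weight and the activation values below are invented examples.

def hebb_update(w_jk, y_j, y_k, gamma=0.1):
    # Hebbian rule: strengthen the connection when j and k are active together.
    return w_jk + gamma * y_j * y_k

def delta_update(w_jk, y_j, y_k, d_k, gamma=0.1):
    # Widrow-Hoff (delta) rule: adjust by the desired-minus-actual difference.
    return w_jk + gamma * y_j * (d_k - y_k)

w_new = hebb_update(0.5, y_j=1.0, y_k=1.0)             # 0.5 + 0.1*1*1
w2_new = delta_update(0.5, y_j=1.0, y_k=0.8, d_k=1.0)  # 0.5 + 0.1*1*0.2
```

Note that when the actual activation already equals the desired one, the delta rule leaves the weight unchanged, whereas the Hebbian rule keeps strengthening the connection as long as both units are active.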

What is Soft Computing?




Introduction
The concept of fuzzy set was introduced by Zadeh in 1965 to allow elements to belong to a set in a gradual rather than an abrupt way (i.e. permitting memberships valued in the interval [0,1] instead of in the set {0,1}). Ever since then, applications and developments based on this simple concept have evolved to such an extent that it is practically impossible nowadays to encounter any area or problem where applications, developments, products, etc. are not based on fuzzy sets.
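A one-line example of gradual membership: the fuzzy set "tall", where membership rises from 0 to 1 over an invented 160-180 cm band rather than jumping abruptly from {0,1}.

```python
# Gradual membership, the idea behind fuzzy sets: instead of an element being
# simply in or out of the set "tall", it has a degree of membership in [0, 1].
# The 160-180 cm transition band is an invented, illustrative choice.

def membership_tall(height_cm):
    if height_cm <= 160:
        return 0.0                        # definitely not in the set
    if height_cm >= 180:
        return 1.0                        # fully in the set
    return (height_cm - 160) / 20.0       # gradual transition in between

grade = membership_tall(170)              # 0.5: "somewhat tall"
```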
One important type of problem in particular is optimization problems, which seek the best value that a function may reach on a previously specified set; these, and everything relating to them, are covered by the area known as mathematical programming. When fuzzy elements are considered in mathematical programming, fuzzy optimization methods emerge, and these are perhaps one of the most fruitful areas of fuzzy-related knowledge, both from the theoretical and the applied points of view. Yet despite all its methods and models for solving an enormous variety of real practical problems, fuzzy mathematical programming, like its conventional counterpart, cannot solve every possible situation: a problem may be expressible in fuzzy terms and yet not be solvable with fuzzy techniques.
The need to solve ever larger real problems, the impossibility of finding exact solutions to these problems in every case, and the need to provide answers in a great many practical situations have led to the increasing use of heuristic algorithms, which have proved to be valuable tools capable of providing solutions where exact algorithms cannot. In recent years, a large catalogue of heuristic techniques has emerged, inspired by the principle that satisfaction is better than optimization: rather than failing to provide the optimal solution to a problem, it is better to give a solution which at least satisfies the user in some previously specified way. These techniques have proved to be extremely effective.
These heuristics are said to have been mostly inspired by nature, society, physics, etc. to produce theoretical models which match the circumstances considered, and from this perspective, it has been possible to solve cases which, until only very recently, were impossible with conventional techniques. In most cases, however, the solutions achieved have not been optimal and are instead “almost optimal”, having been obtained with criteria other than the classic ”achieving the best value of the objective function”, by considering characteristics which have been subjectively established by the decision-maker.
It is well known that when we speak of human subjectivity, or even of closeness to an ideal value, the best comparative way of modelling this type of situation is by means of fuzzy sets, or more generally with soft computing methodologies. This method of modelling subjectivity (so well developed in other fields) has hardly ever been applied to heuristic algorithm design, despite all indications that it might well be a very promising approach, because in addition to providing solutions as close to the optimum as well-known conventional heuristics do,
a) they solve the problem in a less costly way than other methods;
b) they generalize already known heuristics; and
c) the hybridization in the soft computing context favours and enriches the appearance of original procedures which can help resolve new problems.
However, while the historic path of fuzzy sets and systems has been much explored, the same cannot be said of soft computing. In order to narrow this gap, we will describe what soft computing is and what is understood by heuristics, and from both concepts we will attempt to find a common ground where the best of both worlds can be combined. Two results follow: the first is soft computing-based metaheuristic procedures, which appear to be among the most promising tools for the effective solution of problems as yet impossible to solve, and for finding solutions which suit the person looking for them; the second (as a consequence of the first) is a new description of the components which define soft computing, which will further extend its sphere of application.
Consequently, the next section presents the original concept of soft computing and its main classical constituents. Section 3 then focuses on the definition of heuristics and metaheuristics. The review of the soft computing components is carried out in section 4, and in section 5 new hybrid metaheuristics in soft computing are presented and briefly described. The main conclusions and the bibliography close the paper.
Soft Computing
Prior to 1994, when Zadeh [2] first defined "soft computing", the currently-handled concepts used to be referred to in an isolated way, whereby each was spoken of individually with an indication of the use of fuzzy methodologies. Although the idea of establishing the area of soft computing dates back to 1990 [3], it was in [2] that Zadeh established the definition of soft computing in the following terms:
"Basically, soft computing is not a homogeneous body of concepts and techniques. Rather, it is a partnership of distinct methods that in one way or another conform to its guiding principle. At this juncture, the dominant aim of soft computing is to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness and low solutions cost. The principal constituents of soft computing are fuzzy logic, neurocomputing, and probabilistic reasoning, with the latter subsuming genetic algorithms, belief networks, chaotic systems, and parts of learning theory. In the partnership of fuzzy logic, neurocomputing, and probabilistic reasoning, fuzzy logic is mainly concerned with imprecision and approximate reasoning; neurocomputing with learning and curve-fitting; and probabilistic reasoning with uncertainty and belief propagation".
It is therefore clear that soft computing has no precise definition; rather, it is defined by extension, by means of the different concepts and techniques which attempt to overcome the difficulties that arise in real problems occurring in a world which is imprecise, uncertain and difficult to categorize.
There have been various subsequent attempts to further hone this definition, with differing results, and among the possible alternative definitions, perhaps the most suitable is the one presented in [4]: "Every computing process that purposely includes imprecision into the calculation on one or more levels and allows this imprecision either to change (decrease) the granularity of the problem, or to "soften" the goal of optimalisation at some stage, is defined as to belonging to the field of soft computing".
The viewpoint that we will consider here (and adopt in what follows) is another way of defining soft computing, whereby it is considered the antithesis of what we might call hard computing. Soft computing can then be seen as a series of techniques and methods for handling real practical situations in the same way as humans deal with them, i.e. on the basis of intelligence, common sense, consideration of analogies, approximation, and so on. In this sense, soft computing is a family of problem-resolution methods headed by approximate reasoning and by functional and optimization approximation methods, including search methods. Soft computing is therefore the theoretical basis for the area of intelligent systems; the difference between the area of artificial intelligence and that of intelligent systems is that the first is based on hard computing and the second on soft computing.
From this other viewpoint, on a second level, soft computing can then be expanded into other components which contribute to a definition by extension, such as the one first given. From the beginning [5], the components considered most important on this second level have been probabilistic reasoning, fuzzy logic and fuzzy sets, neural networks, and genetic algorithms (GA), which, because of their interdisciplinary character, applications and results, immediately stood out over other methodologies such as the previously mentioned chaos theory, evidence theory, etc. The popularity of GA, together with their proven efficiency in a wide variety of areas and applications, their attempt to imitate natural creatures (e.g. plants, animals, humans) which are clearly soft (i.e. flexible, adaptable, creative, intelligent, etc.), and especially their extensions and different versions, transformed this fourth second-level ingredient into the well-known evolutionary algorithms (EA), which consequently comprise the fourth fundamental component of soft computing, as shown in the following diagram:

From this last conception of soft computing, in which fuzzy sets and fuzzy logic necessarily play a basic role, we can describe the other areas emerging around it simply by considering some of the possible combinations which can arise:
1. From the first level and beginning with approximate reasoning methods, when we only concentrate on probabilistic models, we encounter the Dempster-Shafer theory and Bayesian networks. However, when we consider probabilistic methods combined with fuzzy logic, and even with some other multi-valued logics, we encounter what we could call hybrid probabilistic models, fundamentally probability theory models for fuzzy events, fuzzy event belief models, and fuzzy influence diagrams.
2. When we look at the developments directly associated with fuzzy logic, fuzzy systems and in particular fuzzy controllers stand out. Then, arising from the combination of fuzzy logic with neural networks and EA are fuzzy logic-based hybrid systems, the foremost exponents of which are fuzzy neural systems, controllers adjusted by neural networks (neural fuzzy systems which differ from the previously mentioned fuzzy neural systems), and fuzzy logic-based controllers which are created and adjusted with EA.
3. Moving through the first level to the other large area covered by soft computing (functional approach/optimization methods) the first component which appears is that of neural networks and their different models. Arising from the interaction with fuzzy logic methodologies and EA methodologies are hybrid neural systems, and in particular fuzzy control of network parameters, and the formal generation and weight generation in neural networks.
4. The fourth typical component of soft computing, and perhaps the newest and most topical, is that of EA, and associated with these are four large, important areas: evolutionary strategies, evolutionary programming, GA, and genetic programming. If we were to focus only on these last areas, we could consider that in this case the amalgam of methodologies and techniques associated with soft computing culminates in three important lines: fuzzy genetic systems, bioinspired systems, and applications for the fuzzy control of evolutionary parameters.
On further examination of this last component, some additional considerations are needed. Firstly, however broadly we interpret what can be embraced by fuzzy genetic systems, bioinspired systems, and fuzzy control applications on evolutionary parameters, other important topics are missing from this description. Secondly, if we are referring in particular to bioinspired systems, it is clear that they are not only the product of fuzzy logic, neural networks or EA (with all the variants that we can consider for these three components), but that other extremely important methodologies are also involved in them.
In the sections which follow we will therefore justify a new definition for soft computing components, which was first referred to in [6], in order to provide a clearer perspective of the different areas that this covers without any loss of essence.
Heuristics and Metaheuristics
As stated in [7], since the fuzzy boom of the 1990s, methodologies based on fuzzy sets (i.e. soft computing) have become a permanent part of all areas of research, development and innovation. Their application has been extended to all areas of our daily life (health, banking, the home), and they are also an object of study at different educational levels. Similarly, there is no doubt that, thanks to the technological potential currently available, computers can handle problems of tremendous complexity (both in comprehension and in dimension) in a wide variety of new fields.
As we mentioned above, since the mid 1990s, GA (or EA from a general point of view) have proved to be extremely valuable for finding good solutions to specific problems in these fields, and thanks to their scientific attractiveness, the diversity of their applications and the considerable efficiency of their solutions in intelligent systems, they have been incorporated into the second level of soft computing components.
EA, however, are merely another class of heuristics, or metaheuristics, in the same way as Tabu Search, Simulated Annealing, Hill Climbing, Variable Neighbourhood Search, Estimation of Distribution Algorithms (EDA), Scatter Search, GRASP, Reactive Search and very many others. Generally speaking, all these heuristic algorithms (metaheuristics) usually provide solutions which are not ideal, but which largely satisfy the decision-maker or the user. In acting on the basis that satisfaction is better than optimization, they perfectly illustrate Zadeh's famous sentence [2]: "...in contrast to traditional hard computing, soft computing exploits the tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, low solution-cost, and better rapport with reality".
Consequently, among the soft computing components, instead of EA (which can represent only one part of the search and optimization methods used), heuristic algorithms and even metaheuristics should be considered.
There is usually controversy about the difference between metaheuristics and heuristics, and while it is not our intention here to enter into this debate, we are interested in offering a brief reflection on both concepts. The term heuristics comes from the Greek word “heuriskein”, the meaning of which is related to the concept of finding something and is linked to Archimedes’ famous and supposed exclamation, “Eureka!”.
On this basis, a large number of heuristic procedures have been developed to solve specific optimization problems with great success, and the best of these have been extracted and used in other problems or in more extensive contexts. This has contributed to the scientific development of this field of research and to the extension of the application of its results. As a result, metaheuristics have emerged, a term which appeared for the first time in an article by Fred Glover in 1986.
The term metaheuristics derives from the combination of the word heuristics with the prefix meta (meaning beyond or of a higher level), and although there is no formal definition for the term metaheuristics, the following two proposals give a clear representation of the general notion of the term:
a) I. H. Osman and G. Laporte [18]: "An iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space".
b) S. Voss et al. [19]: "is an iterative master process that guides and modifies the operations of subordinate heuristics to efficiently produce high quality solutions".
It is therefore clear that metaheuristics operate at a broader level than heuristics. In the sections which follow, we will focus on the concept of metaheuristics, and will start by pointing out that, in the terms we have defined, certain metaheuristics will always perform better than others when it comes to solving problems.
In order to achieve the best performance of the metaheuristics, it is desirable for them to have a series of “good properties” which include simplicity, independence, coherence, effectiveness, efficiency, adaptability, robustness, interactivity, diversity, and autonomy [8]. In view of their definition and the series of desirable characteristics, it is both logical and obvious that EA are to be found among metaheuristics and they are therefore well placed with the other second-level soft computing components to facilitate the appearance of new theoretical and practical methodologies, outlines, and frameworks for a better understanding and handling of generalized imprecision in the real world (as explained in [3]).
A review of Soft Computing Components
Returning to the previous description of the components which describe soft computing on the different levels, we could say that the most important second-level components are probabilistic reasoning, fuzzy logic and sets, neural networks and in view of what we have explained, metaheuristics (which would typically encompass EA but would not be confined to these exclusively). The new defining framework for the main methodologies which make up soft computing would therefore be described as in the following diagram:

As we explained before, rather than understanding soft computing methodologies in an isolated way, it is necessary to understand them through the hybridization of their second-level components. Correspondingly, it is perfectly logical for us to explore the new theoretical-practical facets deriving from the appearance of metaheuristics among these components.
There are so many and such a variety of metaheuristics available that it is practically impossible to agree on one universally-accepted way of classifying them. Nevertheless, the hierarchy on which there is the most consensus considers three (or four) foremost groups:
1) evolutionary metaheuristics, procedures based on sets of solutions which evolve according to natural evolution principles;
2) relaxation metaheuristics, problem-solving methods which use adaptations (relaxations) of the original model that are easier to solve;
3) neighbourhood search metaheuristics, which explore the solution space and exploit neighbourhood structures associated with those solutions;
4) other types of metaheuristics intermediate between, or derived in some way from, the ones above, which we will not consider because of their great variability (and to avoid dispersion).
Classifying the metaheuristics in this way, what is immediately apparent is that our previous definition of soft computing "by extension", according to its components, not only maintains the essence of Zadeh's original definition but generalizes and expands it to contemplate new possibilities. In effect, if we call these four groups of metaheuristics MH(1), ..., MH(4), respectively, the previous diagram can now be represented more explicitly as shown below,

where, because the classic soft computing components are still present, the different known and studied areas remain as they were, emerging as always when two or more of these components are interrelated. However, having incorporated new possibilities into the fourth component (metaheuristics), it now makes perfect sense to expect new hybrid models to appear and be developed.
In order to demonstrate the range of study areas at our disposal when metaheuristics is taken as the base component, in the following sections we will concentrate on describing the hybridizations which arise through the use of the previous categorization.
Hybrid Metaheuristics in Soft Computing
In this section, we will consider the three main groups of metaheuristics mentioned previously. From these, we will then describe the new metaheuristics which have emerged, dwelling briefly on those which, being more recent, are less developed or less popular.
5.1. Evolutionary Metaheuristics. These metaheuristics are by far the most popular and define mechanisms for developing an evolution in the search space of the sets of solutions in order to come close to the ideal solution with elements which will survive in successive generations of populations. In the context of soft computing, the hybridizations which take these metaheuristics as a reference are fundamental:

Although this is a very important and very wide area (covering everything from fuzzy genetic systems to the adjustment of fuzzy controllers with evolutionary algorithms, in addition to EDA, bioinspired systems, etc.), it is beyond the scope of this article and those interested should refer to ([9, 10, 11]).
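The evolutionary idea behind these metaheuristics can be sketched in a few lines. The following toy example is not drawn from the works cited above; it is a minimal generational EA that evolves binary strings towards the OneMax optimum (maximize the number of ones) using tournament selection, one-point crossover and bit-flip mutation:

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=100, mut_rate=0.05):
    """Minimal generational EA: tournament selection, one-point
    crossover and bit-flip mutation over binary strings."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection: the fitter of two random individuals is a parent.
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            cut = random.randrange(1, length)                  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < mut_rate) for b in child]  # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# OneMax: fitness is simply the number of ones in the string.
best = evolve(fitness=sum)
print(sum(best))  # typically at or very near the optimum of 20
```

Real EA variants differ in their encodings and operators, but all share this evolve-select-vary loop over a population of candidate solutions.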
5.2. Relaxation metaheuristics. A real problem may be relaxed when it is simplified by eliminating, weakening or modifying one of its characteristic elements. Relaxation metaheuristics are strategies for relaxing the problem in heuristic design, and they are able to find solutions for problems which would otherwise have been very difficult to solve without this methodology. Examples are rounding up or down, or adjustments of a similar nature, as occurs when an imprecisely and linguistically expressed quantity is associated with an exact numerical value. From this point of view, a real alternative is to make exact algorithms more flexible: introducing fuzzy stop criteria, which eventually leads to rule-based relaxation metaheuristics; admitting vagueness in the coefficients, which justifies algorithms for solving problems with fuzzy parameters; and relaxing the verification of restrictions, allowing certain violations in their fulfilment:

In order to illustrate some of these metaheuristics more specifically, we will consider algorithms with fuzzy stop criteria [12, 13]. We know that the stop criteria fix the end conditions of an algorithm’s iterative procedure, establishing these criteria from the problem’s theoretical features, from the type of solution being sought, and from the type of algorithm used. If a given algorithm provides the succession (xn) of feasible solutions, some of the most frequent stop criteria are:
a) stop the process after N iterations;
b) stop the process when the relative or absolute distance between two elements in the succession, from a certain iteration onwards, is less than or equal to a prefixed value;
c) stop the process when a prefixed measure g(xn) satisfies a certain condition such as being less than or equal to a constant.
In short, it can be said that an algorithm determines a reference set and stops when the set specified in the stop criteria has been obtained. The flexibilization of exact algorithms with the introduction of fuzzy stop criteria therefore assumes that the reference set is considered to be a fuzzy set, and the stop criteria are fixed according to the membership degree of the elements.
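As an illustration only (the linear membership function and the tolerances below are our own assumptions, not taken from [12, 13]), a fuzzy stop criterion can be sketched as a membership degree of "the search has converged" that must reach a threshold, instead of a crisp cut-off on the improvement:

```python
def fuzzy_stop(history, tol=1e-3, alpha=0.9):
    """Fuzzy stop criterion: the membership degree of 'the improvement
    is negligible' is 1 when the last step changed nothing and decays
    linearly to 0 at a change of size tol.  Stop once the degree
    reaches the threshold alpha."""
    if len(history) < 2:
        return False
    delta = abs(history[-1] - history[-2])
    mu = max(0.0, 1.0 - delta / tol)   # membership degree in [0, 1]
    return mu >= alpha

# Toy iterative process: x_{n+1} = (x_n + 2/x_n) / 2 converges to sqrt(2).
x, values = 3.0, [3.0]
while not fuzzy_stop(values):
    x = (x + 2.0 / x) / 2.0
    values.append(x)
print(round(x, 4))  # 1.4142
```

Tightening alpha towards 1 recovers an almost-crisp criterion; lowering it stops the algorithm earlier, trading precision for running time.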
5.3. Search metaheuristics. Generally speaking, these are probably the most important metaheuristics, and their basic operation consists in establishing strategies for exploring the solution space of the problem, iterating from starting solutions. Although at first sight they might appear similar to evolutionary searches, they are not, since evolutionary searches base their operation on the evolution of a population of individuals in the search space. Search metaheuristics are usually described by means of various metaphors (bioinspired, sociological, nature-based, etc.), which makes them extremely popular.
Outside this descriptive framework, however, a search can be made by means of a single procedure or by more than one (in which case the search methods may or may not cooperate with each other), so search metaheuristics (without this classification being exclusive to this section) can be considered as individual or multiple, the latter allowing different agents to cooperate with each other. The different options which can emerge in the context of soft computing are collected in the following diagram:

Among the best-known individual metaheuristics are Hill Climbing, greedy-like methods, Multi-start, Variable Neighbourhood Search, Simulated Annealing and Tabu Search, among others, each of which has its own fuzzy extensions.
Independently of their specific method of action, all these metaheuristics explore the search space guided by evaluations of the objective function of the specific problem being solved, which explicitly supposes performing numerical valuations with the help of an objective function in a precisely defined space. Only too often, however, the objective function represents some vaguely established property, and the search space (or the neighborhoods being searched) has no clearly defined boundaries, which makes it logical to approach the application of these metaheuristics with theoretical elements from the sphere of fuzzy logic and fuzzy sets. It is precisely in this context that FANS-type algorithms emerge [14, 15].
FANS is a neighborhood search method in which solutions are evaluated not only in terms of the objective function but also through fuzzy properties and concepts which enable qualitative valuations of the solutions. It is also a method which may be adapted to the context, since its behaviour varies according to the state of the search through the use of various administrators. FANS is based on four main components (O, FV, OS and NS), and a diagram of the algorithm is shown below to display the interaction between these four components.

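The flavour of this idea can be conveyed with a short sketch. The code below is not the published FANS algorithm: it is a minimal illustration of its fuzzy-valuation idea, with a hypothetical linear membership function standing in for the fuzzy valuation (FV) component and a simple random move standing in for the neighborhood operator (NS):

```python
import random

def acceptability(candidate_cost, current_cost):
    """Fuzzy valuation (the FV role): membership degree of 'the
    candidate is acceptable'.  Improvements get degree 1; mild
    deteriorations get a decaying degree instead of flat rejection."""
    if candidate_cost <= current_cost:
        return 1.0
    worsening = (candidate_cost - current_cost) / max(current_cost, 1e-9)
    return max(0.0, 1.0 - 10.0 * worsening)   # tolerate up to a 10% worsening

def fuzzy_neighborhood_search(cost, start, steps=500, threshold=0.6):
    """Sketch of a FANS-style loop: sample a neighbor (the NS role) and
    move there when its fuzzy valuation reaches the threshold."""
    current = best = start
    for _ in range(steps):
        neighbor = current + random.uniform(-0.5, 0.5)   # neighborhood move
        if acceptability(cost(neighbor), cost(current)) >= threshold:
            current = neighbor
        if cost(current) < cost(best):
            best = current
    return best

best = fuzzy_neighborhood_search(cost=lambda x: (x - 3.0) ** 2 + 1.0, start=0.0)
print(round(best, 1))  # near the minimizer 3.0
```

The qualitative acceptance rule is what distinguishes this from a crisp local search: solutions are judged by degrees of a fuzzy property rather than by the objective value alone.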
If, however, the search procedure is performed using various metaheuristics, there is always the possibility of cooperation between them [16], and therefore of generalizing everything described so far to the context of parallelism. This is obviously beyond the scope of this paper, but it is worth reflecting on: with the proliferation of parallel computing, more powerful workstations and faster communication networks, parallel implementations of metaheuristics have emerged naturally and provide an interesting alternative for increasing the speed of the search for solutions. Various strategies have correspondingly been proposed and applied, and these have proved very efficient for solving large-scale problems and for finding better solutions than their sequential counterparts, thanks to the division of the search space or to improved intensification and diversification of the search. As a result, parallelism (and therefore multiple metaheuristics) not only constitutes a way of reducing the execution times of individual metaheuristics, but also of improving their effectiveness and robustness.
In the soft computing framework, the basic idea developed so far consists in supposing that there is a set of resolving agents [17], which are basically algorithms for solving combinatorial optimization problems, and in executing them cooperatively by means of a coordinating agent to solve the problem in question, taking generality based on minimum knowledge of the problem as a fundamental premise. Each resolving agent acts autonomously and communicates only with the coordinating agent, sending it the solutions it finds and receiving guidelines about how to proceed. The coordinating agent receives the solutions found by each resolving agent and, following a fuzzy rule base which models its behaviour, creates the guidelines which it then sends to them, thereby taking total control of the strategy.
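As a rough illustration only (the single rule below is a deliberately crude stand-in for the fuzzy rule base described in [17], and all names and parameters are our own), the cooperative scheme might be sketched as follows: several agents search in parallel, and a coordinator tells each one to intensify or diversify depending on a degree of closeness to the incumbent best solution:

```python
import random

def coordinate(cost, n_agents=4, rounds=200):
    """Sketch of a cooperative multi-agent strategy.  The coordinator
    applies one crude fuzzy-style rule: an agent whose cost is close to
    the incumbent best intensifies (small steps); otherwise it
    diversifies (large jumps)."""
    states = [random.uniform(-10.0, 10.0) for _ in range(n_agents)]
    best = min(states, key=cost)
    for _ in range(rounds):
        for i, s in enumerate(states):
            # Degree of 'this agent is close to the best', in [0, 1].
            closeness = 1.0 / (1.0 + abs(cost(s) - cost(best)))
            step = 0.1 if closeness > 0.5 else 2.0      # coordinator guideline
            candidate = s + random.uniform(-step, step)
            if cost(candidate) < cost(s):               # agent keeps improvements
                states[i] = candidate
            if cost(states[i]) < cost(best):            # agent reports its solution
                best = states[i]
    return best

best = coordinate(cost=lambda x: abs(x - 5.0))
```

In a genuine implementation the agents would be heterogeneous metaheuristics running as parallel threads, and the coordinator's behaviour would be governed by a full fuzzy rule base rather than a single threshold.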
Conclusion
The concept of the fuzzy set has been, and remains, a paradigm in the scientific-technological world, with important repercussions in all social sectors because of the diversity of its applications, the ease of its technological transfer, and the economic savings its use entails. Although the first article on the subject, published about 40 years ago, was met with resistance from certain academic sectors, time has shown that fuzzy sets constitute the nucleus of a doctrinal body of unquestionable solidity, dynamism and international recognition, known as soft computing.
It is precisely this dynamism which has led us to reflect in this article on the defining limits of soft computing, in an attempt to widen the range of its basic components with the inclusion of metaheuristics. This wider and more general perspective of soft computing allows new and as yet undeveloped search/optimization methods to be incorporated (without any of the already explored methods taking the leading role), thereby avoiding the tendency indicated by Zadeh in [3] to proclaim the methodology in which we are interested to be the best (which, as Zadeh pointed out, is yet another version of the famous hammer principle: "When the only tool you have is a hammer, everything begins to look like a nail").
References
[1] Zadeh, L.A. (1965): Fuzzy Sets. Information and Control, 338-353.
[2] Zadeh, L.A. (1994). Soft Computing and Fuzzy Logic. IEEE Software 11, 6, 48-56.
[3] Zadeh, L.A. (2001): Applied Soft Computing. Applied Soft Computing 1, 1–2
[4] Li, X., Ruan, D. and van der Wal, A.J. (1998): Discussion on soft computing at FLINS'96. International Journal of Intelligent Systems, 13, 2-3, 287- 300.
[5] Bonissone, P. (2002): Hybrid Soft Computing for Classification and Prediction Applications. Invited lecture, 1st International Conference on Computing in an Imperfect World (Soft-Ware 2002), Belfast.
[6] Verdegay, J.L. (2005): Una revisión de las metodologías que integran la "Soft Computing". Actas del Simposio sobre Lógica Fuzzy y Soft Computing (LFSC2005). Granada, 151-156
[7] Verdegay, J.L., Ed. (2003): Fuzzy Sets-based Heuristics for Optimization. Studies in Fuzziness. Springer Verlag
[8] Melián, B., Moreno Pérez, J.A., Moreno Vega, J.M. (2003): Metaheurísticas: Una visión global. Revista Iberoamericana de Inteligencia Artificial 19, 2, 7-28
[9] Cordón, O., F. Gomide, F. Herrera, F. Hoffmann, L. Magdalena (2004): Ten Years of Genetic Fuzzy Systems: Current Framework and New Trends. Fuzzy Sets and Systems 141:1, 5-31.
[10] Larrañaga, P., J.A. Lozano, H. Mühlenbein (2003): Algoritmos de estimación de distribuciones en problemas de optimización combinatoria. Inteligencia Artificial. Revista Iberoamericana de Inteligencia Artificial, 19(2), 149-168.
[11] Arenas, M.G., F. Herrera, M. Lozano, J.J. Merelo, G. Romero, A.M. Sánchez (Eds) (2005): Actas del IV Congreso Español sobre Metaheurísticas, Algoritmos Evolutivos y Bioinspirados (MAEB'05) I y II.
[12] Vergara-Moreno, E. (1999): Nuevos Criterios de Parada en Algoritmos de Optimización. Doctoral thesis, Universidad de Granada.
[13] Verdegay, J.L. y E. Vergara-Moreno (2000): Fuzzy Termination Criteria in Knapsack Problem Algorithms. Mathware and Soft Computing VII, 2-3, 89-97.
[14] Pelta, D.A. (2002): Algoritmos Heurísticos en Bioinformática. Doctoral thesis, Universidad de Granada.
[15] Blanco, A., D. Pelta y J.L. Verdegay (2002): A Fuzzy Valuation-based Local Search Framework for Combinatorial Problems. Fuzzy Optimization and Decision Making 1, 177-193.
[16] Cruz Corona, C. (2005): Estrategias cooperativas multiagentes basadas en Soft Computing para la solución de problemas de optimización. Doctoral thesis, Universidad de Granada.
[17] Pelta, D.A., A. Sancho-Royo, C. Cruz and J.L. Verdegay: Using memory and fuzzy rules in a co-operative multi-thread strategy for optimization. Information Sciences (in press).
[18] Osman, I.H. and Laporte, G. (1996): Metaheuristics: A bibliography. Annals of Operations Research 63, 513-623.
[19] Voss, S., Martello, S., Osman, I.H. and Roucairol, C., Eds. (1999): Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization. Kluwer Academic Publishers.



A Wide Area Network (WAN) Tutorial [Technology Explained]

If you are at home reading this, then you are most likely connected to the Internet. Whether it is by a wireless signal or a physical Ethernet connection, you are part of a network. Your home network (all computers, routers, modems, etc.) is called a local area network (LAN).
A wide area network (WAN) is a large telecommunications network that consists of a collection of LANs and other networks. WANs generally span a wide geographical area, and can be used to connect cities, states, or even countries.
Although they appear like an up-scaled version of a LAN, WANs are actually structured and operated quite differently. This wide area network tutorial serves to explain how WANs are designed/constructed and why their use is beneficial.

Wide Area Network - Connection Options

“Many WANs are built for one particular organization and are private. Others, built by Internet service providers (ISPs), provide connections from an organization’s LAN to the Internet.” Several options are available for WAN connectivity: leased line, circuit switching, packet switching, and cell relay.

Leased Line

WANs are often built using leased lines. These leased lines involve a direct point-to-point connection between two sites. Point-to-point WAN service may involve either analog dial-up lines or dedicated leased digital private lines.
Analog lines - a modem is used to connect the computer to the telephone line. Analog lines may be part of a public switched telephone network and are suitable for batch data transmissions.
Dedicated lines - digital phone lines that permit uninterrupted, secure transmission at fixed costs.
At each end of the leased line, a router connects to the LAN on one side and a hub within the WAN on the other. Leased lines can get pretty expensive in the long run.

Circuit Switching

Instead of using leased lines, WANs can be built using circuit switching. “In telecommunications, a circuit switching network is one that establishes a circuit (or channel) between nodes and terminals before the users may communicate, as if the nodes were physically connected with an electrical circuit.”
In other words, a dedicated circuit path is created between endpoints. The best example of this is a dial-up connection. Circuit switching is more difficult to set up, but it has the advantage of being less expensive.

Packet Switching

Packet switching is a method that groups transmitted data into blocks called packets. Devices transport packets via a single shared point-to-point or point-to-multipoint link across a carrier network. Sequences of packets are then delivered over the shared network.
Like circuit switching, packet switching is relatively inexpensive, but because packets are buffered and queued, packet switching is characterized by a fee per unit of information, whereas circuit switching is characterized by a fee per unit of connection time (even when no data is transferred).
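The mechanism can be illustrated with a short sketch (the payload size and the sequence-number tuple format are arbitrary choices for illustration, not part of any real protocol): each packet carries a sequence number so the receiver can reassemble the message even if the shared network delivers packets out of order.

```python
import random

def packetize(data: bytes, payload_size: int = 4):
    """Split a message into numbered packets of at most payload_size bytes."""
    return [(seq, data[i:i + payload_size])
            for seq, i in enumerate(range(0, len(data), payload_size))]

def reassemble(packets):
    """Sort by sequence number and concatenate the payloads."""
    return b"".join(payload for _, payload in sorted(packets))

packets = packetize(b"wide area network")
random.shuffle(packets)            # simulate out-of-order arrival
print(reassemble(packets))         # b'wide area network'
```

Cell relay, discussed next, works the same way except that every cell has a fixed length, which simplifies switching hardware.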

Cell Relay

Cell relay is similar to packet switching but it uses fixed length cells instead of variable length packets. Data is divided into these cells and then transported across virtual circuits.
This method is best for simultaneous voice and data but can cause considerable overhead.

WANs vs LANs

Depending on the service, WANs can be used for almost any data sharing purpose for which LANs can be used. The most basic uses of WANs are for email and file transfer, but WANs can also permit users to access data remotely.
New types of network-based software used for productivity, like work-flow automation software, can also be used over WANs. This allows workers to collaborate on projects easily, regardless of their location.
Unlike LANs, WANs typically do not link individual computers. WANs link LANs together. They provide communications links over great distances.

The Existence Of WANs

WANs have existed for decades, but new technologies, services, and applications have developed over the years to dramatically increase their effect on business. WANs were originally developed for digital leased-line services carrying only voice (not data).
At first, they connected the private branch exchanges (PBXs) of remote offices of the same company. WANs are still used for voice services, but today they are used more frequently for data and image transmission (like videoconferencing). These added applications have spurred significant growth in WAN usage, primarily because of the surge in LAN connections to the wider networks.
A wide area network allows companies to make use of common resources in order to operate. Internal functions such as sales, production and development, marketing, and accounting can also be shared with authorized locations through this sort of network.
In the event of a problem – say a company facility is damaged from a natural disaster – employees can move to another location and access the network. Productivity is not lost.

Conclusion

The wide area network has made it possible for companies to communicate internally in ways never before possible. Because of WANs, we (the consumers) can enjoy benefits from companies that we wouldn’t have been able to in the past.
What do you think of WANs? What’s next for connectivity? Leave your thoughts, ideas, and comments below.

LAN - Local Area Network

Definition: A local area network (LAN) supplies networking capability to a group of computers in close proximity to each other, such as in an office building, a school, or a home. A LAN is useful for sharing resources like files, printers, games, or other applications. A LAN in turn often connects to other LANs, and to the Internet or other WANs. Most local area networks are built with relatively inexpensive hardware such as Ethernet cables, network adapters, and hubs. Wireless LANs and other more advanced LAN hardware options also exist.
Specialized operating system software may be used to configure a local area network. For example, most flavors of Microsoft Windows provide a software package called Internet Connection Sharing (ICS) that supports controlled access to LAN resources.
The term LAN party refers to a multiplayer gaming event where participants bring their own computers and build a temporary LAN.
Examples:
The most common type of local area network is an Ethernet LAN. The smallest home LAN can have exactly two computers; a large LAN can accommodate many thousands of computers. Many LANs are divided into logical groups called subnets. An Internet Protocol (IP) "Class A" LAN can in theory accommodate more than 16 million devices organized into subnets.
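The "Class A" arithmetic above can be checked with Python's standard ipaddress module. This is only a sketch of the classful math; the 10.0.0.0/8 block is used purely as an example.

```python
import ipaddress

# A classful "Class A" network has an 8-bit network prefix, leaving
# 24 bits for host addresses.
class_a = ipaddress.ip_network("10.0.0.0/8")
total = class_a.num_addresses        # 2**24 addresses in the block
usable = total - 2                   # minus the network and broadcast addresses

# Splitting the block into /16 subnets gives 256 logical groups.
subnets = list(class_a.subnets(new_prefix=16))
print(total, usable, len(subnets))
```

So a Class A block holds 16,777,216 addresses in theory, which matches the "more than 16 million devices" figure.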

Thursday, April 19, 2012

What is HD

What is HD technology?

HD is the latest development in home entertainment. But what is it?


The home entertainment market advances at a rapid rate. Many new technologies are designed and introduced each year, but only a few of them receive mainstream acceptance. The most recent of these is HD technology. HD stands for High Definition, and this technology is being used to create High Definition TVs. HDTV offers a sharpness and level of detail never before experienced in home entertainment.

HDTV is the latest buzzword in the technology sphere. Everyone is rushing to buy the latest and best HDTV screen available today. Many top brands offer these screens at affordable prices with the latest features and best technology. There are many variations of HDTV, and many of the key electronics manufacturers are making them.
High Definition TVs give crystal-clear pictures integrated with Dolby Digital sound to match the quality that you experience in the cinema. Unlike ordinary analog TVs, HDTVs digitize the TV programming to give you theatre-quality pictures and audio. The widescreen high-definition picture, combined with Dolby Digital sound, makes your television viewing lifelike; you are truly gripped by what you see. Companies like Toshiba have also brought out HD DVD players, along with Blu-ray players, that help the picture come alive and let you see your High Definition TV in its full glory. Sony is a leading brand for HDTV, yet you can find many cheaper brands that are nearly as good as the expensive ones.
With “progressive” scan technology, an HDTV can produce a flicker-free image. This makes text easier to read and eases your viewing experience, since fast-moving images appear smoother. In short, the HDTV refreshes all of its roughly one million pixels simultaneously to give you breathtaking picture quality. Can we now say that you can really live life in High Definition?
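The "one million pixels" figure can be checked with a line of arithmetic; the resolutions below are the standard 720p and 1080-line HD formats.

```python
# Pixel counts for the standard HD formats.
pixels_720p = 1280 * 720      # 921,600 pixels -- roughly one million
pixels_1080 = 1920 * 1080     # 2,073,600 pixels -- about two million
print(pixels_720p, pixels_1080)
```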

What is Aeronautics?





Definition

Aeronautics is the study of the science of flight: the practice of designing an airplane or other flying machine. There are four basic areas that aeronautical engineers must understand in order to be able to design planes.

Design Process

1. Aerodynamics is the study of how air flows around the airplane. By studying the way air flows around the plane, engineers can define its shape. The wings, the tail, and the main body, or fuselage, of the plane all affect the way the air will move around it.
2. Propulsion is the study of how to design an engine that will provide the thrust needed for a plane to take off and fly through the air. The engine provides the power for the airplane. The study of propulsion is what helps engineers determine the right kind of engine and the right amount of power that a plane will need.
3. Materials and Structures is the study of what materials are to be used on the plane and in the engine, and how those materials make the plane strong enough to fly effectively. The choice of materials used to make the fuselage, wings, tail, and engine will affect the strength and stability of the plane. Many airplane parts are now made out of composites, materials that are stronger than most metals yet lightweight.
4. Stability and Control is the study of how to control the speed, direction, altitude, and other conditions that affect how a plane flies. The engineers design the controls that are needed in order to fly, and instruments are provided for the pilot in the cockpit of the plane. The pilot uses these instruments to control the stability of the plane during flight.
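As a small illustration of the aerodynamics area above, here is a sketch using the standard lift equation, L = 1/2 * rho * v^2 * S * C_L. Every numeric value below is hypothetical, chosen only to show the calculation.

```python
# Standard lift equation: L = 0.5 * rho * v**2 * S * C_L.
# All inputs below are hypothetical illustration values.
rho = 1.225    # air density at sea level, kg/m^3
v = 70.0       # airspeed, m/s
S = 16.2       # wing area, m^2
C_L = 0.4      # lift coefficient (depends on wing shape and angle of attack)

lift_newtons = 0.5 * rho * v**2 * S * C_L
print(round(lift_newtons, 1))  # about 19448.1 N of lift
```

Changing the wing area or the lift coefficient in this sketch shows directly how the shape decisions in step 1 feed into the rest of the design.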

Engineering and Science Careers at NASA

What are the different kinds of careers in aerospace?

NASA Engineering Teams consist of many individuals - engineers, technicians, scientists, and various support personnel.
Engineering and Science Careers offer:

  • Challenging jobs
  • Good pay and benefits
  • Lasting and tangible products
  • Help to humankind
  • Prestige and status
  • Continued educational experiences

Scientists

Scientists are knowledge seekers. They are inquisitive, seeking answers to known questions and uncovering many more questions along the way.
  • Astronomy
  • Biology
  • Chemistry
  • Computer
  • Economics
  • Geology
  • Materials
  • Mathematics
  • Medical Doctor
  • Meteorology
  • Nutrition
  • Oceanography
  • Psychology
  • Physics
  • Physiology
  • Sociology
  • Statistics
  • Systems Analysis

Engineers

Engineers are problem solvers. They are the people who make things work and make life interesting, comfortable, and fun.
  • Aerospace
  • Architectural
  • Astronautics
  • Biomedical
  • Chemical
  • Civil
  • Computer
  • Electrical
  • Environmental
  • Industrial
  • Metallurgical
  • Mechanical
  • Nuclear
  • Petroleum
  • Safety
  • Systems

Technicians

Technicians are skilled personnel. Their skills are necessary for the research and development activities of Engineers and Scientists.
  • Aerospace
  • Aircraft
  • Avionics
  • Communications
  • Electrical
  • Electronic
  • Engineering
  • Fabrication
  • Materials
  • Mechanics
  • Modeling
  • Pattern Making

Preparing for an Aerospace Career

Engineers, scientists, and technicians rely on years of accumulated creative and academic skills to be part of a NASA Engineering Team. The journey to become a team member started when you were born and has continued throughout your life. Most engineering, scientific, and technical jobs require not only a High School Diploma or equivalent, but an Associate, Bachelor, or Graduate Degree.
While you're in High School you should take:
  • Algebra
  • Biology
  • Calculus
  • Chemistry
  • Computer Applications / Programming
  • English
  • Fine Arts / Humanities
  • Foreign Language
  • Geometry
  • Physics
  • Social Studies
  • Trigonometry
For Engineering and Science, Advanced Placement or Honors level courses are recommended.
Technicians need to meet the same general High School requirements, but Advanced Placement or Honors courses are not necessary. Drafting, mechanics, electronics, or similar technical courses are also recommended.
Colleges and Universities seek "well-rounded" students. Extracurricular activities and part-time or summer jobs are also important.
Education Beyond High School
To begin a career as an Engineer or Scientist you need to obtain a Bachelor's Degree from an accredited College or University. Courses are usually completed in four to five years for full time students. Universities also offer graduate programs where students can obtain Master's and Doctoral Degrees in Science and Engineering. A Master's program generally takes two years. An additional two to four years is needed to earn a Doctorate.
Technicians typically earn a two year Associate of Science degree. Some may continue for two more years to obtain a Bachelor's degree. A few complete a five year apprenticeship program offered at some NASA field centers.
Preparing to become a NASA Engineering Team member is difficult. It requires a considerable amount of time, energy, and dedication... but the rewards are worth it.

Spaceship

Assignment: Design a Spaceship


Requirements - It must:
  • achieve mission with payload and/or passengers.
  • be easily and economically produced and maintained.
  • be reusable and have as few stages as possible to reduce cost and recover expensive materials.
  • pass all engineering and flight tests.
  • BE COST EFFECTIVE.
Engineers at NASA's Langley Research Center must consider many questions as they design the next generation of space vehicles. Their approach is not CAN they do it, but HOW can they do it BETTER than before and more cost effectively.
One of Langley's jobs is to create new and innovative technologies to meet the challenges of space flight and lower the cost of future space missions. With technological advances in many areas and expanded needs and capabilities of space missions, NASA researchers face unlimited possibilities. As they work through a series of steps from concept inception to full-scale design, they may hit stumbling blocks and be forced to retrace their steps and sometimes even start over. At every turn, however, they are pioneering their way through science and engineering, turning theories into reality. Their designs must pass final qualification tests and be proven cost efficient. Only then will they be considered for service.

What is a Spaceship

A spaceship is designed to travel in space and may be launched from Earth by a launch vehicle. It may carry a payload to accomplish a mission with or without people and return to Earth.
HL-20 transport vehicle: this personnel transporter has made it to the mock-up stage and awaits further approval before being built.

FIVE STEPS TO BLASTOFF

STEP ONE: Mission Purpose

What is the purpose of the mission? That question begins the avalanche of other questions which lead toward design requirements. What is the payload, how big is it, how much acceleration and entry heating must it take? Once these, and many more requirements are decided, a study is done to determine whether the mission performance requirement can be met.
HL-20 schematics
Step 1. The HL-20 was designed by NASA Langley to carry astronauts back and forth to the space station and to serve as an emergency return vehicle while they are there.

STEP TWO: Design

The nature of the payload and its special needs help determine the design - shape, size and configuration - of the space vehicle. If people are going, there are obvious unique requirements, such as seating capacity, entrance and exit hatches and access to certain systems. The configuration of the spacecraft must provide for all of the support systems, such as communications, electrical systems and life support.
Cutaway view of HL-20
Step 2. Researchers considered various configurations for the HL-20. External access to subsystems, to allow for easy maintenance, and enough room for eight passengers were two top priorities.

STEP THREE: Analyses

NASA Langley engineers must determine the craft's general operation before launch and upon its return. They must analyze the aerodynamic, or air flow, characteristics of the configuration, as well as monitor structural stress, effects of high speed, heat tolerances and the performance trajectory, or course it flies to space and back.
Engineers must consider appropriate new materials for the spaceship that could minimize cost and weight. Every pound of extra structure may take up to 10 pounds more in total launch weight to get it into space - and back. And every pound of structure raises the cost of the mission.
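The weight rule of thumb in the paragraph above works out as simple multiplication. The growth factor is the text's "up to 10 pounds" figure; the amount of added structure is a hypothetical example.

```python
# Rule of thumb from the text: each pound of extra structure can add
# up to 10 pounds of total launch weight.
# The 250 lb figure is a hypothetical example.
growth_factor = 10           # lb of launch weight per lb of added structure
added_structure_lb = 250
extra_launch_weight_lb = growth_factor * added_structure_lb
print(extra_launch_weight_lb)  # 2500 lb more at launch
```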
HL-20 design & analysis
Step 3. The HL-20 design was analyzed for aerodynamics in wind tunnels and by computer, to understand how the air would flow around it and would affect its flight into space and back.

STEP FOUR: Testing

Once the spaceship has been designed, it must be certified for flight through a series of performance, vibration and thermal tests. It is now time to test the actual structure with models of the design.
It is not necessary to build an entire spaceship for initial testing. Instead, engineers build and test the individual components. A wing, for example, may be subjected to tests that are not appropriate for any other part of the vehicle.
After initial testing, any parts of the spaceship structure or internal systems which do not meet performance requirements are then redesigned and retested.
Water entry testing
Step 4. Water entry tests using a small-scale model of the actual design.

STEP FIVE: Fabrication

Once a final design passes initial tests, a full-scale model, or mock-up, is fabricated in fiber glass or other inexpensive materials. Afterward, an actual prototype, called the flight model, may be built and then tested to assure the quality of design. If it passes many hours of tests including a series of experimental flight tests, it is ready for production and operation.
HL-20 interior mock-up
Step 5. A mock-up of the interior design of the HL-20 enables real astronauts to determine if they can move and function as planned.

Next Generation Has Arrived

Current space missions require a launch vehicle with rocket stages to get a spaceship such as the HL-20 into space. As we approach the new millennium, NASA Langley is using its experience to help industry develop and introduce the next generation of space vehicles. One of its top priorities is a fully reusable spaceship, a launch vehicle, which would fly to space and back as a single unit or single stage. Depending on the mission, the reusable launch vehicle could support sophisticated, high-precision, deployable instruments for specific scientific research. A prototype of this vehicle, the X-33, is slated to fly in 1999.
NASA Langley engineers also have an active role in the design of the International Space Station, the components of which are currently being built.

Summary

NASA Langley's current development of next generation launch vehicles follows a systematic course from inception to prototypes to flight vehicles. With the goal of reusing vehicle components and eliminating multi-stage rockets, NASA Langley researchers have brought us into the 21st Century and will continue to meet the ever-changing and expanding requirements of space missions.