Behavioural responses to stimuli are considered adaptive because the responses are adaptations to the environment and can change depending on the stimulus or environment.
This can be seen in cocaine addicts, whose dopamine receptors are intensely stimulated. With prolonged use of the drug, the number of receptors decreases and the user feels less pleasure, which shows that the response, in this case pleasure, changes depending on the stimulus. External stimuli that might signal Alaskan caribou to migrate could include snowfall, shorter days, and longer nights. An internal stimulus could be an internal clock that indicates when it is time to migrate.

Organisms perpetually face noxious insults but exhibit surprisingly diverse reaction patterns. Depending on the strength, frequency, and quality of the stress stimuli, biological systems may react with increased vitality and future stress resistance, or with injury and degeneration.
Whereas a multitude of such specific stress responses has been observed in diverse biological systems, the underlying molecular mechanisms are largely unknown. These gaps in knowledge urge the exploration of the specific molecular signaling reactions that control the ambivalent responses of cells and organisms to noxious effects. The adaptive responses of signaling networks to defined stress stimuli need to be investigated in a time- and dose-resolved manner in cellular and organismic models.
Stress stimuli typically include environmental factors such as toxins, irradiation, dietary restriction, or infectious agents. In addition, secondary responses such as the release of inflammatory mediators induced by infections or trauma, or the increased production of reactive oxygen species after physical exercise, belong to this category.
Stress-dependent adaptive responses of biological systems characteristically display pronounced dose dependency. Thus, in many cases low doses of a potentially harmful environmental factor can even cause beneficial effects, increasing the vitality of the affected organism.
Hormetic reactions were first described by Paracelsus and have since been observed in multiple biological systems (Calabrese). It is well established that, when treated with a potentially noxious stimulus below a specific threshold, organisms are capable of developing resistance, or at least marked robustness, towards the same or mechanistically related stressors.
Such preconditioning effects of stressors are of extraordinary medical interest. Whereas a multiplicity of hormetic (dose-dependent) and conditioning (time-related) responses of biological systems to stressors has been described, the molecular understanding of these phenomena remains rudimentary.
The present overview seeks to summarize and exemplify the current knowledge on adaptive stress responses, and aims to develop ideas on how to shed light on the underlying molecular signaling mechanisms. To explore the molecular dimension of hormetic responses, the signaling processes controlling the ambivalent reactions of cells and organisms to noxious factors (stressors) have to be analyzed.
As illustrated in the figure, the subsequent signaling reactions may, as a heuristic, be structured in three layers. The final effect of the stressor on the vitality of the cell is probably predisposed by the signal transducers and ultimately determined in the decision layer.
A major challenge for researchers in the field is the topology of the complex signaling processes in the transducer region and in the decision layer, which is of special importance for the molecular understanding of hormetic or adaptive responses of biological systems.
The complex reaction patterns of signaling mediators and networks in response to defined stressors have to be investigated in a dose- and time-related manner in selected animal and plant models.
Three goals seem of major relevance for the analysis of stress-related signaling processes. First, identification of signaling reactions involved in adaptive stress responses: selected animal and plant models should be used to identify and characterize the signaling mediators and pathways conveying specific stress responses of cells and organisms. The current knowledge on candidate stress mediators and pathways needs to be broadened and deepened. Second, dissection of the dose-dependent and time-related effects of stressors: detailed investigations of the hormetic (dose-dependent) and conditioning (time-related) effects of stressors on selected signaling pathways represent the main challenge for researchers studying the signaling processes that control adaptive stress reactions.
Typically, the bell-shaped response of cells and organisms to increasing doses of stressors has to be analyzed with respect to its underlying signaling patterns.
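The bell-shaped (biphasic) dose-response pattern described here can be illustrated with a toy model. The functional form and all parameter values below are hypothetical, chosen only to produce a hormetic curve; they are not fitted to any experimental data.

```python
import numpy as np

def hormetic_response(dose, benefit=1.0, k_b=0.5, harm=2.0, k_h=5.0):
    """Toy biphasic dose-response: a beneficial term that saturates at low
    doses minus a damaging term that dominates at high doses.
    All parameters are illustrative, not derived from experiments."""
    dose = np.asarray(dose, dtype=float)
    stimulation = benefit * dose / (k_b + dose)  # rises and saturates quickly
    damage = harm * dose / (k_h + dose)          # rises slowly, saturates higher
    return stimulation - damage                  # net change in vitality

doses = np.linspace(0.0, 50.0, 501)
vitality = hormetic_response(doses)
peak_dose = doses[np.argmax(vitality)]           # low-dose beneficial peak
```

Because the beneficial term rises faster but the damaging term saturates higher, net vitality is positive at low doses and negative at high doses, reproducing the qualitative hormetic pattern.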
The adaptive response pattern of cells to the stressor ROS is exemplified in the figure: at low doses, ROS-induced signaling reactions prevail, which can increase the vitality of cells; at high doses, the damaging effects of the mediator dominate.

We refer to this as a list-of-parts object representation. The Bayesian nonparametric additive clustering technique inferred the same list-of-parts object representation as its MAP estimate when applied to every condition-level similarity matrix.
Importantly, the result did not have to come out this way: had the additive clustering technique inferred different object representations when applied to different condition-level similarity matrices, this outcome would have been inconsistent with the hypothesis of modality invariance.
The fact that the additive clustering technique always inferred part-based representations is also noteworthy. In hindsight, however, it might be unsurprising for subjects to have used part-based representations. Recall that our stimuli were generated by combining distinct parts. It seems likely that subjects would be sensitive to the structure of this generative process. Moreover, previous theoretical and empirical studies have indicated that people often use part-based object representations [21–25].
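Additive clustering models similarity as a weighted sum of shared discrete features, which is what makes a list-of-parts representation recoverable from a similarity matrix. A minimal sketch, with a made-up feature matrix and weights rather than the paper's inferred values:

```python
import numpy as np

# Additive clustering models similarity as a weighted sum of shared
# discrete features: s_ij = sum_k w_k * f_ik * f_jk.
# F is a hypothetical list-of-parts feature matrix: 4 objects built from
# 3 candidate parts (1 = object contains the part).
F = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]])
w = np.array([0.5, 0.3, 0.2])          # illustrative feature weights

def predicted_similarity(F, w):
    """Similarity matrix implied by a feature matrix and feature weights."""
    return (F * w) @ F.T               # s_ij = sum_k w_k * f_ik * f_jk

S = predicted_similarity(F, w)
```

Inference in additive clustering runs this mapping in reverse: given an observed similarity matrix S, it searches for the feature matrix F (the parts) and weights w that best reproduce it.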
Given a condition-level similarity matrix, MDS assigns locations in an abstract space to objects such that similar objects are nearby and dissimilar objects are far away [64–66]. When using MDS, there are potential pitfalls in averaging the similarity judgments of different subjects. Lee and Pope [68] developed a BIC score that ameliorates these potential pitfalls; this score takes into account both the fit and the complexity of an MDS model.
The results based on stress values and BIC scores are shown in Fig 5a and 5b, respectively. In both cases, values reach a minimum, or nearly so, at four dimensions in all experimental conditions.
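The dimension-selection logic can be sketched with classical (Torgerson) MDS on a toy distance matrix. The 16-object structure below is a hypothetical stand-in for the real similarity data: four binary part dimensions, so the stress minimum at four dimensions is built in by construction.

```python
import numpy as np

# Toy data: 16 objects defined by 4 binary dimensions (a list-of-parts
# stand-in), with Euclidean distances between their feature vectors.
objects = np.array([[(i >> b) & 1 for b in range(4)] for i in range(16)], float)
D = np.linalg.norm(objects[:, None, :] - objects[None, :, :], axis=2)

def classical_mds(D, dim):
    """Embed a distance matrix in `dim` dimensions via double centering."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of the config
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]         # keep largest eigenvalues
    L = np.sqrt(np.clip(vals[order], 0, None))
    return vecs[:, order] * L

def stress(D, X):
    """Normalized mismatch between given and embedded distances."""
    Dhat = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return np.sqrt(((D - Dhat) ** 2).sum() / (D ** 2).sum())

stresses = {k: stress(D, classical_mds(D, k)) for k in range(1, 6)}
```

With this construction, stress drops as dimensions are added and reaches (numerically) zero at four dimensions, mirroring the elbow criterion used to pick the dimensionality.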
In Fig 6, we plot the MDS space with four dimensions for the crossmodal condition. The results for other conditions are omitted since they are all qualitatively quite similar.
In each panel of Fig 6, we plot two of the four dimensions against each other. What is striking is the clear clustering in all panels: we see four clusters of four objects, where each dimension takes one of two possible values. This is precisely the list-of-parts representation found by the Bayesian nonparametric additive clustering technique.
In summary, our correlational analyses of the experimental data reveal that subjects made similar similarity judgments in visual, haptic, crossmodal, and multisensory conditions. Our analyses using a Bayesian nonparametric additive clustering technique and using multidimensional scaling indicate that subjects formed the same set of modality-independent object representations in all conditions.
Here we evaluate whether the MVH model provides a good account of our experimental data. To conduct this evaluation, however, the model must be supplemented with an object similarity metric. Such a metric could potentially take several different forms. For example, object similarity could be computed based on modality-independent features.
Alternatively, it could be based on modality-specific features such as visual or haptic features. Researchers studying how people represent space have made a surprising discovery: spatial locations can be represented in many different reference frames, such as eye-centered, head-centered, body-centered, or limb-position-centered coordinate systems.
Counterintuitively, people often transform representations of spatial locations into a common reference frame, namely an eye-centered reference frame, when planning and executing motor movements [69–72]. These studies raise an interesting issue: in what reference frame do people judge object similarity? Do they judge object similarity in a modality-independent feature space? Or do they judge it in a sensory-specific feature space such as a visual or haptic space?
Here we address these questions by augmenting the MVH model with different object similarity functions. One possibility is that people judge object similarity by re-representing objects in terms of visual features; the mapping from modality-independent to visual features could be achieved by a vision-specific forward model. A second alternative is that people re-represent objects in terms of haptic features via a haptic-specific forward model to judge object similarity.
Because the MVH model includes modality-independent representations along with vision-specific and haptic-specific forward models, it can be used to evaluate these different possibilities. In one set of simulations, the model was used to compute object similarity in a modality-independent feature space. On each simulated trial, the model computed modality-independent representations for two objects.
In brief, this similarity measure has a library of three tree-based operators: rename node, remove node, and insert node. Given two modality-independent object representations (that is, two spatial trees or MAP estimates of the shapes of two objects), the measure counts the number of operators in the shortest sequence that converts one representation into the other. For similar representations, the representation for object A can be converted to the representation for object B using a short operator sequence, and thus these representations have a small distance.
For dissimilar representations, a longer operator sequence is required to convert one object representation into the other, and thus these representations have a large distance. In a second set of simulations, the model was used to compute object similarity in a visual feature space.
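Before turning to those simulations, the tree-based distance just described can be approximated in a few lines. The sketch below is a simplified ordered-tree edit distance (a label mismatch costs 1; inserting or removing a node costs 1, so a whole subtree costs its size). It is a hedged toy version, not the exact measure used in the paper.

```python
from functools import lru_cache

# Trees are (label, (child, child, ...)) tuples, so they are hashable
# and can be memoized directly.

def size(t):
    """Number of nodes in a subtree (= cost of inserting/removing it)."""
    return 1 + sum(size(c) for c in t[1])

@lru_cache(maxsize=None)
def tree_dist(a, b):
    """Edit distance between two trees: rename cost plus child alignment."""
    rename = 0 if a[0] == b[0] else 1
    return rename + forest_dist(a[1], b[1])

@lru_cache(maxsize=None)
def forest_dist(xs, ys):
    """Sequence alignment over two tuples of child subtrees."""
    if not xs:
        return sum(size(t) for t in ys)        # insert all remaining subtrees
    if not ys:
        return sum(size(t) for t in xs)        # remove all remaining subtrees
    return min(
        tree_dist(xs[0], ys[0]) + forest_dist(xs[1:], ys[1:]),  # match heads
        size(xs[0]) + forest_dist(xs[1:], ys),                  # remove head
        size(ys[0]) + forest_dist(xs, ys[1:]),                  # insert head
    )

# Two toy spatial trees: identical except B has one extra part node.
A = ("S", (("P1", ()), ("P2", ())))
B = ("S", (("P1", ()), ("P2", ()), ("P3", ())))
```

Under this scheme `tree_dist(A, B)` is 1 (one inserted part node), matching the intuition that similar part structures are a short edit sequence apart.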
As above, the model was used to acquire modality-independent representations for two objects on each simulated trial. Next, the vision-specific forward model was used to map each object representation to images of the represented object, thereby re-representing each object from a modality-independent reference frame to a visual reference frame.
Given three images of each object from orthogonal viewpoints (see Fig 4a), the similarity of two objects was estimated as the Euclidean distance between the images of the objects based on their pixel values.
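The pixel-based similarity computation can be sketched as follows. The random arrays stand in for actual rendered images, and `render_views` is a hypothetical placeholder for the vision-specific forward model, not a real renderer.

```python
import numpy as np

def render_views(seed, shape=(3, 32, 32)):
    """Hypothetical stand-in for a forward model producing three
    orthogonal views of an object; real images would come from a renderer."""
    return np.random.default_rng(seed).random(shape)

def visual_distance(views_a, views_b):
    """Euclidean distance between two stacks of views, on raw pixels."""
    return float(np.linalg.norm(views_a.ravel() - views_b.ravel()))

obj_a, obj_b = render_views(10), render_views(20)
d_ab = visual_distance(obj_a, obj_b)   # two different "objects"
d_aa = visual_distance(obj_a, obj_a)   # an object compared with itself
```

The haptic variant described next is structurally identical: replace the stacks of view images with vectors of joint angles produced by the haptic forward model.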
In a final set of simulations, the model was used to compute object similarity in a haptic feature space. This set is identical to the set described in the previous paragraph, except that the haptic-specific forward model (GraspIt!) was used to map each object representation to haptic features (sets of joint angles).
Given sets of joint angles for each object, the similarity of two objects was estimated as the Euclidean distance between the haptic features of the objects based on their associated joint angles. Which set of simulations produced object similarity ratings matching the ratings provided by our experimental subjects? The results for these three models are shown in Figs 7, 8 and 9. In each figure, the four graphs correspond to the visual, haptic, crossmodal, and multisensory conditions.
Each graph contains one point for each possible pair of objects. The correlation (denoted R) between subject and model ratings is reported in the top-left corner of each graph. Indeed, the correlation R ranges from 0. In other words, the MVH-M model provides a nearly perfect account of our experimental data. In summary, we have compared the performances of three models. All models represent objects in a modality-independent manner.
However, the models differ in the space in which they calculate object similarity. One model calculates similarity using modality-independent features (MVH-M), another maps modality-independent features to visual features and calculates similarity on the basis of these visual features (MVH-V), and a final model maps modality-independent features to haptic features and calculates similarity on the basis of these haptic features (MVH-H).
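The subject-model comparison above reduces to a Pearson correlation over pairwise similarity ratings. A sketch with synthetic data; the `subjects` vector here is fabricated as noisy model output purely for illustration, not real behavioral data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs = 120                                  # all pairs of 16 objects: C(16, 2)
model = rng.random(n_pairs)                    # one model rating per object pair
subjects = 0.9 * model + 0.1 * rng.random(n_pairs)  # synthetic subject ratings

# Pearson correlation R between subject and model ratings, as plotted
# in the per-condition scatter graphs.
R = float(np.corrcoef(model, subjects)[0, 1])
```

A high R indicates that the model's similarity space orders object pairs the same way the subjects do, which is the criterion used to compare MVH-M, MVH-V, and MVH-H.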
Consequently, we hypothesize that subjects computed object similarity in a modality-independent feature space. Results for the MVH-M model (this model computes object similarity in a modality-independent feature space). The four graphs correspond to the visual (top left), haptic (top right), crossmodal (bottom left), and multisensory (bottom right) experimental conditions. We hypothesized that any system that can accomplish this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models (i.e., mapping from sensory signals to modality-independent representations).
Results for the MVH-V model (this model computes object similarity in a visual feature space). The format of this figure is identical to that of Fig 7. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both.
It is worth pointing out that the particular implementational choices we have made in our model are in some sense arbitrary; any model that instantiates our framework will be able to capture modality invariance. From this perspective, therefore, our particular model should be taken as one concrete example of how modality-independent representations can be acquired and used.
Our work in this paper focused on showing how our framework can capture one aspect of multisensory perception, namely modality invariance. We take this as an encouraging first step in applying our framework to multisensory perception more generally. We believe other aspects of multisensory perception, such as cue combination, crossmodal transfer of knowledge, and crossmodal recognition, can be readily understood and treated within our framework.
Results for the MVH-H model (this model computes object similarity in a haptic feature space). Our experimental results suggest that people extract modality-independent shape representations from sensory input and base their judgments of similarity on such representations.
The success of our model in accounting for these results is important from two perspectives. First, from a larger perspective, it is significant as a validation of our theoretical framework.
Second, it constitutes an important contribution to cognitive modeling, particularly to an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception.

Related research

Our theoretical framework is closely related to the long-standing vision-as-inference [74] approach to visual perception.
In this approach, the computational problem of visual perception is formalized as the inversion of a generative process; this generative process specifies how underlying causes in the world give rise to sensory data. The purpose of the visual system, then, is to invert this generative model to infer the most likely causes, i.e., the scene that generated the observed sensory data. This approach, also called analysis-by-synthesis, has featured prominently both in cognitive science [75, 76] and computer vision [77–79].
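Analysis-by-synthesis can be illustrated with a minimal discrete example: a prior over causes and a likelihood (the generative model) are inverted via Bayes' rule to recover the most probable cause of an observation. The causes, observations, and probabilities below are invented toy values.

```python
# Toy generative model: two candidate causes (shapes) and a discrete
# observation (a coarse edge-count feature of an image).
prior = {"cube": 0.5, "sphere": 0.5}
likelihood = {                         # p(observation | cause)
    ("cube", "many_edges"): 0.9, ("cube", "few_edges"): 0.1,
    ("sphere", "many_edges"): 0.2, ("sphere", "few_edges"): 0.8,
}

def posterior(observation):
    """Invert the generative model: p(cause | observation) via Bayes' rule."""
    unnorm = {c: prior[c] * likelihood[(c, observation)] for c in prior}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

post = posterior("many_edges")
best = max(post, key=post.get)         # MAP estimate of the cause
```

The same inversion logic, scaled up from a two-hypothesis table to spatial trees and rendered images, is what the MVH model's inference algorithm performs.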
Our work here can be seen as the application of this approach to multisensory perception. Previous research has instantiated our general theoretical framework in other ways. For example, Yildirim and Jacobs [80] developed a latent variable model of multisensory perception. In this model, modality-independent representations are distributed representations over binary latent variables. Sensory-specific forward models map the modality-independent representations to sensory (e.g., visual and haptic) features.
The acquisition of modality-independent representations takes place when a Bayesian inference algorithm (based on the Indian Buffet Process [63]) uses the sensory features to infer these representations.
Advantages of this model include the fact that the dimensionality of the modality-independent representations adapts to the complexity of the training data set, that the model learns its sensory-specific forward models, and that the model shows modality invariance.
Disadvantages include the fact that the inferred modality-independent representations (distributed representations over latent variables) are difficult to interpret, and the fact that the sensory-specific forward models are restricted to being linear.
Perhaps its biggest disadvantage is that it requires a well-chosen set of sensory features in order to perform well on large-scale problems. In the absence of good sensory features, it scales poorly, mostly due to its linear sensory-specific forward models and complex inference algorithm. As a second example, Yildirim and Jacobs [2] described a model of visual-haptic object shape perception that is a direct precursor to the MVH model described in this paper.
This strategy for representing object shape provides enormous flexibility, but the flexibility comes at a price: inference using this model is severely underconstrained.
Consequently, the investigators designed a customized (i.e., not principled) inference algorithm. Despite the use of this algorithm, inference is computationally expensive. That is, like the latent variable model described in the previous paragraph, the model of Yildirim and Jacobs [2] scales poorly.
Probabilistic language-of-thought

We believe that the MVH model described in this paper has significant theoretical and practical advantages over alternatives. These arise primarily from its use of a highly structured implementation of a representational language for characterizing modality-independent representations.
In particular, the model combines symbolic and statistical approaches to specify a probabilistic context-free object shape grammar. Due to this shape grammar, the model is able to use a principled inference algorithm that has previously been applied to probabilistic grammars in other domains. We find that inference in the model is often computationally tractable.
We are reasonably optimistic that the model (or, rather, appropriately extended versions of it) will scale well to larger problems.
Although important challenges obviously remain, our optimism stems from the fact that shape grammars much more complex than the one reported here are regularly used in the Computer Vision and Computer Graphics literatures to address large-scale problems. In addition, due to its principled approach, the model should be easy to extend in the future because the relationships between it and other models in the Cognitive Science and Artificial Intelligence literatures that use grammars, such as models of language, are transparent.
As a consequence, lessons learned from other models will be easy to borrow for the purpose of developing improved versions of the model described here.
For example, one school of thought favors symbolic approaches, such as approaches based on grammars, production rules, or logic.
An advantage of symbolic approaches is their rich representational expressiveness: they can often characterize a wide variety of entities in a compact and efficient manner. An alternative school of thought favors statistical approaches, such as approaches based on neural networks or Bayesian inference.
An advantage of statistical approaches is their ability to learn and adapt, as well as their robustness to noise and uncertainty. Their main disadvantage is that they often require highly structured prior distributions or likelihood functions to work well [81]. Advocates of the symbolic and statistical schools of thought have often engaged in heated debates [82–85]. Unfortunately, these debates have not led to a resolution as to which approach is best.
To date, the probabilistic language-of-thought approach has been used almost exclusively in domains that are typically modeled using symbolic methods, such as human language and high-level cognition. A significant contribution of the research presented here is that it develops and applies this approach in the domain of perception, an area whose study is dominated by statistical techniques.
Future research

We foresee at least three areas of future research. First, the framework described here sheds light on modality invariance. Future work will need to study whether this framework also sheds light on other aspects of multisensory perception and cognition. For example, can the framework be used to understand why our percepts based on two modalities are often more accurate than our percepts based on a single modality, why training with two modalities is often superior to training with a single modality (even when testing is conducted in unisensory conditions), or why crossmodal transfer of knowledge is often, but not always, successful?
Future work will also need to study the applicability of the framework to other sensory domains, such as visual and auditory or auditory and haptic environments, and to consider how our framework can be extended to study the acquisition of other types of conceptual knowledge from sensory signals. Second, future research will need to study the role of forward models in perception and cognition. For example, we have speculated that sensory-specific forward models may be ways of implementing sensory imagery, and thus our framework predicts a role for imagery in multisensory perception.
Behavioral, neurophysiological, and computational studies are needed to better understand and evaluate this hypothesis. New and improved forward models are frequently being reported in the scientific literature and made available on the world wide web. These forward models will allow cognitive scientists to study human perception, cognition, and action in much more realistic ways than has previously been possible.
Finally, cognitive scientists often make a distinction between rational models and process models [87]. Rational models, or computational theories [88], are models of optimal or normative behavior, characterizing the problems that need to be solved in order to generate that behavior. Like rational models, the MVH model is based on optimality considerations. However, like process models, it uses psychologically plausible representations and operations.
For readers solely interested in process models, we claim that the MVH model is a good starting point. As pointed out by others [89, 90], the MCMC inference algorithm used by the MVH model can be replaced by approximate inference algorithms, known as particle filters or sequential Monte Carlo algorithms, that are psychologically plausible.
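A particle filter of the kind alluded to here can be sketched in a few lines. The discrete hypothesis space and Gaussian observation model below are toy stand-ins, not the MVH model's actual inference problem.

```python
import numpy as np

# Particle filter sketch: a population of hypothesis "particles" is
# reweighted by each incoming observation and then resampled, so belief
# concentrates on hypotheses consistent with the data.
rng = np.random.default_rng(3)
hypotheses = np.array([0, 1, 2, 3])           # e.g., four candidate shapes
true_hyp = 2

def likelihood(obs, hyp):
    """Toy observation model: obs is the hypothesis plus Gaussian noise."""
    return np.exp(-0.5 * (obs - hyp) ** 2)

particles = rng.choice(hypotheses, size=500)  # samples from a uniform prior
for _ in range(30):
    obs = true_hyp + 0.3 * rng.standard_normal()      # noisy observation
    w = likelihood(obs, particles)
    w /= w.sum()                                       # normalized weights
    particles = rng.choice(particles, size=particles.size, p=w)  # resample

estimate = int(np.bincount(particles, minlength=4).argmax())
```

Unlike batch MCMC, the particle population is updated one observation at a time, which is the property that makes this family of algorithms attractive as a psychologically plausible process model.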
Future work will need to study the benefits of extending our framework through the use of psychologically plausible and approximately optimal inference algorithms to create rational process models of human perception.

All subjects gave informed consent.

The grammar is an instance of a probabilistic context-free grammar. However, probabilities for each production rule are not shown in Fig 10 because our statistical inference procedure marginalizes over the space of all probability assignments (see below).
The rules contain two non-terminal symbols, S and P. Non-terminal P is always replaced by a terminal representing a specific object part.
Non-terminal S is used for representing the number of parts in an object. Production rules are supplemented with additional information characterizing the spatial relations among parts. An object is generated using a particular sequence of production rules from the grammar. To represent the spatial relations among object parts, a parse tree is extended to a spatial tree. Before describing this extension, it will be useful to consider how 3-D space can be given a multi-resolution representation.
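Generation from such a grammar can be sketched as follows. The production rules and probabilities below are hypothetical (recall that the actual inference procedure marginalizes over rule probabilities): S expands to one or more P nodes, and each P rewrites to a terminal part.

```python
import random

# Hypothetical rules in the spirit of the grammar described above:
# S -> P (stop) or P S (add another part); P -> one of three part terminals.
RULES = {
    "S": [(("P",), 0.6), (("P", "S"), 0.4)],
    "P": [(("P1",), 0.4), (("P2",), 0.3), (("P3",), 0.3)],
}
TERMINALS = {"P1", "P2", "P3"}

def sample(symbol, rng):
    """Expand `symbol` top-down, returning the flat list of terminal parts."""
    if symbol in TERMINALS:
        return [symbol]
    expansions, weights = zip(*RULES[symbol])
    rhs = rng.choices(expansions, weights=weights)[0]
    return [part for s in rhs for part in sample(s, rng)]

parts = sample("S", random.Random(0))   # e.g., a short list of part labels
```

The recursive S rule gives a geometric distribution over the number of parts, which is how a context-free grammar can encode "an object has one or more parts" without fixing the count in advance.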
The center location of this voxel is the origin of the space, denoted (0, 0, 0), in a Cartesian coordinate system with axes labeled x, y, and z.

Fig 10. Production rules of the shape grammar in Backus-Naur form.
S denotes spatial nodes, and P refers to part nodes; S is also the start symbol of the grammar. P1, P2, etc., are terminals denoting specific object parts.

Illustration of the multi-resolution representation of 3-D space. This process can be repeated.
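The repeated-subdivision idea can be sketched as an octree-style refinement: each step halves the voxel width and offsets the center by a quarter of the previous width along each axis. The starting width of 2.0 and the sign convention below are illustrative assumptions, not the paper's exact parameterization.

```python
def voxel_center(path, width=2.0):
    """Center of the voxel reached by a sequence of octant choices.

    Each octant is a tuple like (+1, -1, +1) giving the sign of the
    offset along the x, y, and z axes at that level of refinement."""
    x = y = z = 0.0
    for sx, sy, sz in path:
        width /= 2.0                    # each level halves the voxel width
        x += sx * width / 2.0           # offset by half the new width
        y += sy * width / 2.0
        z += sz * width / 2.0
    return (x, y, z)

root = voxel_center([])                               # (0.0, 0.0, 0.0)
child = voxel_center([(1, 1, 1)])                     # (0.5, 0.5, 0.5)
grandchild = voxel_center([(1, 1, 1), (-1, -1, -1)])  # (0.25, 0.25, 0.25)
```

A spatial location is thus a path of octant choices, and longer paths pick out finer-resolution voxels, which is the sense in which the representation is multi-resolution.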