Graphical Models
A Brief Introduction to Graphical Models and Bayesian Networks
By Kevin Murphy, 1998.
"Graphical models are a marriage between probability theory and
graph theory. They provide a natural tool for dealing with two problems
that occur throughout applied mathematics and engineering --
uncertainty and complexity -- and in particular they are playing an
increasingly important role in the design and analysis of machine
learning algorithms. Fundamental to the idea of a graphical model is
the notion of modularity -- a complex system is built by combining
simpler parts. Probability theory provides the glue whereby the parts
are combined, ensuring that the system as a whole is consistent, and
providing ways to interface models to data. The graph theoretic side
of graphical models provides both an intuitively appealing interface
by which humans can model highly-interacting sets of variables as well
as a data structure that lends itself naturally to the design of
efficient general-purpose algorithms.
Many of the classical multivariate probabilistic systems studied in
fields such as statistics, systems engineering, information theory,
pattern recognition and statistical mechanics are special cases of the
general graphical model formalism -- examples include mixture models,
factor analysis, hidden Markov models, Kalman filters and Ising
models. The graphical model framework provides a way to view all of
these systems as instances of a common underlying formalism. This view
has many advantages -- in particular, specialized techniques that have
been developed in one field can be transferred between research
communities and exploited more widely. Moreover, the graphical model
formalism provides a natural framework for the design of new systems."
--- Michael Jordan, 1998.
This tutorial
We will briefly discuss the following topics.
Representation, or, what exactly is a graphical model?
Inference, or, how can we use these models to efficiently answer probabilistic queries?
Learning, or, what do we do if we don't know what the model is?
Decision theory, or, what happens when it is time to convert beliefs into actions?
Applications, or, what's this all good for, anyway?
Articles in the popular press
The following articles provide less technical introductions.
An article (10/28/96) about Bayes nets.
An article about Microsoft's application of BNs.
Other sources of technical information
The Association for Uncertainty in Artificial Intelligence (AUAI).
My tutorial slides on graphical models, presented to the Mathworks, May 2003.
My list of other Bayes net tutorials.
In the AI community, it is more common
to construct the parameters by hand (e.g., by eliciting them from an expert), or to use
frequentist (maximum likelihood) learning techniques to estimate them,
than to treat them in a fully Bayesian way.
We will adopt this convention for simplicity below.
Undirected graphical models are more popular with the physics and
vision communities, and directed models are more popular with the AI
and statistics communities. (It is possible to have a model with both
directed and undirected arcs, which is called a chain graph.)
For a careful study of the relationship between directed and
undirected graphical models, see the books by Pearl88, Whittaker90,
and Lauritzen96.
Although directed models have a more complicated notion of
independence than undirected models,
they do have several advantages.
The most important is that
one can regard an arc from A to B as
indicating that A ``causes'' B.
(See the discussion of causality below.)
This can be used as a guide to construct the graph structure.
In addition, directed models can encode deterministic
relationships, and are easier to learn (fit to data).
In the rest of this tutorial, we will only discuss directed graphical
models, i.e., Bayesian networks.
In addition to the graph structure, it is necessary to specify the
parameters of the model.
For a directed model, we must specify
the Conditional Probability Distribution (CPD) at each node.
If the variables are discrete, this can be represented as a table
(CPT), which lists the probability that the child node takes on each
of its different values for each combination of values of its
parents. Consider the following example, in which all nodes are binary,
i.e., have two possible values, which we will denote by T (true) and
F (false).
We see that the event "grass is wet" (W=true) has two
possible causes: either the water sprinkler is on (S=true) or it is
raining (R=true).
The strength of this relationship is shown in the table.
For example, we see that Pr(W=true | S=true, R=false) = 0.9;
hence, Pr(W=false | S=true, R=false) = 1 - 0.9 = 0.1, since each row
must sum to one.
Since the C node has no parents, its CPT specifies the prior
probability that it is cloudy (in this case, 0.5).
(Think of C as representing the season:
in the cloudy season, the sprinkler is less likely to be on
and rain is more likely.)
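Since the CPT figure itself is not reproduced here, the sketch below writes the tables out as numpy arrays. Only Pr(C=true) = 0.5 and Pr(W=true | S=true, R=false) = 0.9 are stated in the text above; the remaining entries are the values conventionally used with this example, so treat them as assumptions.

```python
import numpy as np

# Index convention: 0 = false, 1 = true.
p_c = np.array([0.5, 0.5])                 # P(C): prior on Cloudy

p_s_given_c = np.array([[0.5, 0.5],        # P(S | C=F): sprinkler on half the time
                        [0.9, 0.1]])       # P(S | C=T): sprinkler rarely on

p_r_given_c = np.array([[0.8, 0.2],        # P(R | C=F): rain unlikely
                        [0.2, 0.8]])       # P(R | C=T): rain likely

p_w_given_sr = np.array([[[1.0, 0.0],      # P(W | S=F, R=F): grass stays dry
                          [0.1, 0.9]],     # P(W | S=F, R=T)
                         [[0.1, 0.9],      # P(W | S=T, R=F): the 0.9 quoted in the text
                          [0.01, 0.99]]])  # P(W | S=T, R=T)

# Each row of a CPT is a distribution over the child, so it must sum to one.
assert np.allclose(p_w_given_sr.sum(axis=-1), 1.0)
```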
The simplest conditional independence relationship encoded in a Bayesian
network can be stated as follows:
a node is independent of its ancestors given its parents, where the
ancestor/parent relationship is with respect to some fixed topological
ordering of the nodes.
By the chain rule of probability,
the joint probability of all the nodes in the graph above is
P(C, S, R, W) = P(C) * P(S|C) * P(R|C,S) * P(W|C,S,R)
By using conditional independence relationships, we can rewrite this as
P(C, S, R, W) = P(C) * P(S|C) * P(R|C) * P(W|S,R)
where we were allowed to simplify the third term because R is
independent of S given its parent C, and the last term because W is
independent of C given its parents S and R.
We can see that the conditional independence relationships
allow us to represent the joint more compactly.
Here the savings are minimal, but in general, if we had n binary
nodes, the full joint would require O(2^n) space to represent, but the
factored form would require O(n 2^k) space to represent, where k is
the maximum fan-in of a node. And fewer parameters make learning easier.
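To make the factored form concrete, the sketch below (repeating the assumed CPTs from the previous snippet so that it runs on its own) assembles the joint P(C,S,R,W) = P(C) P(S|C) P(R|C) P(W|S,R) with a single einsum, checks that it normalizes, and counts the free parameters in the two representations.

```python
import numpy as np

# Same assumed CPTs as in the previous sketch (0 = false, 1 = true).
p_c = np.array([0.5, 0.5])
p_s_given_c = np.array([[0.5, 0.5], [0.9, 0.1]])
p_r_given_c = np.array([[0.8, 0.2], [0.2, 0.8]])
p_w_given_sr = np.array([[[1.0, 0.0], [0.1, 0.9]],
                         [[0.1, 0.9], [0.01, 0.99]]])

# P(C,S,R,W) = P(C) * P(S|C) * P(R|C) * P(W|S,R), axes ordered (c, s, r, w).
joint = np.einsum('c,cs,cr,srw->csrw', p_c, p_s_given_c, p_r_given_c, p_w_given_sr)
assert np.isclose(joint.sum(), 1.0)

# Free parameters: each CPT row over a binary child costs one number.
full_joint_params = 2**4 - 1            # 15 for an unstructured joint on 4 binary nodes
factored_params = 1 + 2 + 2 + 4         # P(C) + P(S|C) + P(R|C) + P(W|S,R) = 9
print(full_joint_params, factored_params)
```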
Are "Bayesian networks" Bayesian?
Despite the name,
Bayesian networks do not necessarily imply a commitment to Bayesian
statistics.
Indeed, it is common to use
frequentist (maximum likelihood) methods to estimate the parameters of the CPDs.
Rather, they are so called because they use
Bayes' rule for
probabilistic inference, as we explain below.
(The term "directed graphical model" is perhaps more appropriate.)
Nevertheless, Bayes nets are a useful representation for hierarchical
Bayesian models, which form the foundation of applied Bayesian
statistics.
In such a model, the parameters are treated like any other random
variable, and become nodes in the graph.
The most common task we wish to solve using Bayesian networks is
probabilistic inference. For example, consider the water sprinkler
network, and suppose we observe the
fact that the grass is wet. There are two possible causes for this:
either it is raining, or the sprinkler is on. Which is more likely?
We can use Bayes' rule to compute the posterior probability of each
explanation (where 0==false and 1==true):

Pr(S=1 | W=1) = Pr(S=1, W=1) / Pr(W=1) = 0.2781 / 0.6471 = 0.430
Pr(R=1 | W=1) = Pr(R=1, W=1) / Pr(W=1) = 0.4581 / 0.6471 = 0.708

where Pr(W=1) = 0.6471 is a normalizing constant, equal to the probability
(likelihood) of the evidence.
So we see that it is more likely that the grass is wet because
it is raining:
the likelihood ratio is 0.708 / 0.430 = 1.647.
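A minimal sketch of this computation by brute-force enumeration over the joint, using the same assumed CPT values as before, reproduces these numbers:

```python
import numpy as np

# Same assumed CPTs as in the earlier sketches (0 = false, 1 = true).
p_c = np.array([0.5, 0.5])
p_s_given_c = np.array([[0.5, 0.5], [0.9, 0.1]])
p_r_given_c = np.array([[0.8, 0.2], [0.2, 0.8]])
p_w_given_sr = np.array([[[1.0, 0.0], [0.1, 0.9]],
                         [[0.1, 0.9], [0.01, 0.99]]])

joint = np.einsum('c,cs,cr,srw->csrw', p_c, p_s_given_c, p_r_given_c, p_w_given_sr)

p_w1 = joint[:, :, :, 1].sum()                    # Pr(W=1) = 0.6471, the normalizing constant
p_s1_given_w1 = joint[:, 1, :, 1].sum() / p_w1    # Pr(S=1 | W=1) = 0.2781 / 0.6471 ~= 0.430
p_r1_given_w1 = joint[:, :, 1, 1].sum() / p_w1    # Pr(R=1 | W=1) = 0.4581 / 0.6471 ~= 0.708
print(p_w1, p_s1_given_w1, p_r1_given_w1)
```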
Top-down and bottom-up reasoning
In the water sprinkler example, we had evidence of an effect (wet grass), and
inferred the most likely cause. This is called diagnostic, or "bottom
up", reasoning, since it goes from effects to causes; it is a common task
in expert systems.
Bayes nets can also be used for causal, or "top down",
reasoning. For example, we can compute the probability that the grass
will be wet given that it is cloudy.
Hence Bayes nets are often called "generative" models, because they
specify how causes generate effects.
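For example, the causal query Pr(W=1 | C=1) can be read off directly from the CPTs. A sketch using the same assumed values as above (so the 0.745 it prints comes from those assumptions, not from a number in the original text):

```python
import numpy as np

# Pr(W=1 | C=1) = sum_{s,r} Pr(S=s | C=1) Pr(R=r | C=1) Pr(W=1 | s, r)
p_s_given_c1 = np.array([0.9, 0.1])      # P(S=0 | C=1), P(S=1 | C=1)
p_r_given_c1 = np.array([0.2, 0.8])      # P(R=0 | C=1), P(R=1 | C=1)
p_w1_given_sr = np.array([[0.0, 0.9],    # P(W=1 | S, R), rows: S, columns: R
                          [0.9, 0.99]])

p_w1_given_c1 = np.einsum('s,r,sr->', p_s_given_c1, p_r_given_c1, p_w1_given_sr)
print(p_w1_given_c1)   # ~0.745 with these CPTs
```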
For more information about causality, see the following books:
"Causality: Models, Reasoning, and Inference", Judea Pearl, 2000, Cambridge University Press.
"Causation, Prediction and Search", Spirtes, Glymour and Scheines, 2001 (2nd edition), MIT Press.
"Cause and Correlation in Biology", Bill Shipley, 2000, Cambridge University Press.
"Computation, Causation and Discovery", Glymour and Cooper (eds), 1999, MIT Press.
Conditional independence in Bayes Nets
In general,
the conditional independence relationships encoded by a Bayes Net
are best explained by means of the "Bayes Ball"
algorithm (due to Ross Shachter), which is as follows:
Two (sets of) nodes A and B are conditionally independent
(d-separated) given a set C
if and only if there is no
way for a ball to get from A to B in the graph, where the allowable
movements of the ball are shown below.
Hidden nodes are nodes whose values are not known, and are drawn unshaded;
observed nodes (the ones we condition on) are shaded.
The dotted arcs indicate direction of flow of the ball.
The most interesting case is the first column, when we have two arrows converging on a
node X (so X is a "leaf" with two parents).
If X is hidden, its parents are marginally independent, and hence the
ball does not pass through (the ball being "turned around" is
indicated by the curved arrows); but if X is observed, the parents become
dependent, and the ball does pass through,
because of the explaining away phenomenon.
Notice that, if this graph were undirected, the child would always
separate the parents, incorrectly implying that they are independent
given the child; hence when converting a directed
graph to an undirected graph, we must add links between "unmarried"
parents who share a common child (i.e., "moralize" the graph) to prevent us reading off incorrect
independence statements.
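To make explaining away concrete for the water sprinkler network, the sketch below (same assumed CPTs) shows that, given wet grass, additionally observing rain lowers the posterior probability that the sprinkler was on:

```python
import numpy as np

p_c = np.array([0.5, 0.5])
p_s_given_c = np.array([[0.5, 0.5], [0.9, 0.1]])
p_r_given_c = np.array([[0.8, 0.2], [0.2, 0.8]])
p_w_given_sr = np.array([[[1.0, 0.0], [0.1, 0.9]],
                         [[0.1, 0.9], [0.01, 0.99]]])
joint = np.einsum('c,cs,cr,srw->csrw', p_c, p_s_given_c, p_r_given_c, p_w_given_sr)

# Posterior on the sprinkler given wet grass alone...
p_s1_given_w1 = joint[:, 1, :, 1].sum() / joint[:, :, :, 1].sum()      # ~0.43
# ...drops once rain is also observed: the rain explains away the wet grass.
p_s1_given_w1r1 = joint[:, 1, 1, 1].sum() / joint[:, :, 1, 1].sum()    # ~0.19
print(p_s1_given_w1, p_s1_given_w1r1)
```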
Now consider the second column, in which we have two diverging arrows from X
(so X is a "root").
If X is hidden,
the children are dependent, because they have a hidden common cause,
so the ball passes through.
If X is observed, its children are rendered conditionally
independent, so the ball does not pass through.
Finally, consider the case in which we have one incoming and outgoing
arrow to X. It is intuitive that the nodes upstream
and downstream of X are dependent iff X is hidden, because
conditioning on a node breaks the graph at that point.
Bayes nets with discrete and continuous nodes
The introductory example used nodes with categorical values and multinomial distributions.
It is also possible to create Bayesian networks with continuous valued nodes.
The most common distribution for such variables is the Gaussian.
For discrete nodes with continuous parents, we can use the
logistic/softmax distribution.
Using multinomials, conditional Gaussians, and the softmax
distribution, we have a rich toolbox for making complex models.
Some examples are shown below.
(Circles denote continuous-valued random variables,
squares denote discrete rv's, clear
means hidden, and shaded means observed.)
For more details, see this excellent paper:
"A Unifying Review of Linear Gaussian Models", Sam Roweis & Zoubin Ghahramani,
Neural Computation 11(2):305--345, 1999.
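As an illustrative sketch of these CPD types (the network and all parameter values here are invented for illustration, not taken from the tutorial), the snippet below samples from a tiny hybrid model: a discrete root with a multinomial CPD, a continuous child with a conditional-Gaussian CPD, and a discrete child of a continuous parent with a logistic (softmax) CPD.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hybrid_network(n=5):
    """Draw joint samples (d, x, y) from the three CPD types discussed above."""
    samples = []
    for _ in range(n):
        # Discrete root node with a multinomial (here Bernoulli) CPD.
        d = rng.choice([0, 1], p=[0.7, 0.3])

        # Continuous child with a conditional-Gaussian CPD: mean and variance depend on d.
        mu, sigma = (0.0, 1.0) if d == 0 else (3.0, 0.5)
        x = rng.normal(mu, sigma)

        # Discrete child of a continuous parent via a logistic (2-class softmax) CPD.
        p_y1 = 1.0 / (1.0 + np.exp(-(1.5 * x - 2.0)))   # illustrative weights
        y = int(rng.random() < p_y1)

        samples.append((int(d), x, y))
    return samples

print(sample_hybrid_network())
```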
Dynamic Bayesian Networks (DBNs)
Temporal models such as hidden Markov models (HMMs) and linear dynamical
systems can be generalized by representing the hidden (and observed) state
in terms of state variables, which can have complex interdependencies.
The graphical structure provides an easy way to specify these
conditional independencies, and hence to provide a compact
parameterization of the model.
Note that "temporal Bayesian network" would be a better name than
"dynamic Bayesian network", since
it is assumed that the model structure does not change, but
the term DBN has become entrenched.
We also normally assume that the parameters do not
change, i.e., the model is time-invariant.
However, we can always add extra
hidden nodes to represent the current "regime", thereby creating
mixtures of models to capture periodic non-stationarities.
There are some cases where the size of the state space can change over
time, e.g., tracking a variable, but unknown, number of objects.
In this case, we need to change the model structure over time.
A generative model for generative models
The figure below, produced by Zoubin Ghahramani and Sam Roweis, is a
good summary of the relationships between some popular graphical models.
See also:
S. M. Aji and R. J. McEliece, "The generalized distributive law",
IEEE Trans. Inform. Theory, vol. 46, no. 2 (March 2000), pp. 325--343.
F. R. Kschischang, B. J. Frey and H.-A. Loeliger, "Factor graphs and the
sum-product algorithm", IEEE Transactions on Information Theory, February 2001.
A basic exact inference method is variable elimination, in which we push sums
inside products and sum out ("eliminate") one variable at a time.
The amount of work we perform when computing a marginal is bounded by
the size of the largest intermediate term that we encounter. Choosing a summation
(elimination) ordering to
minimize this is NP-hard, although greedy algorithms work well in practice.
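Here is a sketch of variable elimination on the water sprinkler network (same assumed CPTs as earlier), computing Pr(W) with the ordering C, S, R: each einsum call sums out one variable, and the largest intermediate factor created along the way bounds the cost.

```python
import numpy as np

p_c = np.array([0.5, 0.5])
p_s_given_c = np.array([[0.5, 0.5], [0.9, 0.1]])
p_r_given_c = np.array([[0.8, 0.2], [0.2, 0.8]])
p_w_given_sr = np.array([[[1.0, 0.0], [0.1, 0.9]],
                         [[0.1, 0.9], [0.01, 0.99]]])

# Eliminate C, then S, then R; each step produces an intermediate factor.
tau_sr = np.einsum('c,cs,cr->sr', p_c, p_s_given_c, p_r_given_c)   # sum out C -> factor on (S,R)
tau_rw = np.einsum('sr,srw->rw', tau_sr, p_w_given_sr)             # sum out S -> factor on (R,W)
p_w = np.einsum('rw->w', tau_rw)                                   # sum out R -> marginal on W
print(p_w)   # ~[0.3529, 0.6471]; Pr(W=1) matches the normalizing constant used earlier
```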
Dynamic programming
If we wish to compute several marginals at the same time, we can use Dynamic
Programming (DP) to avoid the redundant computation that would be involved
if we used variable elimination repeatedly.
If the underlying undirected graph of the BN is acyclic (i.e., a tree), we can use a
local message passing algorithm due to Pearl.
This is a generalization of the well-known forwards-backwards
algorithm for HMMs (chains).
For details, see
"Probabilistic Reasoning in Intelligent Systems", Judea Pearl,
1988, 2nd ed.
"Fusion and propogation with multiple observations in belief networks",
Peot and Shachter, AI 48 (1991) p. 299-318.
If the BN has undirected cycles (as in the water sprinkler example),
local message passing algorithms run the risk of double counting.
e.g., the information from S and R flowing
into W is not independent, because it came from a common cause, C.
The most common approach is therefore to convert the BN into a tree,
by clustering nodes together, to form what is called a
junction tree, and then running a local message passing algorithm on
this tree. The message passing scheme could be Pearl's algorithm, but
it is more common to use a variant designed for undirected models.
For more details, see the references below.
The running time of the DP algorithms is exponential in the size of
the largest cluster (these clusters correspond to the intermediate
terms created by variable elimination). This size is called the
induced width of the graph. Minimizing this is NP-hard.
Approximation algorithms
Many models of interest,
such as those with repetitive structure, as in
multivariate time-series or image analysis,
have large induced width, which makes exact
inference very slow.
We must therefore resort to approximation techniques.
Unfortunately, approximate inference is #P-hard, but we can nonetheless come up
with approximations which often work well in practice. Below is a list
of the major techniques.
Variational methods.
The simplest example is the mean-field approximation,
which exploits the law of
large numbers to approximate large sums of random variables by their
means. In particular, we essentially decouple all the nodes, and
introduce a new parameter, called a variational parameter, for each
node, and iteratively update these parameters so as to minimize the
cross-entropy (KL distance) between the approximate and true
probability distributions. Updating the variational parameters becomes a proxy for
inference. The mean-field approximation produces a lower bound on the
likelihood. More sophisticated methods are possible, which give
tighter lower (and upper) bounds.
Sampling (Monte Carlo) methods. The simplest kind is importance
sampling, where we draw random samples x from P(X), the (unconditional)
distribution on the hidden variables, and
then weight the samples by their likelihood, P(y|x), where y is the
evidence. A more efficient approach in high dimensions is called Markov
Chain Monte Carlo (MCMC), and
includes as special cases Gibbs sampling and the Metropolis-Hastings algorithm.
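A sketch of the importance-sampling scheme just described, applied to the water sprinkler network (same assumed CPTs): the hidden variables are drawn from the prior, and each sample is weighted by the likelihood of the evidence W=1.

```python
import numpy as np

rng = np.random.default_rng(0)

p_c1 = 0.5
p_s1_given_c = np.array([0.5, 0.1])      # P(S=1 | C), indexed by C
p_r1_given_c = np.array([0.2, 0.8])      # P(R=1 | C), indexed by C
p_w1_given_sr = np.array([[0.0, 0.9],    # P(W=1 | S, R), rows: S, columns: R
                          [0.9, 0.99]])

def estimate_p_r1_given_w1(n_samples=50_000):
    """Estimate Pr(R=1 | W=1) by weighting prior samples with P(W=1 | S, R)."""
    weights = np.empty(n_samples)
    rained = np.empty(n_samples)
    for i in range(n_samples):
        c = int(rng.random() < p_c1)
        s = int(rng.random() < p_s1_given_c[c])
        r = int(rng.random() < p_r1_given_c[c])
        weights[i] = p_w1_given_sr[s, r]   # likelihood of the evidence under this sample
        rained[i] = r
    return np.sum(weights * rained) / np.sum(weights)

print(estimate_p_r1_given_w1())   # should be close to the exact value 0.708
```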
"Loopy belief propogation". This
applying Pearl's algorithm to the original
graph, even if it has loops (undirected cycles).
In theory, this runs the risk of double counting, but Yair Weiss and
others have proved that in certain cases (e.g., a single loop), events are double counted
"equally", and hence "cancel" to give the right answer.
Belief propagation is equivalent to exact inference on a modified
graph, called the universal cover or unwrapped/computation tree,
which has the same local topology as the original graph.
This is the same as the Bethe and cavity/TAP approaches in statistical
physics. Hence there is a deep connection between
belief propagation and variational methods that people are currently investigating.
Bounded cutset conditioning. By instantiating subsets of the variables,
we can break loops in the graph.
Unfortunately, when the cutset is large, this is very slow.
By instantiating only a subset of values of the cutset, we can compute
lower bounds on the probabilities of interest.
Alternatively, we can sample the cutsets jointly, a technique known as block Gibbs sampling.
Parametric approximation methods.
These express the intermediate summands in a simpler
form, e.g., by approximating them as a product of smaller factors.
"Minibuckets" and the Boyen-Koller algorithm fall into this category.
Approximate inference is a huge topic:
see the references for more details.
Inference in DBNs
The general inference problem for DBNs is to compute
P(X(i,t0) | y(:, t1:t2)), where X(i,t) represents the i'th hidden
variable at time t and y(:, t1:t2) represents all the evidence
between times t1 and t2.
(In fact, we often also want to compute joint distributions of
variables over one or more time slices.)
There are several special cases of interest, illustrated below.
The arrow indicates t0: it is X(t0) that we are trying to estimate.
The shaded region denotes t1:t2, the available data.
Here is a simple example of inference in an LDS.
Consider a particle moving in the plane at
constant velocity subject to random perturbations in its trajectory.
The new position (x1, x2) is the old position plus the velocity (dx1,
dx2) plus noise w.
[ x1(t)  ]   [1 0 1 0] [ x1(t-1)  ]
[ x2(t)  ] = [0 1 0 1] [ x2(t-1)  ] + w(t)
[ dx1(t) ]   [0 0 1 0] [ dx1(t-1) ]
[ dx2(t) ]   [0 0 0 1] [ dx2(t-1) ]
We assume we only observe the position of the particle.
[ y1(t) ]   [1 0 0 0] [ x1(t)  ]
[ y2(t) ] = [0 1 0 0] [ x2(t)  ] + v(t)
                      [ dx1(t) ]
                      [ dx2(t) ]
Suppose we start out at position (10,10) moving to the right with
velocity (1,0).
We sampled a random trajectory of length 15.
Below we show the filtered and smoothed trajectories.
The mean squared error of the filtered estimate is 4.9; for the
smoothed estimate it is 3.2.
Not only is the smoothed estimate better, but we know that
it is better, as illustrated by the smaller uncertainty ellipses;
this can help in e.g., data association problems.
Note how the smoothed ellipses are larger at the ends, because these
points have seen less data. Also, note how rapidly the filtered
ellipses reach their steady-state (Riccati) values.
(See my Kalman
filter toolbox for more details.)
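Here is a minimal sketch of this kind of experiment: simulate the constant-velocity model above and run a standard Kalman filter on the noisy position observations. The noise covariances Q and R, the initial belief, and the random seed are illustrative assumptions, so the resulting errors will not match the 4.9 / 3.2 figures quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# State x = [x1, x2, dx1, dx2]: constant-velocity dynamics and position-only observations.
A = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])
Q = 0.1 * np.eye(4)    # process noise covariance (assumed)
R = 1.0 * np.eye(2)    # observation noise covariance (assumed)

# Sample a trajectory of length 15, starting at (10,10) with velocity (1,0).
T = 15
x = np.array([10., 10., 1., 0.])
states, obs = [], []
for t in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(4), Q)
    states.append(x.copy())
    obs.append(H @ x + rng.multivariate_normal(np.zeros(2), R))

# Kalman filter: recursively compute P(X(t) | y(1:t)).
mu, Sigma = np.array([10., 10., 1., 0.]), np.eye(4)
filtered = []
for y in obs:
    mu_pred = A @ mu                          # predict
    Sigma_pred = A @ Sigma @ A.T + Q
    S = H @ Sigma_pred @ H.T + R              # update
    K = Sigma_pred @ H.T @ np.linalg.inv(S)
    mu = mu_pred + K @ (y - H @ mu_pred)
    Sigma = (np.eye(4) - K @ H) @ Sigma_pred
    filtered.append(mu.copy())

mse = np.mean([(f[:2] - s[:2]) ** 2 for f, s in zip(filtered, states)])
print("filtered mean squared error on position:", mse)
```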
For more information about learning, see:
W. L. Buntine, "Operations for learning with graphical models",
J. AI Research, 2:159--225, 1994.
D. Heckerman, "A tutorial on learning with Bayesian networks",
Microsoft Research tech. report, MSR-TR-95-06, 1996.
Decision theory
Decision making under uncertainty combines probability with utility. A
classical tractable case is when the system model is a linear dynamical system
and the utility function is negative quadratic loss, e.g., consider a
missile tracking an airplane: its goal is to minimize the squared
distance between itself and the target. When the utility function
and/or the system model becomes more complicated, traditional methods
break down, and one has to use reinforcement learning to find the
optimal policy (mapping from states to actions).
A fielded example is the Vista system, developed
by Eric Horvitz.
The Vista system is a decision-theoretic system that has been used at
NASA Mission Control Center in Houston for several years. The system
uses Bayesian networks to interpret live telemetry and provides advice
on the likelihood of alternative failures of the space shuttle's
propulsion systems. It also considers time criticality and recommends
actions of the highest expected utility. The Vista system also employs
decision-theoretic methods for controlling the display of information
to dynamically identify the most important information to highlight.
Horvitz has gone on to attempt to apply similar technology to
Microsoft products, e.g., the Lumiere project.
Applications
Special cases of BNs were independently invented by many different
communities, for use in e.g., genetics (linkage analysis), speech
recognition (HMMs), tracking (Kalman filtering), data compression
(density estimation)
and coding (turbocodes), etc.
For examples of other applications, see the
special issue of Communications of the ACM, 38(3), March 1995,
and the Microsoft
Decision Theory Group page.
Applications to biology
This is one of the hottest areas.
For a review, see "Inferring cellular networks using probabilistic
graphical models", Nir Friedman, Science, v303 p799, 6 Feb 2004.
Books
F. Jensen.
"An introduction to Bayesian Networks".
UCL Press.
Out of print.
Superseded by his 2001 book.
S. Lauritzen.
"Graphical Models",
The definitive mathematical exposition of the theory of graphical
S. Russell and P. Norvig.
"Artificial Intelligence: A Modern Approach".
Prentice Hall.
Popular undergraduate textbook that includes a readable chapter on
directed graphical models.
J. Whittaker.
"Graphical Models in Applied Multivariate Statistics",
This is the first book published on graphical modelling from a statistics
perspective.
R. Neapolitan.
"Probabilistic Reasoning in Expert Systems".
John Wiley & Sons.
"Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference."
Morgan Kaufmann.
The book that got it all started!
A very insightful book, still relevant today.
Review articles
P. Smyth, 1998.
Pattern Recognition Letters.
E. Charniak, 1991.
"Bayesian networks without tears", AI Magazine.
Sam Roweis & Zoubin Ghahramani, 1999.
A Unifying Review of Linear Gaussian Models,
Neural Computation 11(2) (1999) pp.305-345
Exact Inference
C. Huang and A. Darwiche, 1996.
"Inference in belief networks: A procedural guide",
Intl. J. Approximate Reasoning, 15(3):225-263.
S. M. Aji and R. J. McEliece, 2000.
"The generalized distributive law",
IEEE Trans. Inform. Theory, 46(2):325--343.
L. R. Rabiner, 1989.
"A tutorial on hidden Markov models and selected applications in speech recognition",
Proc. of the IEEE, 77(2):257--286.
Z. Ghahramani, 1998.
"Learning dynamic Bayesian networks".
In C. L. Giles and M. Gori (eds.),
Adaptive Processing of Sequences and Data Structures,
Lecture Notes in Artificial Intelligence, 168--197. Berlin: Springer-Verlag.
