Differential equations demystified pdf download
Thus the expansions (3) and (4) represent the same function on that interval. To obtain the cosine series for f, we consider the even extension of f; of course all the bn will then vanish. But many physical problems take place on an interval of some other length. We must therefore be able to adapt our analysis to intervals of any length. This amounts to a straightforward change of scale on the horizontal axis.

We treat the matter in the present section. Now let us write out these last two formulas and perform a change of variables.
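The change of variables alluded to produces the standard formulas for a function on an interval of length 2L (the symbol L for the half-length is our notation; the book's may differ):

```latex
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\left( a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right),
\qquad
a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx,
\quad
b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx .
```

Setting t = πx/L reduces these to the familiar formulas on [−π, π].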

Math Note: We can combine the ideas of the present section with those of the last section to produce the Fourier sine series or cosine series of a function on any interval centered about the origin. What is the interest of the inner product? That is the idea that we shall explore in the present section. Let X be a vector space. This means that X is equipped with (i) a notion of addition and (ii) a notion of scalar multiplication.

These two operations are hypothesized to satisfy the expected properties: addition is commutative and associative; scalar multiplication is commutative, associative, and distributive; and so forth. We shall give some interesting examples of inner products below. Let ‖ · ‖ be the induced norm. Just as an exercise, we shall derive the triangle inequality from the Cauchy-Schwarz-Bunjakovski inequality. Now taking the square root of both sides completes the argument.
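For a real inner product space, the derivation runs as follows; taking the square root of both sides gives the triangle inequality ‖f + g‖ ≤ ‖f‖ + ‖g‖:

```latex
\|f+g\|^2 = \langle f+g,\, f+g\rangle
= \|f\|^2 + 2\langle f, g\rangle + \|g\|^2
\le \|f\|^2 + 2\|f\|\,\|g\| + \|g\|^2
= \bigl(\|f\| + \|g\|\bigr)^2 ,
```

where the middle inequality is Cauchy-Schwarz-Bunjakovski.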

This is certainly a vector space with the usual notions of addition of functions and scalar multiplication of functions. So our inner product makes sense. See also [KRA3] for a more thorough consideration of these matters. That and (2) give a system that may be solved for f and g with elementary algebra. See also [LAN]. For all practical purposes, these events mark the beginning of the mathematical theory of Fourier series (see [LAN]). Now we have what are called boundary conditions: we specify the value (not the derivative) of our solution at two different points.

For instance, in the discussion of the vibrating string in the last section, we wanted our string to be pinned down at the two endpoints. These are typical boundary conditions coming from a physical problem.

The situation with boundary conditions is quite different from that for initial conditions. The latter is a sophisticated variation of the fundamental theorem of calculus. The former is rather more subtle.

So let us begin to analyze. Our analysis will ignore damping effects, such as air resistance. We assume that, in its relaxed position, the string is as in Fig. The string is plucked in the vertical direction, and is thus set in motion in a vertical plane.

We adopt the usual physical conceit of assuming that the displacement motion of this string element is small, so that there is only a slight error in supposing that the motion of each point of the string element is strictly vertical.

We let the tension of the string, at the point x at time t, be denoted by T(x, t). Note that T acts only in the tangential direction (i.e., along the string). We use subscripts to denote derivatives. There are also elliptic equations (such as the Laplace equation) and parabolic equations (such as the heat equation).

We shall say more about these as the book develops. But of course this is the eigenvalue problem that we treated at the beginning of the section. Ignoring the rather delicate issue of convergence (which was discussed a bit in Section 4), we know from our studies in Chapter 4 that such an expansion is valid for a rather broad class of functions f.

Thus the wave equation is solvable in considerable generality. But now we understand it as an orthogonality condition, and we see how the condition arises naturally from the differential equation. We shall give a taste of these ideas in Section 5. Certainly orthogonality, and orthogonal expansions, is one of the most pervasive ideas in modern analysis. Assume that the endpoints are held at temperature 0, and that the temperature of each cross-section is constant. The problem is to describe the temperature u(x, t) of the point x in the rod at time t.

Let us now indicate the manner in which Fourier solved his problem. We shall derive such an equation using three physical principles: (1) the density of heat energy is proportional to the temperature u, hence the amount of heat energy in any interval [a, b] of the rod is proportional to ∫_a^b u(x, t) dx; (2) heat flows from warmer to cooler regions at a rate proportional to the temperature gradient (Newton's law of cooling); (3) heat has no sources or sinks.

Now (3) tells us that the only way that heat can enter or leave any interval portion [a, b] of the rod is through the endpoints. And (2) tells us exactly how this happens.
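The eventual separation-of-variables solution of the heat problem can be checked numerically. The sketch below assumes the normalized problem u_t = k·u_xx on [0, π] with both endpoints held at temperature 0; the function names and the choice of interval are ours for illustration, not the book's:

```python
import math

def heat_solution(f, k, x, t, n_terms=50, quad_pts=2000):
    """Separation-of-variables solution of u_t = k*u_xx on [0, pi] with
    u(0, t) = u(pi, t) = 0 and u(x, 0) = f(x):
        u(x, t) = sum_n b_n exp(-k n^2 t) sin(n x),
        b_n = (2/pi) * integral_0^pi f(s) sin(n s) ds   (midpoint rule)."""
    h = math.pi / quad_pts
    u = 0.0
    for n in range(1, n_terms + 1):
        bn = (2.0 / math.pi) * h * sum(
            f((j + 0.5) * h) * math.sin(n * (j + 0.5) * h)
            for j in range(quad_pts))
        u += bn * math.exp(-k * n * n * t) * math.sin(n * x)
    return u

# With f(x) = sin x, only b_1 = 1 survives, so u(x, t) = exp(-k t) sin x.
u = heat_solution(math.sin, 1.0, math.pi / 2, 0.5)
```

Each higher initial temperature mode sin(nx) decays at the rate exp(−k n² t), which is why the solution smooths out so quickly.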

Here [ ] denotes the greatest integer function. The bulk, and remainder, of the book consists of separate chapters in which the expansions for particular functions are computed. Math Note: You will notice several parallels between our analysis of the heat equation in this section and the solution of the wave equation in Subsection 5.

In both cases this led to trigonometric solutions. And for the general solution we considered a trigonometric series. Thus there are unifying principles that occur repeatedly in different parts of the theory of differential equations. Certainly Fourier series is one of those principles.

The study of this equation and its solutions and subsolutions and their applications is a deep and rich branch of mathematics called potential theory. There are applications to heat, to gravitation, to electromagnetics, and to many other parts of physics.

The equation plays a central role in the theory of partial differential equations, and is also an integral part of complex variable theory. Two-dimensional and higher-dimensional analysis is quite different from the analysis in one dimension. We shall get a taste of the higher-dimensional tools in the next section. We have studied this situation in detail in Section 5.

Consider a thin aluminum disc of radius 1, and imagine applying a heat distribution to the boundary of that disc. We seek to understand the steady-state heat distribution on the entire disc. But in fact it is possible to produce a closed formula for this solution. There is a great deal of information about w and its relation to f contained in this formula.
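The closed formula in question is the Poisson integral. In one standard normalization (ours; the book's constants may differ), the steady-state temperature w inside the unit disc with boundary data f is

```latex
w\bigl(re^{i\theta}\bigr)
= \frac{1}{2\pi}\int_0^{2\pi}
\frac{1 - r^2}{1 - 2r\cos(\theta - \phi) + r^2}\; f\bigl(e^{i\phi}\bigr)\, d\phi,
\qquad 0 \le r < 1 .
```

The fraction inside the integral is the Poisson kernel.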

Math Note: The Poisson kernel, and its associated integral, is but one example of a reproducing kernel in mathematics. These are powerful tools for analyzing and continuing (or extending) functions. This setting is the fairly broad and far-reaching subject of Sturm-Liouville problems. It turns out (and we have seen several instances of this phenomenon) that the sequence of eigenfunctions associated with a wide variety of boundary value problems enjoys the orthogonality property.

In other circumstances, we may wish to prescribe the values of y at two distinct points, say at a and at b. We now begin to examine the conditions under which such a boundary value problem has a nontrivial solution. What are the eigenvalues and eigenfunctions for this problem? But now, as motivation for the work in this section, we review. Let us denote by W(x) the Wronskian determinant of the two solutions ym, yn. We want the right-hand side of this last equation to vanish.

In the second instance, we may conclude that ym , yn are linearly independent. Otherwise they are linearly dependent. The idea of orthogonality with respect to a weight has now arisen for us in a concrete context.
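In standard Sturm-Liouville notation (our symbols: the equation is (p y′)′ + (q + λw)y = 0, and W is the Wronskian of ym, yn), the identity behind this discussion is

```latex
(\lambda_m - \lambda_n)\int_a^b w(x)\, y_m(x)\, y_n(x)\, dx
= \Bigl[\, p(x)\bigl(y_m(x)\,y_n'(x) - y_n(x)\,y_m'(x)\bigr)\Bigr]_a^b
= \bigl[\, p\,W \bigr]_a^b .
```

When the boundary conditions force the right-hand side to vanish and λm ≠ λn, the eigenfunctions ym, yn are orthogonal with respect to the weight w.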

Certainly Sturm-Liouville problems play a prominent role in engineering problems, especially ones coming from mechanics. There is an important question that now must be asked. Namely, are there enough of the eigenfunctions yj so that virtually any function f can be expanded as in (2)? But there is no hope that a large class of functions f can be spanned by just y1, y3, y7.

Our intention here has been merely to acquaint the reader with some of the language of Sturm-Liouville problems. Show that the moving string has the same general shape, regardless of the value of c.

More generally, the overall theory of transforms has become an important part of modern mathematics. The idea of a transform is that it turns a given function into another function. We are already acquainted with several transforms: I. We can most fruitfully study linear transformations that are given by integration.

We sometimes write the Laplace transform of f(x) as F(p). For instance, L[1] = 1/p and, more generally, L[x^n] = n!/p^(n+1). We shall not actually perform all the integrations for the Laplace transforms in Table 6.

We content ourselves with the third one, just to illustrate the idea. It may be noted that the Laplace transform is a linear operator.
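We do not know which entry is the "third one" in the book's Table 6; as an illustration of how any such entry can be sanity-checked, the sketch below (all names are ours) approximates L[x²](p) = ∫₀^∞ x² e^{−px} dx numerically and compares it with the closed form 2/p³:

```python
import math

def laplace(f, p, upper=60.0, n=100000):
    """Approximate L[f](p) = integral_0^infty f(x) e^{-px} dx by the
    midpoint rule on [0, upper]; for p > 0 the tail beyond `upper`
    is negligible."""
    h = upper / n
    return h * sum(f((j + 0.5) * h) * math.exp(-p * (j + 0.5) * h)
                   for j in range(n))

# L[x^2](p) should equal 2/p^3; check at p = 2.
p = 2.0
approx = laplace(lambda x: x * x, p)
```

The same routine can be pointed at any of the table entries for a quick empirical check.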

Here a and b are real constants. We apply the Laplace transform L to both sides of (3), of course using the linearity of L. We may also gather like terms together. The following examples will illustrate the idea. A useful general property of the Laplace transform concerns its interaction with translations. Thus L is invertible on its image. We are able to verify this assertion empirically through our calculations; the general result is proved in a more advanced treatment.

This is the solution of our initial value problem. Math Note: Since we know how to calculate the Laplace transform of the derivative of a function, it is natural also to consider the Laplace transform for the antiderivative of a function.

Derive a suitable formula. And of course we apply the usual form for the Laplace transform of the derivative to the second term on the left. The result follows. It is also a matter of some interest to integrate the Laplace transform. We can anticipate how this will go by running the differentiation formulas in reverse. The last property listed concerns convolution, and we shall treat that topic in the next section. The convolution formula is particularly useful in calculating inverse Laplace transforms.
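In the notation F(p) = L[f](p), the formulas in play in this passage are the standard ones (stated for f continuous and of exponential order):

```latex
L\!\left[\int_0^x f(t)\,dt\right](p) = \frac{F(p)}{p},
\qquad
L\!\left[\frac{f(x)}{x}\right](p) = \int_p^{\infty} F(s)\,ds ,
```

the second being the differentiation formula L[x f(x)] = −F′(p) "run in reverse."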

We leave the details for you. Usually k is a mathematical model for the physical process being studied. The objective is to solve for y. As you can see, the integral equation involves a convolution. And, not surprisingly, the Laplace transform comes to our aid in unraveling the equation. In fact we apply the Laplace transform to both sides of (2).

Imagine a wire bent into a smooth curve (Fig.). The curve terminates at the origin. Imagine a bead sliding from the top of the wire, without friction, down to the origin. Then the total time for the descent of the bead is some number T(y) that depends on the shape of the wire and on the initial height y. What is interesting about this problem, from the point of view of the present section, is that its mathematical formulation leads to an integral equation of the sort that we have just been discussing.

And we will be able to solve it using the Laplace transform. We use (u, v) as the coordinates of any intermediate point on the curve. The expression on the left-hand side is the standard one from physics for kinetic energy. And the expression on the right is the potential energy. We think of f(y) as the unknown. A curve with this property (if in fact one exists) is called a tautochrone.
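In standard notation (our symbols: v the height of the intermediate point, f(y) = ds/dy the unknown arc-length density, g the gravitational acceleration), conservation of energy gives the descent time as an Abel integral equation:

```latex
\frac{1}{2}\left(\frac{ds}{dt}\right)^{\!2} = g\,(y - v)
\quad\Longrightarrow\quad
T(y) = \frac{1}{\sqrt{2g}}\int_0^y \frac{f(v)}{\sqrt{y - v}}\,dv ,
```

which is precisely a convolution of f with v^{−1/2}, hence amenable to the Laplace transform.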

These are the parametric equations of a cycloid (Fig.). Math Note: We see that the tautochrone turns out to be a cycloid. This problem and its solution constitute one of the great triumphs of modern mechanics. An additional very interesting property of this curve is that it is the brachistochrone. That means that, given two points A and B in space, the curve connecting them down which a bead will slide the fastest is the cycloid (Fig.). This last assertion was proved by Isaac Newton, who read the problem as posed by Bernoulli in a periodical.
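In one standard normalization (the constant a, fixing the radius of the generating circle, is our notation), the parametric equations of the cycloid are

```latex
x(\theta) = a\,(\theta - \sin\theta),
\qquad
y(\theta) = a\,(1 - \cos\theta).
```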

He solved the problem in a few hours, and submitted his solution anonymously. Any physical system that responds to a stimulus can be thought of as a device or black box that transforms an input function the stimulus into an output function the response.

Notice that, since the equation is inhomogeneous, these zero initial conditions cannot force the solution to be identically zero. With some effort, we can rewrite equation (4) in an even more appealing way. They allow us to represent a solution of our differential equation for a general input function in terms of a solution for a step function.

What is an impulse function? A truly rigorous treatment of the impulse requires the theory of distributions (or generalized functions) and we cannot cover it here. Of course this is the same solution that we obtained in the last example, using the other superposition formula. Calculate each of the following Laplace transforms: (a) L[x^2 sin ax]; (b) L[xe^x]. One might hope that every differential equation can be solved in closed form. Such is not the case. It is sometimes possible to say something qualitative about solutions.

And we have also seen that certain important equations that come from physics are fortuitously simple, and can be attacked effectively. But the bottom line is that many of the equations that we must solve for engineering or other applications simply do not have closed-form solutions.

Just as an instance, the equations that govern the shape of an airplane wing cannot be solved explicitly. How do we come to terms with the intractability of differential equations?

The advent of high-speed digital computers has made it both feasible and, indeed, easy to perform numerical approximation of solutions. The subject of the numerical solution of differential equations is a highly developed one, and is applied daily to problems in engineering, physics, biology, astronomy, and many other parts of science.

Solutions may generally be obtained to any desired degree of accuracy, graphs drawn, and almost any necessary analysis performed. Not surprisingly, and like many of the other fundamental ideas related to calculus, the basic techniques for the numerical solution of differential equations go back to Newton and Euler. This is quite amazing, for these men had no notion of the computing equipment that we have available today.

Their insights were quite prescient and powerful. In the present chapter, we shall only introduce the most basic ideas in the subject of numerical analysis of differential equations. First, the derivatives in the equation are replaced by differences (as in replacing the derivative by a difference quotient). Second, the continuous variable x is replaced by a discrete variable. Third, the real number line is replaced by a discrete set of values. Any type of approximation argument involves some sort of loss of information; that is to say, there will always be an error term.

It is also the case that these numerical approximation techniques can give rise to instability phenomena and other unpredictable behavior.

Whenever possible, the user should also employ qualitative techniques. Endeavor to determine whether the solution is bounded, periodic, or stable.

How do the different solutions interact with each other? In this way you are not using the computing machine blindly, but are instead using the machine to aid and augment your understanding. The spirit of the numerical method is this. The initial condition tells us that the point (0, 1) lies on the graph of the solution y. Thus the graph will proceed to the right with slope 1.

Let us assume that we shall do our numerical calculation with mesh 0. So we proceed to the right to the point 0. Now we return to the differential equation to obtain the slope of the solution at this new point. Thus, when we proceed to sketch our approximate solution graph to the right of 0. Of course this is a very simple-minded example, and it is easy to imagine that the approximate solution is diverging rather drastically and unpredictably with each iteration of the method.

In subsequent sections we shall learn techniques of Euler (which formalize the method just described) and Runge-Kutta (which give much better, and more reliable, results). The Euler method is obtained from the most simple technique for approximating the integral. Namely, we assume that the integrand does not vary much on the interval [x0, x1], and therefore that a rather small error will result if we replace f(x, y) by its value at the left endpoint.

Then the points (x0, y0), (x1, y1), ... are plotted, as in Figure 7. This process is iterated in the succeeding lines of Table 7. The displayed data make clear that reducing the step size will increase accuracy.
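The scheme just described is only a few lines of code. A minimal sketch (the test problem y′ = y, y(0) = 1, with exact solution e^x, is our choice for illustration):

```python
import math

def euler(f, x0, y0, h, n):
    """Euler's method for y' = f(x, y), y(x0) = y0:
    step y_{k+1} = y_k + h * f(x_k, y_k), x_{k+1} = x_k + h."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

# Test problem (our choice): y' = y, y(0) = 1, exact solution e^x.
_, ys_coarse = euler(lambda x, y: y, 0.0, 1.0, 0.10, 10)  # h = 0.10
_, ys_fine = euler(lambda x, y: y, 0.0, 1.0, 0.05, 20)    # h = 0.05
err_coarse = abs(ys_coarse[-1] - math.e)  # error at x = 1
err_fine = abs(ys_fine[-1] - math.e)      # roughly half of err_coarse
```

Comparing the two runs at x = 1 exhibits the behavior described in the text: halving the step size roughly halves the error.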

In the next section we shall discuss errors, and in particular at what point there is no advantage to reducing the step size. For example, try approximating 4. How can you determine in advance the size of the error in such a calculation? Numerical methods only give approximate answers.

In order for the approximate answer to be useful, we must know how close to the true answer our approximate answer is. How do we get our hands on the error, and how do we estimate it? Any time decimal approximations are used, there is a rounding-off procedure involved. Round-off error is another critical phenomenon that we must examine.

This means that all numerical answers are rounded to eight places. The last equality may seem rather odd—in fact it appears to be false. But this is how the computer will reason: it rounds to eight decimal places! The same phenomenon will occur with the calculation of y2. The last example is to be taken quite seriously. If you are not aware of the dangers of round-off error, and why such errors occur, then you will be a very confused scientist indeed.

One way to address the problem is with double precision, which gives 16-place decimal accuracy. Another way is to use a symbol manipulation program like Mathematica or Maple, in which one can preset any number of decimal places of accuracy.

In the present book, we cannot go very deeply into the subject of round-off error. What is most feasible for us is to acknowledge that round-off error must be dealt with in advance, and we shall assume that we have set up our problem so that round-off error is negligible.

We shall instead concentrate our discussions on discretization error, which is a problem less contingent on artifacts of the computing environment and more central to the theory. One commonly used technique is to redo the calculation in double precision on a computer using one of the standard software packages; this would mean 16-place decimal accuracy instead of the usual 8-place accuracy.

If the answer seems to change substantially, then some round-off error is probably present in the regular-precision (8-place accuracy) calculation. Here y(xn) is the exact value at xn of the solution of the differential equation, and yn is the Euler approximation. Thus our error estimate takes the form M·h^2.

For an error is made at each step of the Euler method—or of any numerical method—so we must consider the total discretization error. This is just the aggregate of all the errors that occur at all steps of the approximation process. To get a rough estimate of this quantity, we notice that our Euler scheme iterates in n steps, from x0 to xn , in increments of length h.

Thus, for this problem, C is a universal constant. Of course the actual error is less than this somewhat crude bound. Math Note: In practice, we shall not be able to solve the differential equation being studied. That is, after all, why we are using numerical techniques and a computer.
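The bookkeeping behind this total-error claim can be written out; here M is a bound arising from the second derivative of the true solution:

```latex
\underbrace{|y(x_{k+1}) - y_{k+1}|}_{\text{error per step}} \le M h^2,
\qquad
n = \frac{x_n - x_0}{h} \ \text{steps},
\qquad
\text{total error} \;\lesssim\; n \cdot M h^2 = M\,(x_n - x_0)\,h = C\,h .
```

Thus the aggregate discretization error of the Euler method is of first order in the step size h.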

So how do we, in practice, determine when h is small enough to achieve the accuracy we desire? When the distance between two successive calculations is within the desired tolerance for the problem, then it is quite likely that they both are also within the desired tolerance of the exact solution. That amounts to averaging the values at the two endpoints.

This is the philosophy that we now employ. This generated the iterative scheme of the last section. What we can do instead is to replace y x1 by its approximate value as found by the Euler method. It is an example of a class of numerical techniques called predictor—corrector methods.

Thus, in particular, the total discretization error is proportional to h^2 (instead of h, as before), so we expect more accuracy for the same step size. First, the point (x1, z1) is predicted using the original Euler method; then this point is used to estimate the slope of the solution curve at x1. This result is then averaged with the original slope estimate at (x0, y0) to make a better prediction of the solution, namely (x1, y1). We continue this process and obtain the values shown in Table 7.

The aggregate error is about 1 percent, whereas with the former Euler method it was more than 13 percent. This is a substantial improvement.
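The predictor-corrector loop just compared against the Euler method can be sketched as follows (again using the hypothetical test problem y′ = y, y(0) = 1):

```python
import math

def improved_euler(f, x0, y0, h, n):
    """Heun's predictor-corrector scheme: predict with an Euler step,
    then correct by averaging the slopes at the two endpoints."""
    for _ in range(n):
        z = y0 + h * f(x0, y0)                           # predictor
        y0 = y0 + (h / 2) * (f(x0, y0) + f(x0 + h, z))   # corrector
        x0 = x0 + h
    return y0

# Test problem (our choice): y' = y, y(0) = 1; exact value y(1) = e.
y = improved_euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
```

With h = 0.1 this lands within about 0.2 percent of e, far better than the plain Euler step at the same cost of two slope evaluations per step.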

Of course a smaller step size results in even more dramatic improvement in accuracy. We have predicted that halving the step size will decrease the aggregate error by a factor of 4. These results bear out that prediction. In the next section we shall use a method of subdividing the intervals of our step sequence to obtain greater accuracy.

This results in the Runge-Kutta method. Check your calculus book (for instance [STE, p. ]). We cannot provide all the rigorous details of the derivation of the fourth-order Runge-Kutta method.

We instead provide an intuitive development. Just as in our earlier work, this algorithm can be applied to any number of mesh points in a natural way. This new analytic paradigm, the Runge-Kutta technique, is capable of giving extremely accurate results without the need for taking very small values of h (which would make the work computationally expensive).

The total truncation error is thus of the order of magnitude of h^4. And the amount of computation involved was absolutely minimal. Notice that our approximate value for y(1) is 3. The relative error is less than 0. If we cut the step size in half, to 0. Now the relative error is less than 0. But there is a tradeoff in that the calculations become very complicated rather quickly, and thus computationally expensive. In each case, compare your results to the exact solution and discuss how well or poorly the Euler method has worked.
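For reference, one classical form of the fourth-order Runge-Kutta step is sketched below (the test problem y′ = y is again our choice; with h = 0.1 the approximation to e is already accurate to several decimal places):

```python
import math

def rk4(f, x0, y0, h, n):
    """Classical fourth-order Runge-Kutta for y' = f(x, y):
    four slope samples per step, combined with weights 1-2-2-1."""
    for _ in range(n):
        k1 = f(x0, y0)
        k2 = f(x0 + h / 2, y0 + h * k1 / 2)
        k3 = f(x0 + h / 2, y0 + h * k2 / 2)
        k4 = f(x0 + h, y0 + h * k3)
        y0 = y0 + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        x0 = x0 + h
    return y0

# Test problem (our choice): y' = y, y(0) = 1; exact value y(1) = e.
y = rk4(lambda x, y: y, 0.0, 1.0, 0.1, 10)
```

Four slope evaluations per step buy a per-step error of order h^5, hence a total error of order h^4, which is the tradeoff the text describes.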

Compare your results to the exact solution. Compare your result to the exact solution and discuss how well or poorly the Euler method has worked. Compare your result to the exact solution. In an example below we shall see how a system occurs in the context of dynamical systems having several degrees of freedom. In another context, we shall see a system of differential equations used to model a predator-prey problem in the study of population ecology. For cultural reasons, and for general interest, we shall next turn to the n-body problem of classical mechanics.

It, too, can be modeled by a system of ordinary differential equations. Here G is a constant that depends on the force of gravity. This is the Newtonian model of the universe. It is thoroughly deterministic. Of course this mathematical model can be taken to model the motions of the planets in our solar system.

It is not known, for example, whether one of the planets (the Earth, let us say) will one day leave its orbit and go crashing into the sun. That is indeed the case; we treat them in this section. Otherwise it is nonhomogeneous. Thus it will not be surprising that the theory we are about to describe is similar to the theory of second-order linear equations.

We begin with a fundamental existence and uniqueness theorem. Let x0 and y0 be arbitrary numbers. The next theorem, familiar in form, will be the key to constructing more useful solutions. We therefore call the newly created solution a linear combination of the given solutions. Thus Theorem 8. As an instance, in Example 8. The next obvious question to settle is whether the collection of all linear combinations of two independent solutions of the homogeneous system is in fact the collection of all solutions (i.e., the general solution).

By Theorem 8. This will now reduce to a simple and familiar algebra problem.
