My weave and spiral code is an example of applying math in a colorful way! Namely, I apply radians, arc length, sine and cosine, and function transformations, and, like every program in VPython, I use vectors. A spiral in the xz-plane can be obtained by (bold denotes a vector):
θ = 2 π spiralRate t
r = rstart (1 + θ/π)
x = ( -r cos(θ), 0, r sin(θ) )
where the signs can be changed (and the cosine and sine can be swapped) to change the starting location and spiral direction, t is time, and the position vector x is given as (x,y,z). Note that the default for VPython is to have the x and y directions be the "usual" ones, and the +z direction is towards you out of the screen. Can you predict the starting location and direction of the above spiral? If spiralRate = 2 and t = 1/2, how many spirals have been done? At this time, what are θ and r?
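To see how these formulas might turn into running code, here is a minimal sketch (not my actual weave-and-spiral code, just an illustration) of a single sphere tracing the spiral. The names spiralRate and rstart match the formulas above; the radius, color, time step, and stopping time are placeholder choices.

from vpython import sphere, vector, rate, color
from math import sin, cos, pi

spiralRate = 2       # spirals per unit of time
rstart = 0.5         # starting radius
ball = sphere(radius=0.1, color=color.cyan, make_trail=True)

t = 0
dt = 0.01
while t < 2:
    rate(100)                         # at most 100 loop iterations per second
    theta = 2 * pi * spiralRate * t
    r = rstart * (1 + theta / pi)     # the radius grows as the angle grows
    ball.pos = vector(-r * cos(theta), 0, r * sin(theta))
    t = t + dt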
A weave traveling in the x direction is obtained by:
distance = speed t
phase = 2 π distance / cycleLength
x1 = ( xstart + distance, A sin(2 phase), 1.5 A sin(phase) )
x2 = ( xstart + distance, A sin(2 (phase + 2 π / 3)), 1.5 A sin(phase + 2 π / 3) )
x3 = ( xstart + distance, A sin(2 (phase + 4 π / 3)), 1.5 A sin(phase + 4 π / 3) )
where A is an amplitude, and cycleLength = 6.7 A seems to be a good choice. Change the 1.5 if you'd like. Can you figure out how to change the direction of the weave? Thinking about the path a single object takes, why is there a 2 multiplying the phase for motion in the y component? Why are there phase shifts added to the 2nd and 3rd objects?
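Here is a similar minimal sketch for the weave (again, not my actual code). A, cycleLength, speed, and xstart follow the formulas above; the colors, time step, and stopping time are placeholder choices.

from vpython import sphere, vector, rate, color
from math import sin, pi

A = 0.5
cycleLength = 6.7 * A
speed = 1
xstart = -5
balls = [sphere(radius=0.1, make_trail=True, color=c)
         for c in (color.red, color.green, color.blue)]

t = 0
dt = 0.01
while t < 10:
    rate(100)
    distance = speed * t
    phase = 2 * pi * distance / cycleLength
    for i, ball in enumerate(balls):
        p = phase + i * 2 * pi / 3         # 2nd and 3rd objects shifted by 2π/3 and 4π/3
        ball.pos = vector(xstart + distance,
                          A * sin(2 * p),    # y completes two oscillations per cycle
                          1.5 * A * sin(p))  # z completes one oscillation per cycle
    t = t + dt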
Eventually, you may wish to see my code for how both a spiral and weave can be combined!
My bounce code applies the following fundamental physics equations (bold denotes a vector):
a = ΣF / m
average a = Δv / Δt
average v = Δx / Δt
However, since this is a computer taking tiny time steps, Δt, I can write these tiny time intervals as dt to denote that I no longer have to think about average values (the average over a tiny amount of time gives the actual instantaneous value at that time!). So the equations can be rearranged to look like:
a = ΣF / m
v = v0 + a dt
x = x0 + v dt
where x0 and v0 are from the previous time step (Δx = x - x0 and Δv = v - v0), and the forces are calculated using x0 and v0. For simulating any 3D motion, we can use these equations in this order to calculate the position x at each new moment in time! You just first need to be able to calculate the forces! Why do you get more accurate results by making dt as small as possible? What would happen if you set dt = 0.00000000000001?
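To make the order of the updates concrete, here is a minimal sketch of such a loop for a single ball with gravity as the only force (this is not the actual bounce code; the mass, starting height, and dt are placeholder values):

from vpython import sphere, vector, rate

m = 1.0                              # mass (kg)
g = 9.8                              # magnitude of free-fall acceleration (m/s^2)
ball = sphere(pos=vector(0, 10, 0), radius=0.2)
v = vector(0, 0, 0)
dt = 0.001

while ball.pos.y > 0:
    rate(1000)
    F = vector(0, -m * g, 0)         # sum of all forces (only gravity here)
    a = F / m                        # a = ΣF / m
    v = v + a * dt                   # v = v0 + a dt
    ball.pos = ball.pos + v * dt     # x = x0 + v dt  (uses the new v: symplectic Euler)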
My gravity code uses the above equations for a, v, and x in 2D with multiple objects! This simulation is more difficult due to the nature of inverse-square laws such as Newton's law of universal gravitation. The force gets strong very quickly during close approaches, so adaptive step sizes (having more time steps when objects are closer) are needed. However, even adaptive stepping has problems because the very simple numerical method described above (called the symplectic Euler method) is like trying to push a shopping cart in a straight line by pushing with just one finger of one hand (you are not allowed to move your finger!).
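Here is a rough sketch (not my gravity code) of the two ingredients just mentioned: the inverse-square force from Newton's law of universal gravitation and a crude adaptive step that shrinks dt during close approaches. The masses, speeds, radii, and the "close" threshold are placeholder numbers, and the star is held fixed for simplicity.

from vpython import sphere, vector, rate, mag

G = 6.674e-11                                # gravitational constant (SI units)

def gravity_on(body, other):
    # Newton's law of universal gravitation: F = G m1 m2 / r², pointing from body toward other
    r_vec = other.pos - body.pos
    return G * body.mass * other.mass * r_vec.hat / mag(r_vec)**2

star = sphere(pos=vector(0, 0, 0), radius=7e9)     # radii exaggerated for visibility
star.mass = 2e30
planet = sphere(pos=vector(1.5e11, 0, 0), radius=3e9, make_trail=True)
planet.mass = 6e24
planet.v = vector(0, 29780, 0)

dt_far = 3600.0                              # one-hour steps when far away
while True:
    rate(500)
    # crude adaptive stepping: take 100x smaller steps inside an arbitrary "close" radius
    dt = dt_far / 100 if mag(planet.pos - star.pos) < 2e10 else dt_far
    a = gravity_on(planet, star) / planet.mass
    planet.v = planet.v + a * dt             # symplectic Euler, as above
    planet.pos = planet.pos + planet.v * dt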
To handle this type of thing more realistically (unlike how I did it), we could calculate the results of high-speed close encounters without directly simulating them, much like how the bounce code handled the elastic bounce by simply changing the sign of the vertical velocity. In more complicated situations, such as this three-body gravity simulation when all three objects are very close, the by-hand calculations cannot be done (which is usually why you're simulating it in the first place), so numerical methods such as Runge-Kutta methods must be used. To do this, we must first convert the 6 second-order differential equations (two for each object, since we are in 2D) into 12 first-order differential equations. The Runge-Kutta method makes my Runge-Kutta gravity code look very nice if you select adaptive stepping! Unlike my first gravity code, the Runge-Kutta code involves calculus!
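To make the conversion to first-order equations concrete, here is a generic sketch of a single fourth-order Runge-Kutta (RK4) step. It is not my Runge-Kutta gravity code, and derivs is an assumed placeholder function.

import numpy as np

def rk4_step(state, dt, derivs):
    # One classical RK4 step. For the three-body problem, state would hold the
    # 12 numbers (x, y, vx, vy for each body), and derivs(state) would return
    # d(state)/dt: the velocities for the position entries and the gravitational
    # accelerations for the velocity entries. That pairing is exactly the
    # conversion of the 6 second-order equations into 12 first-order ones.
    k1 = derivs(state)
    k2 = derivs(state + 0.5 * dt * k1)
    k3 = derivs(state + 0.5 * dt * k2)
    k4 = derivs(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def sho_derivs(s):
    # tiny stand-in example: x'' = -x rewritten as dx/dt = v, dv/dt = -x
    return np.array([s[1], -s[0]])

state = rk4_step(np.array([1.0, 0.0]), 0.1, sho_derivs)   # calling pattern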
If you would like, you may try to reduce the step size of the symplectic-Euler code until good results are obtained, and you will see that the necessary step sizes are so small that the code runs incredibly slowly.
We just saw that the Runge-Kutta method is needed for simulations where accuracy is important. So can we now simulate everything? Theoretically, yes. However, simulating a continuous material (a fluid or a solid) by tracking every atom would take an unrealistically powerful computer. If you really enjoy making simulations, the field of CFD (computational fluid dynamics) may interest you. Fluids (or elastic/plastic solids) are simulated by calculating pressure, temperature, density, velocity, etc. at various points located throughout the material. Just like our simulations here, these values are calculated at a large number of snapshots in time. These techniques are quite advanced and powerful, and they let you simulate many other things as well: electromagnetic fields according to Maxwell's equations, quantum wave functions, and the curved spacetime metrics of Einstein's general relativity.
Compared to fluids, we can much more easily simulate rigid objects such as a double pendulum! To do this, simple numerical methods should work just fine (though I will use the Runge-Kutta method anyway). However, the physics needs to be more sophisticated because we are not simulating every atom: Lagrangian mechanics will be used as shown in the previous link. Here is my double pendulum code. This is a fascinating system because you cannot calculate the solution by hand (analytically)! The motion is chaotic, so it must be simulated.
As for the two lines of code with lots of calculations, I used a computer algebra system called Mathematica to do the algebra and generate those two lines of code, so that I wouldn't have to constantly wonder whether I had done the algebra correctly.
As a final demonstration of the sort of things that can be simulated, I simulate a freely bouncing object in my rigid-object bounce code. I technically use the Runge-Kutta method for the free-fall motion, though this only involves a slight change to one line of code: the Euler-method position update
y = y0 + v0y dt
becomes
y = y0 + v0y dt - ½ g dt²
where g is the magnitude of free-fall acceleration. Without using the Runge-Kutta method, energy is not conserved, so the object may keep bouncing higher and higher!
Note that Euler's method is the same as the symplectic Euler method I described above, except that its position update uses the old velocity v0 instead of the newly updated v:
x = x0 + v0 dt
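Putting this together, one time step of the free-fall part might look like the following sketch (the numbers are illustrative placeholders, and this is not copied from the actual bounce code):

# placeholder values for one illustrative step
y0, v0y = 5.0, 0.0     # previous height (m) and vertical velocity (m/s)
g, dt = 9.8, 0.01      # free-fall acceleration magnitude and time step

y = y0 + v0y * dt - 0.5 * g * dt**2   # position update with the extra ½ g dt² term
vy = v0y - g * dt                     # velocity update (gravity points downward)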
To do the elastic bounce, no numerical method is used. Instead, we need physics! Use conservation of kinetic energy to get
vx² + vy² + (I/m) ω² = (vx + Δvx)² + (vy + Δvy)² + (I/m) (ω + Δω)²
where vx and vy are the components of the velocity of the center of mass (CM), I is the moment of inertia about the CM, Δvx and Δvy are the total changes in velocity throughout the bounce, and
Δω = (m/I) (rx Δvy - ry Δvx)
is the total change in angular velocity throughout the bounce (in radians per time unit), where r is the vector from the CM to the corner that is bouncing off the ground. If the floor has no friction, then Δvx = 0, and solving for Δvy is not difficult (a small sketch of that case follows the next equation). For friction, I chose to implement the "rolling without slipping" condition
vx + Δvx = ry (ω + Δω)
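Returning to the frictionless case for a moment: substituting Δvx = 0 and Δω = (m/I) rx Δvy into the kinetic-energy equation and keeping the nonzero root gives Δvy = -2 (vy + ω rx) / (1 + m rx²/I). In code, that case might look like the following sketch (the numbers are illustrative placeholders, not taken from my code):

# placeholder state just before the corner hits the floor
m, I = 1.0, 0.4            # mass and moment of inertia about the CM
vy, omega = -3.0, 2.0      # CM vertical velocity and angular velocity
rx = 0.3                   # x-component of the vector from the CM to the corner

corner_vy = vy + omega * rx                  # vertical velocity of the bouncing corner
dvy = -2 * corner_vy / (1 + m * rx**2 / I)   # nonzero root of the energy equation
vy = vy + dvy                                # new CM vertical velocity
omega = omega + (m / I) * rx * dvy           # Δω = (m/I)(rx Δvy - ry Δvx) with Δvx = 0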
As for the two lines of code with lots of calculations, I used a computer algebra system called Mathematica to do the with-friction algebra for me so that I wouldn't have to constantly wonder if I did the algebra correctly. It was still tricky because there were two solutions to the equations: one for when rx < 0 and the other for when rx > 0. Because the solutions differed only by a sign, I could use |rx| / rx to use the same code for both solutions. I also had to be careful to realize that
√(ry²) / ry = |ry| / ry = -1
because ry is negative (the bouncing corner is below the CM).