[Finite Element Method] Newton-Raphson

Updated: 2024-10-27 07:23:47



Newton-Raphson Method

Linear vs Nonlinear Analysis:

  • At this point, we can conduct a linear analysis without any problems:

$$\begin{aligned} &\int \sum_{i,j=1}^3\sigma_{ij}\,\varepsilon_{ij}^*\,dv=\int t_n\cdot u^*\,ds + \int \rho b \cdot u^*\,dv \\ \Rightarrow\ &\underbrace{\int_e[B]^T[C][B]\,dx}_{k_e}\,u_e=\int_{\partial e}[N]^T t_n\, ds + \int_e[N]^T\rho b\, dv \end{aligned}$$

  • Two Assumptions:

(1) Small Deformations
(2) Linear Elastic Materials

  • This allowed us to solve our system using a system of linear equations:

$$[K][u] = [F]$$

  • Unfortunately, the world is a cruel place and the majority of realistic scenarios require a nonlinear analysis:

Stress-Strain Relationship:

$$\int \sum_{i,j=1}^3\sigma_{ij}\,\varepsilon_{ij}^*\,dv$$

$t_n$ and $\rho b$ are solution dependent:

$$\int t_n\cdot u^*\,ds + \int \rho b \cdot u^*\,dv$$

Types of Nonlinearities

There are many types of nonlinearities:

1. Geometric Nonlinearity

  • Buckling
  • Large deformation of the material

2. Material Nonlinearity

  • Plasticity
  • Damage

3. Kinematic Nonlinearity

  • Contact

Newton-Raphson Method (Single Variable)

To solve nonlinear equations we typically employ the Newton-Raphson method (also referred to as Newton’s method). To see how this method works, let’s examine a single variable scenario:

If we have a continuous function $f$, we can perform a Taylor series expansion:

$$f(x^{(i+1)})\approx f(x^{(i)})+\frac{\partial f(x^{(i)})}{\partial x}\left(x^{(i+1)}-x^{(i)}\right)+\frac{1}{2!}\frac{\partial^2 f(x^{(i)})}{\partial x^2}\left(x^{(i+1)}-x^{(i)}\right)^2+\cdots$$

Neglecting higher-order terms:

$$f(x^{(i+1)})\approx f(x^{(i)})+\frac{\partial f(x^{(i)})}{\partial x}\left(x^{(i+1)}-x^{(i)}\right)$$

Rearranging the equation and defining the increment $\Delta x$ and tangent stiffness $K_t$:

$$\begin{aligned} x^{(i+1)}-x^{(i)}&=\frac{f(x^{(i+1)}) - f(x^{(i)})}{f'(x^{(i)})} \\ \Delta x &= x^{(i+1)}-x^{(i)} \\ K_t &= f'(x^{(i)}) \end{aligned}$$

Substituting the desired value $F$ for $f(x^{(i+1)})$ and simplifying:

$$\Delta x =\frac{F- f(x^{(i)})}{K_t} = K_t^{-1}\left(F-f(x^{(i)})\right)$$

Now let’s examine the parameters of the equation:

  • $F$ = desired value
  • $x^{(i)}$ = initial value of $x$
  • $f(x^{(i)})$ = initial value of the function
  • $K_t$ = slope of the tangent line
  • $\Delta x$ = increment in $x$ (the unknown we solve for)

In FEA:

  • $F - f(x^{(i)})$ = difference between the applied forces $(F)$ and the internal forces $(f(x^{(i)}))$
  • $\Delta x$ = displacement increment

Example 1 - Single Variable Newton-Raphson Method

If $f(x) = \sqrt{x}$, find $x$ such that $f(x) = 3$.

Function Information:

$$f(x) = \sqrt{x} \Rightarrow K_t = \frac{\partial f}{\partial x} = \frac{1}{2\sqrt{x}}$$

$$\Delta x = K_t^{-1}\left(F-f(x^{(i)})\right)$$

Sample calculation ($1^{st}$ iteration):

$$\Delta x = K_t^{-1}\left(F-f(x^{(i)})\right) = (0.5)^{-1}(3-1) = 4$$

| $x^{(i)}$ | $f(x^{(i)})$ | $K_t$ | $\Delta x$ | $x^{(i+1)}$ |
|---|---|---|---|---|
| 1 | 1 | 0.5 | 4 | 5 |
| 5 | 2.24 | 0.22 | 3.42 | 8.42 |
| 8.42 | 2.90 | 0.17 | 0.57 | 8.99 |
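As a sanity check, the iteration above is easy to script. A minimal sketch (in Python rather than the Mathematica used later in these notes), using the target value $F = 3$ and the starting guess $x = 1$ from the example:

```python
import math

F = 3.0              # desired value
f = lambda x: math.sqrt(x)
x = 1.0              # initial guess

for i in range(6):
    Kt = 1 / (2 * math.sqrt(x))   # tangent stiffness K_t = f'(x)
    dx = (F - f(x)) / Kt          # increment: dx = Kt^{-1} (F - f(x))
    x = x + dx                    # x^{(i+1)} = x^{(i)} + dx
    print(f"x = {x:.2f}")
```

The first three updates reproduce the table rows (5, 8.42, 8.99), and the iterate rapidly approaches the exact answer $x = 9$.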

Newton-Raphson Method (Multi-Variable)

Unfortunately, most scenarios (especially in finite element analysis) involve systems of multi-variable equations:

$$\begin{aligned} f_1(x_1, x_2, \ldots, x_n) &= 0 \\ f_2(x_1, x_2, \ldots, x_n) &= 0 \\ &\ \ \vdots \\ f_n(x_1, x_2, \ldots, x_n) &= 0 \end{aligned}$$

We can perform the Taylor series expansion for each equation:

$$\begin{aligned} f_1(x_1^{(i+1)}, x_2^{(i+1)}, \ldots, x_n^{(i+1)})\approx\ & f_1(x_1^{(i)}, x_2^{(i)}, \ldots, x_n^{(i)}) + \left.\frac{\partial f_1}{\partial x_1}\right|_{x^{(i)}}\left(x_1^{(i+1)}-x_1^{(i)}\right) \\ &+ \left.\frac{\partial f_1}{\partial x_2}\right|_{x^{(i)}}\left(x_2^{(i+1)}-x_2^{(i)}\right) + \cdots +\left.\frac{\partial f_1}{\partial x_n}\right|_{x^{(i)}}\left(x_n^{(i+1)}-x_n^{(i)}\right) \end{aligned}$$

Writing all the equations in matrix form:

$$\begin{bmatrix} f_1(x^{(i+1)}) \\ f_2(x^{(i+1)}) \\ \vdots \\ f_n(x^{(i+1)}) \end{bmatrix} = \begin{bmatrix} f_1(x^{(i)}) \\ f_2(x^{(i)}) \\ \vdots \\ f_n(x^{(i)}) \end{bmatrix} + \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix} \begin{bmatrix} x_1^{(i+1)}-x_1^{(i)}\\ x_2^{(i+1)}-x_2^{(i)} \\ \vdots \\ x_n^{(i+1)}-x_n^{(i)} \end{bmatrix}$$

We can move the vector containing the value of the functions at the initial guess to the other side to obtain the following:

$$\begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix} \begin{bmatrix} x_1^{(i+1)}-x_1^{(i)}\\ x_2^{(i+1)}-x_2^{(i)} \\ \vdots \\ x_n^{(i+1)}-x_n^{(i)} \end{bmatrix} = \begin{bmatrix} f_1(x^{(i+1)}) \\ f_2(x^{(i+1)}) \\ \vdots \\ f_n(x^{(i+1)}) \end{bmatrix} - \begin{bmatrix} f_1(x^{(i)}) \\ f_2(x^{(i)}) \\ \vdots \\ f_n(x^{(i)}) \end{bmatrix}$$

or, more simply:

$$[K_t][\Delta x] = [F] - [f(x^{(i)})]$$

Moving $[K_t]$ to the other side:

$$[\Delta x] = [K_t]^{-1}\left([F]-[f(x^{(i)})]\right)$$

This is the same as the single-variable case, but now with vectors and matrices.

Convergence Checks:

With multiple variables it is difficult to determine whether the method has converged. To make things easier, the norms of the vectors are used:

1. Force (and Moment) Convergence:

  • Force convergence is achieved when the norm of the residual $f(x^{(i)}) - F$ divided by the norm of $F$ is less than the specified tolerance $e_R$:

$$\frac{\|f(x^{(i)})-F\|}{\|F\|} \leq e_R$$

2. Displacement Convergence:

  • Displacement convergence is achieved when the norm of the increment $\Delta x$ divided by the norm of $x^{(i)}$ is less than the specified tolerance $e_u$:

$$\frac{\|\Delta x\|}{\|x^{(i)}\|} \leq e_u$$
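Both checks are cheap to compute. A small helper (a Python sketch; the function name and the percentage convention are my own) that returns the two measures:

```python
import math

def norm(v):
    """Euclidean norm of a vector given as a list."""
    return math.sqrt(sum(c * c for c in v))

def convergence_measures(f_x, F, dx, x):
    """Return (e_R, e_u) in percent: the force residual check
    and the displacement increment check."""
    e_R = norm([fi - Fi for fi, Fi in zip(f_x, F)]) / norm(F) * 100
    e_u = norm(dx) / norm(x) * 100
    return e_R, e_u

# First iteration of Example 2 below:
# f = [23, 20], F = [20, 1], dx = [0.102, -0.385], x = [1, 1]
e_R, e_u = convergence_measures([23, 20], [20, 1], [0.102, -0.385], [1, 1])
```

Plugging in the first iteration of Example 2 gives $e_R \approx 96.1\%$ and $e_u \approx 28.2\%$, matching (up to rounding) the first row of its iteration table.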

Example 2 - Multi-Variable Newton-Raphson Method:

If $f(x_1, x_2) = [20x_1^4x_2+3x_2^3,\ 20x_1^2x_2^3]$, find $\{x_1, x_2\}$ such that $f(x_1, x_2) = [20, 1]$.

Function Information:

$$\begin{aligned} f_1(x_1, x_2) &= 20x_1^4x_2+3x_2^3 \\ f_2(x_1, x_2) &= 20x_1^2x_2^3 \\ \Rightarrow [K_t] &= \begin{bmatrix} \frac{\partial f_1}{\partial x_1} &\frac{\partial f_1}{\partial x_2} \\ \frac{\partial f_2}{\partial x_1} &\frac{\partial f_2}{\partial x_2} \end{bmatrix} = \begin{bmatrix} 80x_1^3x_2& 20x_1^4+9x_2^2 \\ 40x_1x_2^3& 60x_1^2x_2^2 \end{bmatrix} \end{aligned}$$

Sample Calculations ($1^{st}$ Iteration):

$$\begin{aligned} f_1(1, 1) &= 20(1)^4(1) + 3(1)^3 = 23 \\ f_2(1, 1) &= 20(1)^2(1)^3 = 20 \\ [K_t] &= \begin{bmatrix} 80(1)^3(1) & 20(1)^4+9(1)^2 \\ 40(1)(1)^3 & 60(1)^2(1)^2 \end{bmatrix} = \begin{bmatrix} 80 & 29 \\ 40 & 60 \end{bmatrix} \\ [\Delta x] &= [K_t]^{-1}([F]-[f]) =\begin{bmatrix} 80 & 29 \\ 40 & 60 \end{bmatrix}^{-1}\left( \begin{bmatrix} 20 \\ 1 \end{bmatrix} - \begin{bmatrix} 23 \\ 20 \end{bmatrix}\right) = \begin{bmatrix} 0.102 \\ -0.385 \end{bmatrix} \end{aligned}$$

| $x_1^{(i)}$ | $x_2^{(i)}$ | $f_1(x^{(i)})$ | $f_2(x^{(i)})$ | $\Delta x_1$ | $\Delta x_2$ | $e_R$ (%) | $e_u$ (%) |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 23 | 20 | 0.102 | -0.385 | 96.1 | 28.1 |
| 1.102 | 0.615 | 18.838 | 5.650 | 0.125 | -0.215 | 23.9 | 19.7 |
| 1.227 | 0.400 | 18.325 | 1.927 | 0.096 | -0.085 | 9.6 | 9.9 |
| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
| 1.348 | 0.302 | 19.979 | 1.001 | 0.001 | -0.0001 | 0.1 | 0.04 |

Converged in 6 iterations.
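The whole iteration is compact enough to script. A Python sketch of Example 2, solving the 2×2 tangent system directly with Cramer's rule (the solver and variable names here are my own):

```python
def f(x1, x2):
    # f_1 and f_2 from Example 2
    return [20*x1**4*x2 + 3*x2**3, 20*x1**2*x2**3]

def Kt(x1, x2):
    # Analytic tangent (Jacobian) matrix
    return [[80*x1**3*x2, 20*x1**4 + 9*x2**2],
            [40*x1*x2**3, 60*x1**2*x2**2]]

def solve2x2(J, r):
    # Cramer's rule for the tangent system J * dx = r
    det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
    return [(r[0]*J[1][1] - J[0][1]*r[1]) / det,
            (J[0][0]*r[1] - J[1][0]*r[0]) / det]

F = [20.0, 1.0]   # desired values
x = [1.0, 1.0]    # initial guess
for i in range(20):
    fx = f(*x)
    r = [F[0] - fx[0], F[1] - fx[1]]   # residual F - f(x)
    dx = solve2x2(Kt(*x), r)           # dx = Kt^{-1} (F - f(x))
    x = [x[0] + dx[0], x[1] + dx[1]]
```

The first increment comes out as $\Delta x \approx [0.102, -0.385]$, matching the sample calculation, and the iterates settle near $(x_1, x_2) \approx (1.35, 0.30)$.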

Other Methods:

Over the years the Newton-Raphson method has been modified to help improve stability and reduce computation time:

1. Modified Newton-Raphson Method

  • $[K_t]$ is recalculated only at selected iterations (say, every $5^{th}$ iteration) or kept constant during the entire analysis
  • Requires additional iterations to converge, but reduces the cost of each iteration

2. Secant Method (Quasi-Newton)

  • Approximates $[K_t]$ after the first iteration:

$$[K_t]_i = \frac{f(x^{(i)})-f(x^{(i-1)})}{x^{(i)}-x^{(i-1)}}$$

  • Don’t need to calculate derivatives!
  • Provides a good convergence rate while reducing the computational cost
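A quick sketch of the secant update on the same $f(x)=\sqrt{x}$, $F = 3$ problem from Example 1 (the two starting points are my assumption; any two distinct guesses work):

```python
import math

F = 3.0
f = lambda x: math.sqrt(x)

x_prev, x_curr = 1.0, 5.0   # secant needs two starting points
for i in range(20):
    if abs(x_curr - x_prev) < 1e-12:
        break                # increments vanished: converged
    # Secant approximation of the tangent -- no derivative evaluated
    Kt = (f(x_curr) - f(x_prev)) / (x_curr - x_prev)
    x_prev, x_curr = x_curr, x_curr + (F - f(x_curr)) / Kt
```

No derivative of $f$ is computed anywhere, yet the iterate still converges to $x = 9$ in a handful of steps.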

Example 3 - Single Variable Modified Newton-Raphson Method:

If $f(x) = \sqrt{x}$, find $x$ such that $f(x) = 3$.

Function Information:

$$f(x) = \sqrt{x} \Rightarrow K_t = \frac{\partial f}{\partial x} = \frac{1}{2\sqrt{x}}$$

$$\Delta x = K_t^{-1}\left(F-f(x^{(i)})\right)$$

| $x^{(i)}$ | $f(x^{(i)})$ | $K_t$ | $\Delta x$ | $x^{(i+1)}$ |
|---|---|---|---|---|
| 1 | 1 | 0.5 | 4 | 5 |
| 5 | 2.24 | 0.5 | 1.52 | 6.52 |
| 6.52 | 2.55 | 0.5 | 0.89 | 7.41 |
| 7.41 | 2.72 | 0.5 | 0.55 | 7.97 |

Slower Convergence!!
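Example 3's frozen-tangent iteration is a one-line change from Example 1's update rule: compute $K_t$ once and never refresh it. A Python sketch:

```python
import math

F = 3.0
f = lambda x: math.sqrt(x)
x = 1.0
Kt = 1 / (2 * math.sqrt(x))   # tangent evaluated once at x = 1, then frozen at 0.5

history = []
for i in range(50):
    x = x + (F - f(x)) / Kt   # same update as before, but Kt is never recomputed
    history.append(x)
```

The early iterates (5, 6.53, 7.42, ...) track the table above, with small last-digit differences coming from the table's rounded intermediates; the sequence still reaches $x = 9$, but convergence is now linear rather than quadratic.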

Material Nonlinearity Example


Wolfram Mathematica:

    (* Material Nonlinearity Example *)
    Clear["Global`*"]

    (* Parameters *)
    L = 1.0; A = 1; p = X1^2;

    (* Newton-Raphson Parameters *)
    Xi = 1;
    ErrorLimit = 0.5;
    MaxIter = 1;

    (* Stress-Strain Relationship *)
    s = 10*u'[X1] + 100000*u'[X1]^3;

    (* Exact Solution (numerical) *)
    DE = D[s*A, X1] + p;
    sol = NDSolve[{DE == 0, u[0] == 0, u[L] == 0}, u[X1], X1];
    uExact = u[X1] /. sol[[1]];
    Plot[uExact, {X1, 0, L}]

    (* Rayleigh-Ritz Approximation: Approximation Function *)
    uApprox = a0 + a1*X1 + a2*X1^2;

    (* Essential Boundary Conditions *)
    BC1 = uApprox /. {X1 -> 0};
    BC2 = uApprox /. {X1 -> L};
    sol1 = Solve[{BC1 == 0, BC2 == 0}, {a0, a1}];
    {a0, a1} = {a0, a1} /. sol1[[1]];

    (* Total Internal Strain Energy *)
    S = 10*ee + 100000*ee^3;
    SED = Integrate[S, {ee, 0, e}];       (* 5 e^2 + 25000 e^4, strain energy density *)
    e = D[uApprox, X1];
    TSE = Integrate[SED, {X1, 0, L}];     (* 1.66667 a2^2 + 5000 a2^4, total strain energy *)

    (* Work *)
    W = Integrate[p*uApprox, {X1, 0, L}]; (* -0.05 a2 *)

    (* Potential Energy *)
    PE = TSE - W;                         (* 0.05 a2 + 1.66667 a2^2 + 5000 a2^4 *)

    (* Newton-Raphson Method: solve D[PE, a2] == 0 *)
    Func = D[PE, a2];   (* f(x) *)
    Kt = D[Func, a2];   (* f'(x) *)

    For[i = 1, i <= MaxIter, i++,
     (* Calculate Function and Kt at Current Iteration *)
     FuncIter = Func /. {a2 -> Xi};
     KtIter = Kt /. {a2 -> Xi};
     (* Solve for Delta X *)
     Delta = -FuncIter/KtIter;
     (* Find Xi for Next Iteration *)
     XInit = Xi;
     Xi = Xi + Delta;
     (* Break Loop Once Convergence Occurs *)
     Error = (Delta/XInit)*100;
     If[Abs[Error] <= ErrorLimit,
      Print["Analysis Converged!"];
      Print["  Coefficient a2 = ", Xi];
      Print["  Error = ", Error, " (%)"];
      a2 = Xi; Conv = 1; Break[]];
     (* Track Results *)
     Print["  Iteration: ", i];
     Print["  Coefficient a2 = ", Xi];
     Print["  Error = ", Error, " (%)"];
     (* Error Message if Max Number of Iterations Reached *)
     If[i == MaxIter, Print["Max Number of Iterations Reached"]]]

    Plot[{uExact, uApprox}, {X1, 0, L}]
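For readers without Mathematica, the core of the notebook reduces to a one-variable Newton iteration on $dPE/da_2 = 0$. A Python sketch, taking the potential-energy coefficients $PE = 0.05\,a_2 + 1.66667\,a_2^2 + 5000\,a_2^4$ on trust from the derivation above:

```python
def func(a2):
    # f(x) = dPE/da2 for PE = 0.05 a2 + (5/3) a2^2 + 5000 a2^4
    return 0.05 + (10/3)*a2 + 20000*a2**3

def kt(a2):
    # f'(x) = d^2 PE / da2^2
    return 10/3 + 60000*a2**2

xi = 1.0             # initial guess, as in the notebook
error_limit = 0.5    # convergence tolerance in percent
for i in range(100):
    delta = -func(xi) / kt(xi)        # Newton increment
    x_init, xi = xi, xi + delta
    error = abs(delta / x_init) * 100
    if error <= error_limit:          # relative-increment convergence check
        break
```

The loop settles at a small negative coefficient ($a_2 \approx -0.01$), which makes the stationary point of the potential energy for the quadratic Ritz approximation.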


Published: 2023-11-15 10:46:09
Original link: https://www.elefans.com/category/jswz/34/1598371.html