Statistics and Probability Letters 131 (2017) 78–86

Piecewise linear process with renewal starting points

Nikita Ratanov
Universidad del Rosario, Cl. 12c, No. 4-69, Bogotá, Colombia

Article history: Received 9 June 2017; Received in revised form 19 August 2017; Accepted 21 August 2017; Available online 30 August 2017.

Abstract. This paper concerns a Markovian piecewise linear process, based on a continuous-time Markov chain with a finite state space. The process describes the movement of a particle that takes a new linear trend starting from a new random point (with state-dependent distribution) after each trend switch. The distribution of the particle's position is derived in a closed form. In some special cases the distributions of the level passage times are provided explicitly.

MSC: primary 60J27; secondary 60J75; 60K99.

Keywords: Continuous-time homogeneous Markov chain; Piecewise linear process; Renewal process; First passage time.

1. Introduction and main definitions

Let ε = ε(t), t ≥ 0, be a continuous-time homogeneous Markov chain with the finite state space E. Let Φ(t) = e^{tΛ} = (Φ_{ij}(t))_{i,j∈E} be the transition semigroup with the infinitesimal generator Λ. Consider the sequence of switching times, 0 < T_1 < T_2 < ⋯ < T_n < ⋯, T_0 = 0, and denote by N(t) the number of switchings that occurred up to time t, N(t) = max{n | T_n ≤ t}.

We study the piecewise linear random process

X(t) = x_{N(t)} + c_{ε(T_{N(t)})}·(t − T_{N(t)}),  t ≥ 0.   (1.1)

Here c_i, i ∈ E, are deterministic constants, and x_n, n ≥ 0, are independent random variables with distributions determined by the current state ε(T_n). The process X(t), t ≥ 0, describes the location of a particle that moves linearly and takes a new starting point after each trend switch.

Process X resembles the well-studied Markovian growth-collapse processes, see e.g. Boxma et al. (2006). In contrast with (1.1), the growth-collapse processes presume a constant trend with additive or multiplicative down jumps. Such models occur in insurance mathematics and related fields, see Asmussen (2003, XIV-5) or Rolski et al. (1999, Chapters 5 and 11), and in production/inventory models studied by Shanthikumar and Sumita (1983), among others. On the other hand, (1.1) is related to the jump-telegraph process

T(t) := ∫_0^t c_{ε(u)} du + ∫_0^t Y_{ε(u−)} dN_u,

which is a piecewise linear process jumping from the current position, see e.g. Di Crescenzo et al. (2013) and López and Ratanov (2012); piecewise linear processes with jumps are presented by Ratanov (2014). Jump-telegraph processes are widely applied in various fields, including financial market modelling, see Kolesnik and Ratanov (2013).

In contrast to the jump-telegraph process, X(t), t ≥ 0, (1.1), completely updates the starting points. Possible applications of this type of processes could be in financial modelling and insurance mathematics.

The paper is structured as follows. In Section 2 we give explicit formulae for transition densities, expectations and limit distributions of X(t) (as t → ∞). Section 3 is devoted to a detailed analysis of the level passage times in the case of the two-state underlying Markov process ε.
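For readers who want to experiment with the model, the construction (1.1) is easy to simulate. The following minimal Python sketch (not part of the original paper) generates one path of X(t) for a two-state chain; the function names, the normal renewal distributions and all parameter values are illustrative assumptions only.

import numpy as np

def sample_path(T, lam, c, sample_g, i0=0, rng=None):
    """Simulate the skeleton of X(t), (1.1), on [0, T] for a two-state chain.
    lam[i]      : switching intensity out of state i,
    c[i]        : linear trend in state i,
    sample_g[i] : callable returning a renewal starting point for state i.
    Returns switching times T_n, states eps(T_n) and starting points x_n."""
    rng = np.random.default_rng() if rng is None else rng
    times, states, starts = [0.0], [i0], [sample_g[i0](rng)]
    t, i = 0.0, i0
    while True:
        t += rng.exponential(1.0 / lam[i])     # next switching time T_{n+1}
        if t > T:
            break
        i = 1 - i                              # two-state ("flip-flop") switch
        times.append(t)
        states.append(i)
        starts.append(sample_g[i](rng))        # renewal starting point x_{n+1}
    return np.array(times), np.array(states), np.array(starts)

def X_at(t, times, states, starts, c):
    """Evaluate X(t) = x_{N(t)} + c_{eps(T_{N(t)})} (t - T_{N(t)})."""
    n = np.searchsorted(times, t, side='right') - 1
    return starts[n] + c[states[n]] * (t - times[n])

# Illustrative parameters: lambda = (1, 2), c = (1, -0.5), g_0 = N(0, 1), g_1 = N(1, 1).
lam, c = [1.0, 2.0], [1.0, -0.5]
g = [lambda r: r.normal(0.0, 1.0), lambda r: r.normal(1.0, 1.0)]
times, states, starts = sample_path(10.0, lam, c, g)
print(X_at(5.0, times, states, starts, c))

A path produced this way is piecewise linear with slope c_{ε(T_n)} on [T_n, T_{n+1}) and a jump to the renewed point x_{n+1} at each switching time, exactly as described by (1.1).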
2. Distribution

Define the transition probabilities p(t, dy | x) and p(t, dy) by the entries

p_i(t, dy | x) = P{X(t) ∈ dy | ε(0) = i, X(0) = x},  −∞ < x, y < ∞,   (2.1)

and

p_i(t, dy) = ∫_{−∞}^{∞} p_i(t, dy | x) g_i(dx),  −∞ < y < ∞, i ∈ E,   (2.2)

where g_i(dx) is the distribution of the initial starting point at the initial state i = ε(0).

By conditioning on the first switching one can obtain the following integral equations:

p_i(t, dy | x) = e^{−λ_i t} δ_{x+c_i t}(dy) + Σ_{j∈E, j≠i} λ_{ij} ∫_0^t e^{−λ_i τ} p_j(t − τ, dy) dτ,  i ∈ E,   (2.3)

where δ_a(dy) denotes the δ-measure localised at the point a and λ_i = Σ_{j∈E, j≠i} λ_{ij}. Further, by (2.2),

p_i(t, dy) = e^{−λ_i t} g_i^{t c_i}(dy) + Σ_{j∈E, j≠i} λ_{ij} ∫_0^t e^{−λ_i τ} p_j(t − τ, dy) dτ,  i ∈ E.   (2.4)

Here g_i^a(dy) is the displacement of the measure g_i: for any integrable test-function φ,

∫_{−∞}^{∞} φ(y) g_i^a(dy) = ∫_{−∞}^{∞} φ(y + a) g_i(dy).

Systems (2.3) and (2.4) can be solved explicitly. Note that when all trends vanish, c_i = 0, i ∈ E, the distribution of X(t) = x_{N(t)} is given by

P{X(t) ∈ dx | ε(0) = i} = Σ_{j∈E} Φ_{ij}(t) g_j(dx),

where Φ_{ij}(t) are the entries of the transition semigroup e^{tΛ} introduced in Section 1.

Theorem 2.1. The transition probabilities p(t, dy | x) and p(t, dy), (2.1)–(2.2), have the form

p_i(t, dy | x) = e^{−λ_i t} δ_{x+t c_i}(dy) + Σ_{j∈E} ∫_0^t Φ_{ij}(t − τ) [ Σ_{k∈E, k≠j} λ_{jk} e^{−λ_k τ} g_k^{τ c_k}(dy) ] dτ,   (2.5)

p_i(t, dy) = e^{−λ_i t} g_i^{t c_i}(dy) + Σ_{j∈E} ∫_0^t Φ_{ij}(t − τ) [ Σ_{k∈E, k≠j} λ_{jk} e^{−λ_k τ} g_k^{τ c_k}(dy) ] dτ,  i ∈ E.   (2.6)

Let

M_i(t) = ∫_{−∞}^{∞} E{X_i(t) | ε(0) = i, X_i(0) = x} g_i(dx),  i ∈ E.

Then,

M_i(t) = e^{−λ_i t}(m_i + t c_i) + Σ_{j∈E} ∫_0^t Φ_{ij}(t − τ) [ Σ_{k∈E, k≠j} λ_{jk} e^{−λ_k τ}(m_k + τ c_k) ] dτ,   (2.7)

where m_i = ∫_{−∞}^{∞} x g_i(dx), i ∈ E.

Proof. By conditioning on the last switching time and using the time-reversal property, see e.g. Brémaud (1999), one can derive (2.5): the first term corresponds to the case of no switchings, and the other summands describe the movement of the particle which starts at time 0 from the state i ∈ E and makes the last switching at time t − τ. Eq. (2.6) follows from (2.2). Formula (2.7) follows from (2.6). □

Example 2.1. Let c_i = 1, λ_i = λ, i ∈ E, and let the random points x_n be identically distributed with distribution g. In this case (2.6) becomes

p(t, dy) = e^{−λ t} g^t(dy) + ∫_0^t λ e^{−λ τ} g^τ(dy) dτ.   (2.8)

After applying Fubini's theorem one can see that the distribution (2.8) of X(t) is

p(t, dy) = e^{−λ t} g^t(dy) + λ e^{−λ y} [ ∫_{y−t}^{y} e^{λ x} g(dx) ] dy.   (2.9)

In particular, if g(dy) = δ_0(dy), that is x_n ≡ 0, n ≥ 0, then by (2.9)

p(t, dy) = e^{−λ t} δ_t(dy) + λ e^{−λ y} 1_{{0<y<t}} dy.

Example 2.2. Let ε be the two-state Markov chain, E = {0, 1}, with switching intensities λ_0, λ_1, trends c_0, c_1 and renewal distributions g_0, g_1 with means m_0, m_1; set 2λ := λ_0 + λ_1. In the symmetric case, λ_0 = λ_1 = λ > 0, the formula for the mean vector M(t) = (M_0(t), M_1(t))^⊤ looks simpler:

M(t) = m + ((1 − e^{−λ t})/λ)·(c_1, c_0)^⊤ + ((1 − e^{−2λ t})/(2λ))·(∆c − λ∆m, λ∆m − ∆c)^⊤,   (2.14)

where m = (m_0, m_1)^⊤, ∆c = c_0 − c_1, ∆m = m_0 − m_1.

To prove (2.14), first note that in this case formula (2.13) becomes

M(t) = e^{−λ t}(m + t c) + λ² ∫_0^t e^{−λ τ} A(t − τ)(m + τ c) dτ,

where c = (c_0, c_1)^⊤ and

A(t) = (1/(2λ)) [[1 − e^{−2λ t}, 1 + e^{−2λ t}], [1 + e^{−2λ t}, 1 − e^{−2λ t}]],

such that

A(0) = (1/λ) [[0, 1], [1, 0]]   and   dA(t)/dt = e^{−2λ t} [[1, −1], [−1, 1]].

Therefore,

dM(t)/dt = e^{−λ t} (c_0 − λ(∆m + t∆c), c_1 + λ(∆m + t∆c))^⊤ + λ² e^{−2λ t} ∫_0^t e^{λ τ} (∆m + τ∆c, −∆m − τ∆c)^⊤ dτ.

By integrating we have

dM(t)/dt = e^{−λ t}(c_0, c_1)^⊤ − λ e^{−2λ t}(∆m, −∆m)^⊤ − (e^{−λ t} − e^{−2λ t})(∆c, −∆c)^⊤ = e^{−λ t}(c_1, c_0)^⊤ + e^{−2λ t}(∆c − λ∆m, λ∆m − ∆c)^⊤,

which gives (2.14).
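The closed-form mean (2.14) can be checked numerically. The Python sketch below (illustrative only, not from the paper) compares a crude Monte Carlo estimate of M(t) = (M_0(t), M_1(t))^⊤ in the symmetric case against (2.14); the Gaussian choice of g_0, g_1 and all parameter values are assumptions made for this example.

import numpy as np

rng = np.random.default_rng(1)
lam = 1.0                                    # symmetric case: lambda_0 = lambda_1 = lambda
c = np.array([1.0, -0.5])                    # trends c_0, c_1 (illustrative)
m = np.array([0.3, -0.2])                    # means of g_0, g_1 (illustrative)
t_end, n_paths = 2.0, 100_000

def simulate_X(i0):
    """One realisation of X(t_end) started in state i0 with X(0) ~ g_{i0}."""
    t, i = 0.0, i0
    x = rng.normal(m[i], 1.0)                # g_i taken as N(m_i, 1) for illustration
    while True:
        dt = rng.exponential(1.0 / lam)      # exponential holding time
        if t + dt > t_end:
            return x + c[i] * (t_end - t)
        t += dt
        i = 1 - i                            # switch of the two-state chain
        x = rng.normal(m[i], 1.0)            # renewal starting point after the switch

mc = np.array([np.mean([simulate_X(i) for _ in range(n_paths)]) for i in (0, 1)])

# Formula (2.14)
dc, dm = c[0] - c[1], m[0] - m[1]
M = (m + (1 - np.exp(-lam * t_end)) / lam * np.array([c[1], c[0]])
       + (1 - np.exp(-2 * lam * t_end)) / (2 * lam) * np.array([dc - lam * dm, lam * dm - dc]))
print(mc)   # Monte Carlo estimate of (M_0(t), M_1(t))
print(M)    # closed form (2.14); the two should agree up to Monte Carlo error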
From Theorem 2.1 one can obtain the limiting distribution, as t → ∞. Let ε = ε(t) be an ergodic regular homogeneous Markov chain, see Brémaud (1999), that is,

lim_{t→∞} Φ_{ij}(t) = π_j,  i, j ∈ E.   (2.15)

Theorem 2.2. Process X(t) converges in distribution, as t → ∞. The limit distribution is

p̄(dy) = Σ_{j,k∈E, k≠j} π_j λ_{jk} ψ_k(λ_k, dy),   (2.16)

where ψ_k(s, ·) = ∫_0^∞ e^{−s τ} g_k^{τ c_k}(·) dτ is the time-Laplace transform of the distribution g_k^{t c_k}(·). The limit of the expectations is given by

lim_{t→∞} M_i(t) = Σ_{j∈E} π_j Σ_{k∈E, k≠j} (λ_{jk}/λ_k)(m_k + c_k/λ_k),  i ∈ E.   (2.17)

Proof. Applying (2.15) to (2.6), by Lebesgue's dominated convergence theorem we obtain

lim_{t→∞} p_i(t, dy) = Σ_{j∈E} π_j Σ_{k∈E, k≠j} λ_{jk} ∫_0^∞ e^{−λ_k τ} g_k^{τ c_k}(dy) dτ,   (2.18)

which gives (2.16). By (2.7),

lim_{t→∞} M_i(t) = Σ_{j,k∈E, j≠k} π_j λ_{jk} ∫_0^∞ e^{−λ_k τ}(m_k + τ c_k) dτ,

which gives (2.17). □

In detail, formula (2.16) reads

p̄(dy) = Σ_{j,k∈E, k≠j} (π_j λ_{jk}/|c_k|) e^{−λ_k y/c_k} [ ∫_{−∞}^{y} e^{λ_k z/c_k} g_k(dz)·1_{{c_k>0}} + ∫_{y}^{∞} e^{λ_k z/c_k} g_k(dz)·1_{{c_k<0}} ] dy + Σ_{j,k∈E, k≠j} π_j (λ_{jk}/λ_k) g_k(dy)·1_{{c_k=0}}.   (2.19)

To prove this, note that for any test-function φ applied to the integral term of (2.18) we have

∫_{−∞}^{∞} φ(y) ∫_0^{∞} e^{−λ_k τ} g_k^{τ c_k}(dy) dτ = ∫_{−∞}^{∞} [ ∫_0^{∞} φ(y + c_k τ) e^{−λ_k τ} dτ ] g_k(dy).

By applying Fubini's theorem again, we obtain

(1/c_k) ∫_{−∞}^{∞} φ(y) e^{−λ_k y/c_k} [ ∫_{−∞}^{y} e^{λ_k z/c_k} g_k(dz) ] dy,  if c_k > 0;

(1/|c_k|) ∫_{−∞}^{∞} φ(y) e^{−λ_k y/c_k} [ ∫_{y}^{∞} e^{λ_k z/c_k} g_k(dz) ] dy,  if c_k < 0;

and

(1/λ_k) ∫_{−∞}^{∞} φ(y) g_k(dy),  if c_k = 0.

Example 2.2 (continued). In the case of Example 2.2 the limiting distribution (2.19) can be simplified to

p̄(dy) = (λ_0 λ_1/(2λ)) [ψ_0(λ_0, dy) + ψ_1(λ_1, dy)],   (2.20)

and formula (2.17) becomes

lim_{t→∞} M_0(t) = lim_{t→∞} M_1(t) = (λ_0 λ_1/(2λ)) [ m_0/λ_0 + m_1/λ_1 + c_0/λ_0² + c_1/λ_1² ].

3. First passage time

Let T_i^x = T_i^x(y) = inf{t > 0 : X_i(t) ≥ y} be the first passage time through the fixed level y starting from the initial point x, X_i(0) = x < y, ε(0) = i ∈ E. To avoid overshooting we assume that the distributions of x_n are supported in (−∞, y). From the viewpoint of possible applications this means that the level y is visible for the moving particle.

If all velocities are nonpositive, c_j ≤ 0, j ∈ E, then T_j^x(y) = ∞, a.s. Note that in the case of a positive velocity c_i the distribution of T_i^x(y) has an atom: if c_i > 0, then

P{T_i^x(y) = (y − x)/c_i} = exp(−λ_i(y − x)/c_i).

Let f_i(x, y, t) be the density function of T_i^x(y),

f_i(x, y, t) := P{T_i^x(y) ∈ dt}/dt,  t > 0, i ∈ E.

Conditioning on the first switching we obtain, for i ∈ E,

f_i(x, y, t) = e^{−λ_i ξ_i} δ(t − ξ_i) + Σ_{j∈E, j≠i} λ_{ij} ∫_0^{t∧ξ_i} e^{−λ_i τ} [ ∫_{−∞}^{∞} f_j(z, y, t − τ) g_j(dz) ] dτ,  if c_i > 0,

f_i(x, y, t) = Σ_{j∈E, j≠i} λ_{ij} ∫_0^{t} e^{−λ_i τ} [ ∫_{−∞}^{∞} f_j(z, y, t − τ) g_j(dz) ] dτ,  if c_i ≤ 0,   (3.1)

where δ(·) is Dirac's δ-function and ξ_i = (y − x)/c_i.
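Equation (3.1) also suggests a straightforward Monte Carlo scheme for T_i^x(y): run the motion leg by leg and stop when the level y is crossed during an upward leg. A minimal Python sketch follows (not from the paper; the two-state case, the uniform renewal distributions and all parameter values are illustrative assumptions); it also reproduces the atom P{T_i^x(y) = (y − x)/c_i} = exp(−λ_i(y − x)/c_i).

import numpy as np

def passage_time(x, y, i0, lam, c, sample_g, rng, t_max=1e3):
    """Monte Carlo draw of T_i^x(y) for the two-state case: run the motion
    until level y is reached (or t_max is exceeded, returned as inf)."""
    t, i, pos = 0.0, i0, x
    while t < t_max:
        hold = rng.exponential(1.0 / lam[i])
        if c[i] > 0 and pos + c[i] * hold >= y:      # level reached before the switch
            return t + (y - pos) / c[i]
        t += hold
        i = 1 - i
        pos = sample_g[i](rng)                        # renewal starting point (< y)
    return np.inf

# Illustrative parameters: lambda = (1, 2), c = (1, 0.5), y = 2, x = 0,
# renewal starting points uniform on (y - 1, y), i.e. supported in (-inf, y).
rng = np.random.default_rng(3)
lam, c, x, y = [1.0, 2.0], [1.0, 0.5], 0.0, 2.0
g = [lambda r: y - r.uniform(0.0, 1.0), lambda r: y - r.uniform(0.0, 1.0)]
draws = np.array([passage_time(x, y, 0, lam, c, g, rng) for _ in range(100_000)])
xi0 = (y - x) / c[0]
print("P{T = xi_0} ~", np.mean(np.isclose(draws, xi0)),
      "vs exp(-lam_0*xi_0) =", np.exp(-lam[0] * xi0))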
In what follows we study in detail the ''flip-flop'' case of the two-state Markov process ε, ε(t) ∈ E = {0, 1}.

3.1. Positive velocities

Let both trends be positive, c_0 ≥ c_1 > 0, and let the alternating distributions g_0, g_1 of the renewal starting points satisfy the condition g_i((−∞, y)) = 1, i ∈ {0, 1}.

Theorem 3.1. The first passage times T_0^x and T_1^x have proper distributions,

P{T_0^x(y) < ∞} = P{T_1^x(y) < ∞} = 1,  x < y.   (3.2)

Let the renewal starting points be exponentially distributed over the half-line (−∞, y),

g_i(dz) = a_i exp(−a_i(y − z)) 1_{{z<y}} dz,  a_i > 0,

and let μ_i = λ_i + c_i a_i, i ∈ {0, 1}. Then the density functions f_0 and f_1 are given by

f_i(x, y, t) = e^{−λ_i ξ_i} δ(t − ξ_i) + λ_i [ A_{1−i} u(t∧ξ_i, λ_i − s_0^*) e^{−s_0^* t} + B_{1−i} u(t∧ξ_i, λ_i − s_1^*) e^{−s_1^* t} ],  i ∈ {0, 1}.   (3.3)

Here

ξ_i := (y − x)/c_i,  s_0^* = (μ_0 + μ_1 − √D)/2,  s_1^* = (μ_0 + μ_1 + √D)/2,   (3.4)

D = (μ_0 − μ_1)² + 4λ_0λ_1;

the constants A_i and B_i are given by

A_0 = (μ_0μ_1 − λ_0λ_1 − (μ_0 − λ_0)s_0^*)/√D,  B_0 = (λ_0λ_1 − μ_0μ_1 + (μ_0 − λ_0)s_1^*)/√D,   (3.5)

A_1 = (μ_0μ_1 − λ_0λ_1 − (μ_1 − λ_1)s_0^*)/√D,  B_1 = (λ_0λ_1 − μ_0μ_1 + (μ_1 − λ_1)s_1^*)/√D,   (3.6)

and

u(ξ, β) := ∫_0^ξ e^{−β τ} dτ = (1 − e^{−β ξ})/β if β ≠ 0, and u(ξ, β) = ξ if β = 0.   (3.7)

Proof. For the two-state case, by (3.1) the following coupled equations hold: for t > 0,

f_0(x, y, t) = e^{−λ_0 ξ_0} δ(t − ξ_0) + λ_0 ∫_0^{t∧ξ_0} e^{−λ_0 τ} h_1(y, t − τ) dτ,

f_1(x, y, t) = e^{−λ_1 ξ_1} δ(t − ξ_1) + λ_1 ∫_0^{t∧ξ_1} e^{−λ_1 τ} h_0(y, t − τ) dτ,   (3.8)

where h_i(y, t) = [G_i f_i](y, t) := ∫_{−∞}^{∞} f_i(z, y, t) g_i(dz), i ∈ E = {0, 1}. Let

α_i(y, s) = [L_{t→s} h_i](y, s) = [L_{t→s}(G_i f_i)](y, s),  s ≥ 0,

be the time-Laplace transform of h_i, i ∈ {0, 1}. By applying the time-Laplace transformation L_{t→s} to system (3.8) we have

[L_{t→s} f_0](x, y, s) = exp(−(λ_0 + s)ξ_0) + λ_0 u(ξ_0, λ_0 + s) α_1(y, s),

[L_{t→s} f_1](x, y, s) = exp(−(λ_1 + s)ξ_1) + λ_1 u(ξ_1, λ_1 + s) α_0(y, s),   (3.9)

since L_{t→s} δ(t − ξ) = exp(−s ξ) and, by Fubini's theorem,

L_{t→s} [ ∫_0^{t∧ξ} e^{−λ τ} h(y, t − τ) dτ ] = ∫_0^∞ e^{−s t} dt ∫_0^{t∧ξ} e^{−λ τ} h(y, t − τ) dτ = ∫_0^ξ e^{−λ τ} dτ ∫_τ^∞ e^{−s t} h(y, t − τ) dt = u(ξ, λ + s) α(y, s).

Here the function u = u(ξ, β) is defined by (3.7). Note that L_{t→s} commutes with G_0 and G_1. Applying the operator G_0 to the first equation of (3.9) and the operator G_1 to the second one, we obtain the linear algebraic system

α_0(y, s) = ĝ_0(β_0(s)) + (λ_0/(λ_0 + s)) [1 − ĝ_0(β_0(s))] α_1(y, s),

α_1(y, s) = ĝ_1(β_1(s)) + (λ_1/(λ_1 + s)) [1 − ĝ_1(β_1(s))] α_0(y, s),

where β_0(s) = (λ_0 + s)/c_0, β_1(s) = (λ_1 + s)/c_1 and

ĝ_i(β) = ∫_{−∞}^{∞} e^{−β(y − x)} g_i(dx),  β > 0, i ∈ {0, 1}.   (3.10)

The integral in (3.10) converges by the condition g_i([y, ∞)) = 0. The solution of the algebraic system is given by

α_0(y, s) = Q(s)^{−1} { ĝ_0(β_0(s)) + (λ_0/(λ_0 + s)) [1 − ĝ_0(β_0(s))] ĝ_1(β_1(s)) },

α_1(y, s) = Q(s)^{−1} { ĝ_1(β_1(s)) + (λ_1/(λ_1 + s)) [1 − ĝ_1(β_1(s))] ĝ_0(β_0(s)) },   (3.11)

where

Q(s) = 1 − (λ_0λ_1/((λ_0 + s)(λ_1 + s))) (1 − ĝ_0((λ_0 + s)/c_0)) (1 − ĝ_1((λ_1 + s)/c_1)),  s > 0.

By (3.11) it is easy to see that α_0(y, 0) = α_1(y, 0) = 1. Therefore, due to (3.9), the first passage times T_0^x(y) and T_1^x(y) have proper distributions,

P{T_0^x(y) < ∞} = P{T_1^x(y) < ∞} = 1,  ∀y > x.

The functions α_0(y, s) and α_1(y, s) can be obtained explicitly whenever the renewal starting points are exponentially distributed over the half-line (−∞, y),

g_i(dz) = a_i exp(−a_i(y − z)) 1_{{z<y}} dz,

where a_i > 0 are constants, that is, ĝ_i(β) = a_i/(a_i + β). In this case we have

1 − ĝ_i(β) |_{β=(λ_i+s)/c_i} = (β/(a_i + β)) |_{β=(λ_i+s)/c_i} = (λ_i + s)/(μ_i + s),  i ∈ {0, 1},

where μ_0 = a_0c_0 + λ_0, μ_1 = a_1c_1 + λ_1. Therefore

Q(s) = 1 − λ_0λ_1/((μ_0 + s)(μ_1 + s)) = q(s)/((μ_0 + s)(μ_1 + s)),  s > 0,

and (3.11) becomes

α_0(s) = (a_0c_0 s + μ_1 a_0c_0 + λ_0 a_1c_1)/q(s),

α_1(s) = (a_1c_1 s + μ_0 a_1c_1 + λ_1 a_0c_0)/q(s),   (3.12)

where q(s) = (μ_0 + s)(μ_1 + s) − λ_0λ_1, s > 0. Let s_0^* and s_1^* be defined by q(s) = (s + s_0^*)(s + s_1^*). It is easy to see that s_0^*, s_1^* > 0, and

α_i(s) = A_i/(s + s_0^*) + B_i/(s + s_1^*),  i ∈ {0, 1},

where s_0^*, s_1^* are given by (3.4) and the coefficients A_i, B_i, i ∈ {0, 1}, are defined by (3.5)–(3.6). Making the inverse Laplace transformation we obtain the functions h_0 and h_1,

h_0(t) = A_0 e^{−s_0^* t} + B_0 e^{−s_1^* t},  h_1(t) = A_1 e^{−s_0^* t} + B_1 e^{−s_1^* t},  t > 0.   (3.13)

By (3.8) and (3.13) the density functions of T_0^x(y) and T_1^x(y) are given by (3.3) (see Fig. 1). □

Fig. 1. Density functions f_0(0, 2, t) (absolutely continuous parts) with c_0 = 1, c_1 = 0.5: (a) a_0 = a_1 = 1 and λ_0 = λ_1 = 1, 0.5, 0.2 (from top to bottom); (b) a_0 = a_1 = 1, 2, 5 and λ_0 = λ_1 = 1, 0.5, 0.2, respectively (from top to bottom).
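As a numerical sanity check of Theorem 3.1 (not part of the original paper), the Python sketch below evaluates the absolutely continuous part of (3.3) together with the atom at ξ_i and verifies that the total mass equals one, in agreement with (3.2); all parameter values are illustrative assumptions.

import numpy as np
from scipy.integrate import quad

# Illustrative parameters for Theorem 3.1 (both trends positive).
lam0, lam1, c0, c1, a0, a1 = 1.0, 0.5, 1.0, 0.5, 1.0, 1.0
x, y = 0.0, 2.0
mu0, mu1 = lam0 + c0 * a0, lam1 + c1 * a1
sqD = np.sqrt((mu0 - mu1) ** 2 + 4 * lam0 * lam1)            # sqrt(D), (3.4)
s0, s1 = (mu0 + mu1 - sqD) / 2, (mu0 + mu1 + sqD) / 2
A = [(mu0 * mu1 - lam0 * lam1 - (mu0 - lam0) * s0) / sqD,    # A_0, A_1, (3.5)-(3.6)
     (mu0 * mu1 - lam0 * lam1 - (mu1 - lam1) * s0) / sqD]
B = [(lam0 * lam1 - mu0 * mu1 + (mu0 - lam0) * s1) / sqD,    # B_0, B_1
     (lam0 * lam1 - mu0 * mu1 + (mu1 - lam1) * s1) / sqD]

def u(xi, beta):                                             # (3.7)
    return xi if beta == 0 else (1 - np.exp(-beta * xi)) / beta

def f_ac(t, i):
    """Absolutely continuous part of f_i(x, y, t) in (3.3)."""
    lam, xi = (lam0, (y - x) / c0) if i == 0 else (lam1, (y - x) / c1)
    j = 1 - i
    return lam * (A[j] * u(min(t, xi), lam - s0) * np.exp(-s0 * t)
                  + B[j] * u(min(t, xi), lam - s1) * np.exp(-s1 * t))

for i, (lam, c) in enumerate([(lam0, c0), (lam1, c1)]):
    xi = (y - x) / c
    atom = np.exp(-lam * xi)                                 # P{T_i = xi_i}, the atom
    mass = atom + quad(f_ac, 0, xi, args=(i,))[0] + quad(f_ac, xi, np.inf, args=(i,))[0]
    print(i, mass)                                           # both should be ~ 1, cf. (3.2)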
Remark 3.1. Formulae (3.8) give the solution of the problem for arbitrary distributions g_0 and g_1: the density functions f_0 and f_1 are expressed by means of the inverse Laplace transforms h_0 and h_1 of α_0 and α_1, (3.11).

Example 3.1. Let c_0 = c_1 = 1, λ_0 = λ_1 = λ and a_0 = a_1 = a (see Example 2.1). Then, by (3.3), we have

f_0 = f_1 = e^{−λ(y − x)} δ(t − (y − x)) + λ a · u(t∧(y − x), λ − a) e^{−a t}.

Remark 3.2. The mean of the first passage time T^x(y) for a Markovian growth-collapse process has been studied in detail, see Löpker and Stadje (2011). Meanwhile, formula (3.3) presents the explicit representation of the distribution (in the special case of two-state processes with exponentially distributed renewal starting points).

3.2. Velocities of opposite signs

Let the trends be of opposite signs, c = c_0 > 0 ≥ c_1, and let the distributions g_0, g_1 satisfy the condition g_i((−∞, y)) = 1, i ∈ {0, 1}. Since c_1 ≤ 0, the distributions of T_0^x(y) and T_1^x(y), x < y, do not depend on g_1 and c_1. Further, T_1^x(y) does not depend on the starting point x. Let f_0(x, y, t) and f_1(y, t) be the density functions of the distributions of T_0^x(y) and T_1^x(y), respectively.

Theorem 3.2. The first passage times T_0^x and T_1^x have proper distributions, (3.2). Let the renewed starting point of the upward movement be exponentially distributed,

g_0(dx) ≡ g(dx) = a exp(−a(y − x)) 1_{{x<y}} dx,  a > 0.

Then the density functions f_0 and f_1 are given by

f_0(x, y, t) = e^{−λ_0 ξ} δ(t − ξ) + (λ_0 λ_1 a c/√D) [ u(t∧ξ, λ_0 − s_0^*) e^{−s_0^* t} − u(t∧ξ, λ_0 − s_1^*) e^{−s_1^* t} ],  t > 0,   (3.14)

f_1(y, t) = (λ_1 a c/√D) (e^{−s_0^* t} − e^{−s_1^* t}),  t > 0,   (3.15)

where ξ = (y − x)/c and

s_0^* = (λ_0 + λ_1 + a c − √D)/2,  s_1^* = (λ_0 + λ_1 + a c + √D)/2,  0 < s_0^* < s_1^*;   (3.16)

D = (λ_0 + λ_1 + a c)² − 4 a c λ_1 = (λ_1 − a c)² + λ_0(λ_0 + 2λ_1 + 2 a c),

and the function u = u(ξ, β) is defined by (3.7).

By the definitions (3.16) it turns out that λ_0 < s_1^* always holds. Further, the equality λ_0 = s_0^* holds only if λ_0 = a c λ_1/(a c + λ_1).

Proof. System (3.1) becomes

f_0(x, y, t) = e^{−λ_0 ξ} δ(t − ξ) + λ_0 ∫_0^{t∧ξ} e^{−λ_0 τ} f_1(y, t − τ) dτ,

f_1(y, t) = λ_1 ∫_0^t e^{−λ_1 τ} h(y, t − τ) dτ,  t > 0.   (3.17)

Here ξ = (y − x)/c, c = c_0, and h(y, t) = ∫_{−∞}^{∞} f_0(z, y, t) g(dz), where g = g_0 corresponds to the distribution of the starting point of the upward motion.

Passing to the time-Laplace transform, we get

[L_{t→s} f_0](x, y, s) = exp(−(λ_0 + s)ξ) + λ_0 u(ξ, λ_0 + s) [L_{t→s} f_1](y, s),

[L_{t→s} f_1](y, s) = (λ_1/(λ_1 + s)) α(y, s),   (3.18)

where

α(y, s) = [L_{t→s} h](y, s) = ∫_0^∞ e^{−s t} [ ∫_{−∞}^{∞} f_0(x, y, t) g(dx) ] dt

is the time-Laplace transform of h and the function u = u(ξ, β) is defined by (3.7). Similarly to the proof of Theorem 3.1 one can obtain

α(y, s) = ĝ((λ_0 + s)/c) + (λ_0/(λ_0 + s)) (λ_1/(λ_1 + s)) (1 − ĝ((λ_0 + s)/c)) α(y, s),

where ĝ(β) = ∫_{−∞}^{∞} e^{−β(y − x)} g(dx). Therefore

α(y, s) = ĝ((λ_0 + s)/c) / ( 1 − (λ_0λ_1/((λ_0 + s)(λ_1 + s))) (1 − ĝ((λ_0 + s)/c)) ).   (3.19)

If the distribution of the renewed starting point is exponential, g(dx) = a exp(−a(y − x)) 1_{{x<y}} dx, then ĝ(β) = a/(a + β), and (3.19) becomes

α(y, s) = a c (λ_1 + s) / ((λ_1 + s)(λ_0 + a c + s) − λ_0λ_1) = a c (λ_1 + s) / ((s + s_0^*)(s + s_1^*)),

with s_0^*, s_1^* defined by (3.16). Hence, by (3.18), [L_{t→s} f_1](y, s) = λ_1 a c/((s + s_0^*)(s + s_1^*)), and the inverse Laplace transformation gives (3.15). Formula (3.14) follows from the first equation of (3.17), cf. the proof of Theorem 3.1. □

Fig. 2. Density functions f_1(2, t), (3.15), c_0 = c > 0 > c_1: (a) c = 1, a = 1, λ_0 = 1 and λ_1 = 2 (curve 1), λ_1 = 1 (curve 2), λ_1 = 0.5 (curve 3); (b) λ_0 = λ_1 = 1 and c = 1, a = 1 (curve 1); c = 2, a = 0.5 (curve 2); c = 5, a = 0.2 (curve 3).

Remark 3.3. The distribution f̄ = f̄(t) of the times between consecutive passages through the fixed level y satisfies the equation

f̄(t) = λ_0 ∫_0^t e^{−λ_0 τ} f_1(t − τ) dτ.

By (3.15),

f̄(t) = (λ_0λ_1 a c/√D) ∫_0^t e^{−λ_0 τ} [ e^{−s_0^*(t − τ)} − e^{−s_1^*(t − τ)} ] dτ = (λ_0λ_1 a c/√D) [ u(t, λ_0 − s_0^*) e^{−s_0^* t} − u(t, λ_0 − s_1^*) e^{−s_1^* t} ].
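Formula (3.15) can likewise be checked against a direct simulation of T_1(y) in the setting of Theorem 3.2. The sketch below (not from the paper; the helper draw_T1 and all parameter values are illustrative assumptions) compares the mean computed from (3.15) with a Monte Carlo estimate.

import numpy as np

# Illustrative parameters for Theorem 3.2: c_0 = c > 0 >= c_1 (c_1 plays no role for T_1).
lam0, lam1, c, a, y = 1.0, 2.0, 1.0, 1.0, 2.0
D = (lam0 + lam1 + a * c) ** 2 - 4 * a * c * lam1            # (3.16)
s0 = (lam0 + lam1 + a * c - np.sqrt(D)) / 2
s1 = (lam0 + lam1 + a * c + np.sqrt(D)) / 2
mean_formula = lam1 * a * c / np.sqrt(D) * (1 / s0**2 - 1 / s1**2)   # E[T_1] from (3.15)

rng = np.random.default_rng(5)
def draw_T1():
    """Monte Carlo draw of T_1(y): downward (or zero) motion in state 1,
    upward motion with trend c from a renewed point z ~ g_0 in state 0."""
    t = 0.0
    while True:
        t += rng.exponential(1.0 / lam1)          # sojourn in state 1 (never reaches y)
        z = y - rng.exponential(1.0 / a)          # renewal starting point, g_0
        hold = rng.exponential(1.0 / lam0)        # sojourn in state 0
        if z + c * hold >= y:                     # level y reached before the next switch
            return t + (y - z) / c
        t += hold                                 # switch back to state 1 and repeat

mc_mean = np.mean([draw_T1() for _ in range(100_000)])
print(mc_mean, mean_formula)      # should agree up to Monte Carlo error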
4. Conclusion

The distributions of a piecewise linear process which, after each trend switch, starts from a new random point are described completely. In the case of a two-state underlying process the distributions of the level passage times are presented explicitly. Applications of these processes will be presented elsewhere.

Acknowledgements

I would like to thank three anonymous referees for valuable remarks that considerably improved the first version of the paper.

References

Asmussen, S., 2003. Applied Probability and Queues. Springer, New York.
Boxma, O., Perry, D., Stadje, W., Zacks, S., 2006. A Markovian growth-collapse model. Adv. Appl. Probab. 36, 221–243.
Brémaud, P., 1999. Markov Chains: Gibbs Fields, Monte-Carlo Simulation, and Queues. Springer.
Di Crescenzo, A., Iuliano, A., Martinucci, B., Zacks, S., 2013. Generalized telegraph process with random jumps. J. Appl. Probab. 50, 1–14.
Kolesnik, A.D., Ratanov, N., 2013. Telegraph Processes and Option Pricing. Springer, Heidelberg.
López, O., Ratanov, N., 2012. Option pricing under jump-telegraph model with random jumps. J. Appl. Probab. 49, 838–849.
Löpker, A., Stadje, W., 2011. Hitting times and running maximum of Markovian growth-collapse processes. J. Appl. Probab. 48, 295–312.
Ratanov, N., 2014. On piecewise linear processes. Statist. Probab. Lett. 90, 60–67.
Rolski, T., Schmidli, H., Schmidt, V., Teugels, J., 1999. Stochastic Processes for Insurance and Finance. Wiley.
Shanthikumar, J., Sumita, U., 1983. General shock models associated with correlated renewal sequences. J. Appl. Probab. 20, 600–614.