Reza Ismail Hasan

Student ID. 155 11 076

Course: KL4220 Subsea Pipeline

Lecturers: Prof. Ir. Ricky Lukman Tawekal, MSE, Ph.D.
Eko Charnius Ilman, ST., MT.

Ocean Engineering Program, Institut Teknologi Bandung

http://www.ocean.itb.ac.id/en/

Wednesday, February 17, 2016


RISK-BASED CASING DESIGN

Oilfield tubulars have been traditionally designed using a deterministic working stress design (WSD) approach, which is based on multipliers called safety factors (SFs). The primary role of a safety factor is to account for uncertainties in the design variables and parameters, primarily the load effect and the strength or resistance of the structure. While based on experience, these factors give no indication of the probability of failure of a given structure, as they do not explicitly consider the randomness of the design variables and parameters. Moreover, the safety factors tend to be rather conservative, and most limits of design are established using failure criteria based on elastic theory.
Reliability-based techniques have been formally applied to the design of load-bearing structures in several disciplines. However, their application to the design of oilfield tubulars is relatively new. Two different reliability-based approaches have been considered: the more fundamental quantitative risk assessment (QRA) approach and the more easily applied load and resistance factor design (LRFD) format. Comparison of SF to the estimated design reliability offers a reliability-based interpretation of WSD and gives insight into the design reliabilities implicit in WSD.

BACKGROUND

In all design procedures, a primary goal is to ensure that the total load effect of the applied loads is lower than the strength of the tubular to withstand that particular load effect, given the uncertainty in the estimate of the load effect, resistance, and their relationship.
The load effect is related to the resistance of the tubular by means of a relationship, often known as the “failure criterion,” which is thought to represent the limit of the tubular under that particular load effect. Thus, the failure criterion is specific to the response of the tubular to that load effect. Three conventional design procedures are considered: WSD, QRA, and LRFD.
Clearly, the relationship between the load effect and resistance and the means of ensuring safety or reliability are different in each of these procedures. In what follows, zi are the variables and parameters (such as tension, pressure, diameter, yield stress, etc.) that determine the load effect and resistance; Q is the total load effect; and R is the total resistance in response to the load effect, Q.

WORKING STRESS DESIGN (WSD)

WSD is the conventional casing design procedure, that is, the familiar deterministic approach to the design of oilfield tubulars. In WSD, the load effect is separated from the resistance by means of an arbitrary multiplier, the SF. The estimated load effect is often the worst-case load, Qw, based on deterministic design values for the parameters, zi, that determine the load effect. The estimated resistance is often the minimum resistance, Rmin, based on deterministic design values for the parameters that determine the resistance. The design values chosen in formulating the resistance are such that the resulting resistance is a minimum. In most cases, the limits of design are established using failure criteria based on elastic theory. In some cases, such as collapse, WSD employs empirical failure criteria. In general, the design procedure can be represented by the relationship
Qw ≤ Rmin / SF ………………..(1)
The ratio Rmin/SF is called the safe working stress of the structure, hence, the name of the procedure.
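As a minimal illustration of Eq. 1, the Python sketch below runs a WSD-style burst check. The pipe dimensions, yield stress, worst-case pressure, and required SF are assumed values, and Barlow's thin-wall formula stands in for whichever elastic failure criterion a real design would use.

```python
# Minimal WSD burst check (illustrative sketch; all numbers are assumed).
# Barlow's thin-wall formula stands in for the elastic failure criterion;
# Qw is an assumed worst-case internal pressure load.

def barlow_burst_rating(yield_stress_psi, wall_in, od_in):
    """Minimum burst resistance, Rmin, from Barlow's formula (psi)."""
    return 2.0 * yield_stress_psi * wall_in / od_in

def wsd_check(q_worst_psi, r_min_psi, safety_factor):
    """Eq. 1: the design passes if Qw <= Rmin / SF."""
    return q_worst_psi <= r_min_psi / safety_factor

if __name__ == "__main__":
    r_min = barlow_burst_rating(yield_stress_psi=80_000, wall_in=0.5, od_in=9.625)
    q_w = 6_000.0   # assumed worst-case burst load, psi
    sf = 1.1        # assumed required safety factor
    print(f"Rmin = {r_min:.0f} psi, safe working stress = {r_min / sf:.0f} psi")
    print("Design passes WSD check:", wsd_check(q_w, r_min, sf))
```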
The role of the SF is to account for uncertainties in the design variables and parameters, primarily the load effect and the strength or resistance of the structure. The magnitudes of SFs are usually based on experience, though little documentation exists on their origin or impact. Different companies use different acceptable SFs for their tubular design. SFs give little indication of the probability of failure of a given structure, as they do not explicitly consider the randomness of the design variables and parameters. Some other limitations of this approach are listed in brief next.
  • WSD designs to worst-case load, with no regard to the likelihood of occurrence of the load.
  • WSD mostly uses conservative elasticity-based theories and minimum strength in design (though this is not a requirement of WSD).
  • WSD gives the engineer no insight into the degree of risk or safety (though the engineer assumes that it is acceptably low), thus making it impossible to accurately assess the risk-cost balance.
  • SFs are based on experience and not directly computed from the uncertainties inherent in the load estimate (though these uncertainties are implicit in the experience).
  • WSD sometimes makes the design engineer change loading or accept smaller SFs to fit an acceptable WSD, without giving him the means to evaluate the increased risk.

RELIABILITY-BASED DESIGN APPROACHES

Both QRA and LRFD are reliability-based approaches. The general principles of reliability-based design are given in ISO 2394, International Standard for General Principles on Reliability of Structures,[1] and a detailed discussion of the underlying theory is given by Kapur and Lamberson.[2] In reliability-based approaches, the uncertainty and variability in each of the design variables and parameters is explicitly considered. In addition, a limit-states approach is used rather than elasticity-based criteria. Thus, the “failure criterion” of WSD is replaced by a limit state that represents the true limit of the tubular for a given load effect. Such probabilistic design approaches allow the estimation of a probability of failure of the structure, thus giving better risk-consistent designs.

QUANTITATIVE RISK ASSESSMENT (QRA)

In quantitative risk assessment, the limit state is considered directly. The limit state is the relationship between the load effect and resistance that represents the true limit of the tubular. Conceptually, the limit state G(Zi) is written as
G(Zi) = R(Zi) − Q(Zi) ………………..(2)
where Zi are the random variables and parameters that determine the load effect and resistance for the given limit state. G(Zi) is known as the limit-state function (LSF). In Eq. 2, the upper case Z is used to represent the parameters to remind us that the parameters are treated as random variables in QRA. The LSF usually represents the ultimate limit of load-bearing capacity or serviceability of the structure, and the functional relationship depends upon the failure mode being considered. G(Zi) < 0 implies that the limit state has been exceeded (i.e., failure). The probability of failure can be estimated if the magnitude and uncertainty of each of the basic variables, Zi, is known and the mechanical models defining G(Zi) are known through the use of an appropriate theory. The uncertainty in Q(Zi) and R(Zi) is calculated from the uncertainty in each of the basic variables and parameters, Zi, through an appropriate uncertainty propagation model, such as Monte Carlo simulation. Fig. 1 illustrates the concept, with the load effect and resistance being shown as random variables. The shaded region shows the interference area, which is indicative of Pf, the probability of failure. It is the area where the loads exceed the strength; hence, this is the area of failure. The interference area can be estimated using reliability theory.
The probability that any given design may fail can be estimated, given an appropriate limit state and estimated magnitude and uncertainty of each of the basic variables and a reliability analysis tool. The approach previously mentioned, although simple in concept, is usually difficult to implement in practice. First, the LSF is not always a manageable function and is often cumbersome to use. Second, the uncertainty in the load and resistance parameters must be estimated each time a design is attempted. Third, the probability of failure must be estimated with an appropriate reliability analysis tool. It is tempting to treat each of the parameters, Zi, as normal variates and use a first-order approach to the propagation of uncertainty. However, such an analysis would be in error because the variables are usually not normal, and first-order propagation gives reliable information only on the central tendencies of the resultant distributions and is erroneous in estimating the tail probabilities.
From Fig. 1, it is clear that it is the tail probabilities that are of interest in our work. Therefore, it is important to do a full Monte Carlo simulation to estimate the probability of failure of any real design with real variables. However, this, too, is not easy, because to obtain probability-of-failure information on the order of 10^−n, the simulation has to go through 10^(n+2) iterations. Clearly, this is a computer-intensive effort.
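The brute-force Monte Carlo estimate of the interference area can be sketched in a few lines of Python. The lognormal load and normal resistance distributions, their parameters, and the sample sizes below are assumptions chosen only to illustrate the idea, and to show why resolving small tail probabilities demands so many iterations.

```python
# Monte Carlo estimate of Pf = P[G(Z) < 0] with G = R - Q (illustrative sketch).
# The distributions and their parameters are assumptions, not field data.
import numpy as np

rng = np.random.default_rng(seed=0)

def estimate_pf(n_samples):
    # Load effect Q: lognormal (assumed); resistance R: normal (assumed).
    q = rng.lognormal(mean=np.log(5_000.0), sigma=0.15, size=n_samples)
    r = rng.normal(loc=8_000.0, scale=400.0, size=n_samples)
    g = r - q                  # limit-state function, Eq. 2
    return np.mean(g < 0.0)    # fraction of samples that fail

# To resolve Pf on the order of 10^-n, roughly 10^(n+2) samples are needed.
for n in (10_000, 1_000_000):
    print(f"n = {n:>9,d}  ->  Pf ~ {estimate_pf(n):.2e}")
```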

LOAD AND RESISTANCE FACTOR DESIGN (LRFD)

The load and resistance factor design approach is a reliability-based approach that captures the reliability information characteristic of quantitative risk assessment and presents it in a design format far more amenable to routine use, just like WSD. The limit state is the same as the one considered by QRA. However, the design approach is simplified by the use of a design check equation (DCE).
LRFD allows the designer to check a design using a simplified DCE. The DCE is usually chosen to be a simple and familiar equation (for instance, the von Mises criterion in tubular design). Appropriate characteristic values of the design parameters are used in the DCE, along with partial factors that account for the uncertainties in the load and resistance and the difference between the DCE and the actual limit state. Thus, if Qchar(zi) and Rchar(zi), respectively, represent the characteristic value of the load effect and of resistance, with zi being the characteristic values of each of the parameters and variables, the DCE can be represented by the inequality
LF × Qchar(zi) ≤ RF × Rchar(zi) ………………..(3)
where load factor (LF) and resistance factor (RF) are the partial factors required. In the literature, LF and RF are usually referred to as the load factor and resistance factor, respectively. The LF takes into account the uncertainty and variability in load effect estimation, while the RF takes into account the uncertainty and variability in the determination of resistance, as well as any difference between the LSF and DCE. Any design that satisfies Eq. 3 is a valid design. The design check equation can be functionally identical to the LSF, or the functional relationship can be a simple formula specified by the design code or familiar WSD formulas. Note that Eq. 3 is merely a conceptual representation. In practice, it might not be possible to separate the load effects and resistance in the way suggested by Eq. 3. Moreover, several load effects and resistance terms may be present in the DCE, with varying uncertainties, requiring the use of several partial factors.
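Once characteristic values and partial factors are chosen, Eq. 3 reduces to a one-line check, as in the sketch below. The characteristic values and partial factors shown are placeholders, not calibrated factors from any code or standard.

```python
# LRFD design-check equation, Eq. 3 (illustrative sketch; values are assumed).

def lrfd_check(q_char, r_char, load_factor, resistance_factor):
    """Design passes if LF * Qchar <= RF * Rchar."""
    return load_factor * q_char <= resistance_factor * r_char

if __name__ == "__main__":
    q_char = 5_500.0     # assumed characteristic load effect, psi
    r_char = 8_300.0     # assumed characteristic resistance, psi
    lf, rf = 1.25, 0.90  # placeholder partial factors, not calibrated values
    print("Design passes LRFD check:", lrfd_check(q_char, r_char, lf, rf))
    print("Equivalent WSD-style factor LF/RF =", round(lf / rf, 3))
```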

SIMILARITY TO WSD

We observe, from Eq. 3, that the partial factors are, in a sense, similar to the SF used in WSD. Comparing Eq. 3 to Eq. 1, we notice that both equations are based on deterministic values, and the SF in Eq. 1 is replaced by two partial factors. Indeed, the ratio LF/RF is analogous to the SF used in WSD, if the DCE happens to be identical to the WSD failure criterion. Thus, in concept, it may be said that
SF = LF / RF ………………..(4)
Despite these similarities, however, there are three crucial differences. First, the loads and resistances are estimated using a set methodology. Second, the load effect and the resistance are treated separately, thus allowing the partial factors to separately account for the uncertainties in each. And third, the magnitude of loads and resistances is based on reliability, rather than being arbitrarily set.
Partial factors are chosen through a process of calibration, where the deterministic DCE with partial factors is calibrated against the probabilistic LSF. Partial-factor values are chosen such that their use in the DCE results in a design that has a preselected target reliability or target probability of failure, as determined from the LSF using reliability analysis. For the partial factors to do so, the calibration process should prescribe a scope of the application of LRFD, and the values of the partial factors should be optimized to ensure a uniform reliability across the scope. The objective is to obtain a set of factors that results in a design within this target probability. In brief, the procedure may be summarized as follows:
  • Choose a desired target probability of failure.
  • Identify the characteristic values of each of the parameters, and the uncertainty and variability about these values.
  • For an assumed set of load and resistance factors, generate a set of “passed” designs from the DCE, across the scope of the structure, for all possible load magnitudes. In other words, all designs that pass the DCE are valid designs. The passing of a design is, of course, controlled by the assumed value of the load and resistance factors.
  • For each of the passed designs, estimate the probability of failure from the LSF, taking into account the uncertainty in each of the variables.
  • Determine the statistical minimum reliability assured by the assumed set of load and resistance factors. This is the reliability (or, equivalently, probability of failure) that results from the use of these partial factors. In other words, the probability of failure of any design that results from the use of these partial factors in the DCE will, statistically, be less than or equal to this value.
  • Repeat until the set of partial factors results in the desired target probability of failure.
At the end of the process, we have a set of partial factors and their corresponding design reliability. If several target reliabilities are to be aimed for, the procedure is repeated, until a new set of partial factors is obtained.
It must be noted that this is a very brief summary of the approach. Calibration is usually the most time-consuming and rigorous step in devising an LRFD procedure. Several reliability-theory and statistical details such as uncertainty estimation, preprocessing of high-reliability designs, zonation, uniformity of reliability, multiple partial factor calibration, etc. have been omitted for brevity.
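A toy version of the calibration loop described above is sketched below: for each candidate pair of partial factors it sizes marginally passing designs across an assumed scope of load levels, estimates their failure probability from the limit state G = R − Q by Monte Carlo, and flags the factor sets that meet the target. Every distribution, grid value, and target in the sketch is an assumption chosen only to show the shape of the procedure, not a calibration of any real code.

```python
# Toy LRFD calibration loop (illustrative sketch; all inputs are assumed).
# For each candidate (LF, RF), size designs that just pass the DCE, estimate
# their failure probability from G = R - Q by Monte Carlo, and keep factor
# sets whose worst-case Pf meets the target.
import numpy as np

rng = np.random.default_rng(seed=1)
TARGET_PF = 1.0e-3       # assumed target probability of failure
N_MC = 200_000           # Monte Carlo samples per design

def pf_of_design(q_char, r_char):
    """Pf for one design, with assumed scatter about the characteristic values."""
    q = rng.lognormal(mean=np.log(q_char), sigma=0.12, size=N_MC)        # load scatter (assumed)
    r = rng.normal(loc=1.10 * r_char, scale=0.05 * r_char, size=N_MC)    # strength scatter (assumed)
    return np.mean(r - q < 0.0)

def worst_pf(load_factor, resistance_factor, q_chars):
    """Across a scope of load levels, size each design to just pass the DCE
    (LF * Qchar = RF * Rchar) and return the largest resulting Pf."""
    pfs = []
    for q_char in q_chars:
        r_char = load_factor * q_char / resistance_factor   # marginally passing design
        pfs.append(pf_of_design(q_char, r_char))
    return max(pfs)

scope = np.linspace(3_000.0, 9_000.0, 4)   # assumed range of load levels
for lf in (1.1, 1.2, 1.3):
    for rf in (0.85, 0.90):
        pf = worst_pf(lf, rf, scope)
        flag = "meets target" if pf <= TARGET_PF else "too risky"
        print(f"LF={lf:.2f} RF={rf:.2f}  worst Pf ~ {pf:.1e}  ({flag})")
```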

CRITIQUE OF RISK-BASED DESIGN

WSD has been used successfully for many years to design casing. It is a simple system, understood by the average drilling engineer, of comparing a calculated worst-case load against the rating of the casing. The safety factors used may neither be based on strict logic nor be the same across industry, but the concept is simple and the numbers are similar. Generally, the system has served the industry well. Risk-based design advocates criticize WSD because the failure models do not always use the ultimate load limit as the failure criterion, but this is not inherent to WSD. In an ideal world, where casing is always within specification, using average safety factors and worst-case estimates of loads, the casing should always be overdesigned.
However, WSD makes no allowance for casing manufactured below minimum specification. The SF used may or may not compensate for the fact that a below-strength joint is in a critical location. The risks cannot be quantified, so there is no way of comparing the relative risks of different designs. It can also lead to a situation in which it is impossible to produce a practical design under extreme downhole conditions. There would be a temptation in this case either to try to justify a reduction in the SF, perhaps by relying on improved procedures, or to re-estimate the loads downward. Also, the system does not usually consider low levels of H2S, causing brittle failure in burst. Improvements such as better quality control, more accurate failure equations, and considering brittle burst could be utilized within a WSD system.
It is reasonable for the nonstatistician to accept that the strengths of joints of casing of the same weight and grade from the same mill will vary symmetrically around a mean value. The product is manufactured from nominally the same materials and by the same process, with the aim of producing identical properties. The predictability of the “resistance” side of the equation has been confirmed by large-scale examination and testing of the finished product.
The “load” side of the equation, such as formation pressures and kick volumes, may not be so predictable. There is also a much smaller data bank available for estimating probabilities. Further, human factors may influence the size of a kick by such things as speed of reaction in closing the well in and choosing the correct choke pressures when killing a kick.
The designer using risk-based casing design has the same problem that the WSD user has—which loads to consider in the design. The risk-based designer has an additional task, the assignment of probabilities to these loads. One could argue that these loads should be weighted according to the severity of the resulting failure.
If risk-based designs are used to justify thinner or lower-grade casing, with pipe manufactured to the same quality standards as used with WSD, the wells will not be safer. If risk-based design systems are used by people who do not understand the system, or who use only the partial factors rather than the full system, wells will not be safer. If the load data have been underestimated, the wells will not be safer, especially in high-temperature/high-pressure wells.
To produce wells that are as safe as those designed using WSD, a risk-based design system needs to include:
  • More accurate failure equations
  • Account taken of brittle fracture in low levels of H2S
  • Improved quality control of tubulars and connections
  • Accurate load data
  • Engineers who understand the system and the well
  • A full training and competence assurance program

NOMENCLATURE

G(Zi) = the limit-state function
Qchar = characteristic value of the load effect, lbf
Zi = the random variables and parameters that determine the load effect and resistance for the given limit state

REFERENCES

  1.  ISO 2394, International Standard for General Principles on Reliability of Structures, second edition. 1986. Geneva: International Organization for Standardization.
  2.  Kapur, K.C. and Lamberson, L.R. 1977. Reliability in Engineering Design. New York: John Wiley & Sons.

PIPELINE INSPECTION

In the United States, there are millions of miles of pipeline carrying everything from water to crude oil. The pipe is vulnerable to attack by internal and external corrosion, cracking, third-party damage, and manufacturing flaws. If a pipeline carrying water springs a leak or bursts, it can be a problem, but it usually doesn't harm the environment. However, if a petroleum or chemical pipeline leaks, it can be an environmental disaster. More information on recent US pipeline accidents can be found at the National Transportation Safety Board's Internet site. In an attempt to keep pipelines operating safely, periodic inspections are performed to find flaws and damage before they become cause for concern.
When a pipeline is built, inspection personnel may use visual, X-ray, magnetic particle, ultrasonic, and other inspection methods to evaluate the welds and ensure that they are of high quality. The image to the left shows two NDT technicians setting up equipment to perform an X-ray inspection of a pipe weld. These inspections are performed as the pipeline is being constructed, so gaining access to the inspection area is not a problem. In some areas, like Alaska, sections of pipeline are left above ground, as shown above, but in most areas they are buried. Once the pipe is buried, it is undesirable to dig it up for any reason.

So, how do you inspect a buried pipeline?

Have you ever felt the ground move under your feet? If you're standing in New York City, it may be the subway train passing by. However, if you're standing in the middle of a field in Kansas, it may be a pig passing under your feet. Huh??? Engineers have developed devices, called pigs, that are sent through the buried pipe to perform inspections and clean the pipe. If you're standing near a pipeline, vibrations can be felt as these pigs move through the pipeline. The pigs are about the same diameter as the pipe, so they range in size from small to huge. The pigs are carried through the pipe by the flow of the liquid or gas and can travel and perform inspections over very large distances. They may be put into the pipeline at one end and taken out at the other. The pigs carry a small computer to collect, store, and transmit the data for analysis. In 1997, a pig set a world record when it completed a continuous inspection of the Trans Alaska crude oil pipeline, covering a distance of 1,055 km in one run.
Pigs use several nondestructive testing methods to perform the inspections. Most pigs use a magnetic flux leakage method, but some also use ultrasound to perform the inspections. The pig shown to the left and below uses magnetic flux leakage. A strong magnetic field is established in the pipe wall using either magnets or by injecting electrical current into the steel. Damaged areas of the pipe cannot support as much magnetic flux as undamaged areas, so magnetic flux leaks out of the pipe wall at the damaged areas. An array of sensors around the circumference of the pig detects the magnetic flux leakage and notes the area of damage. Pigs that use ultrasound have an array of transducers that emit a high-frequency sound pulse perpendicular to the pipe wall and receive echo signals from the inner and outer surfaces of the pipe. The tool measures the time interval between the arrival of the reflected echoes from the inner and outer surfaces to calculate the wall thickness.
Figure 1. Pig and the diagram.
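The ultrasonic wall-thickness measurement described above is a simple time-of-flight conversion. The sketch below uses an assumed echo interval and a typical longitudinal sound velocity in steel (about 5,900 m/s) purely as an illustration.

```python
# Wall thickness from the interval between inner- and outer-surface echoes
# (illustrative sketch; echo time and velocity are assumed values).

SOUND_VELOCITY_STEEL = 5_900.0   # m/s, typical longitudinal velocity in steel

def wall_thickness_mm(echo_interval_us, velocity_m_per_s=SOUND_VELOCITY_STEEL):
    """Thickness = (velocity * time interval) / 2, since the pulse crosses the wall twice."""
    interval_s = echo_interval_us * 1.0e-6
    return (velocity_m_per_s * interval_s / 2.0) * 1_000.0   # metres -> millimetres

# Example: a 4.3 microsecond interval corresponds to about 12.7 mm of wall.
print(f"{wall_thickness_mm(4.3):.1f} mm")
```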

On some pipelines it is easier to use remote visual inspection equipment to assess the condition of the pipe. Robotic crawlers of all shapes and sizes have been developed to navigate the pipe. The video signal is typically fed to a truck where an operator reviews the images and controls the robot.
Figure 2. Pipe crawler.

Pipeline Material Selection


Originally written by Krupavaram Nalli, Tebodin & Partners LLC, Sultanate of Oman
With the recent spate of material failures in the oil and gas industry around the world, the role of the materials and corrosion engineer in selecting suitable materials has become more complex, controversial and difficult. Further, the task has become more diverse, since modern engineering materials now offer a wide spectrum of attractive properties and viable benefits.
From the earlier years through the late '70s, the process of materials selection was confined exclusively to a materials engineer, a metallurgist or a corrosion specialist; today it has widened to encompass other disciplines like process, operations, integrity, etc. Material selection is no longer under a single umbrella but has become an integrated team effort and a multidisciplinary approach. The materials or corrosion specialist in today's environment has to play the role of negotiator or mediator between the conflicting interests of other peer disciplines like process, operations, concept, finance, budgeting, etc.
With this as backdrop, this article presents various stages in the material selection process and offers a rational path for the selection process toward a distinctive, focused and structured holistic approach.
What is material selection in the oil and gas industry? Material selection in the oil and gas industry, by and large, is the process of shortlisting technically suitable material options for an intended application. Beyond that, it is the process of selecting the most cost-effective material option for the specified operating life of the asset, bearing in mind health, safety and environmental aspects, sustainable development of the asset, technical integrity and any operational constraints envisaged during the operating life of the asset.
What stages are involved? The stages involved in the material selection process can be outlined as material selection 1) during the concept or basic engineering stage, 2) during the detailed engineering stage, and 3) for failure prevention (lessons learned).

Concept Stage

Material selection during the concept stage basically means investigating the various available material options for the intended function and application. At this stage, material selection is an up-front activity that takes into consideration operational flexibility, cost, availability or sourcing and, finally, the performance of the material for the intended service and application.
The material and corrosion engineer’s specialized expertise or skills become more important as the application becomes critical, such as highly sour conditions, highly corrosive and aggressive fluids, high temperatures and highly stressed environments, etc.
It is imperative at this concept stage that the material selection process be an interdisciplinary team approach rather than an individual material and corrosion engineer's choice. However, some level of material selection must be made in order to proceed with the detailed design activities or engineering phase.
The number and availability of material options in today's industry have grown tremendously and have made the selection process more intricate than it was a few decades ago. The trend of research and development in the materials sciences will continue and may make the selection even more complex and intriguing.
It should be understood that, at the concept design stage, the selection is broad and wide. This stage defines the options available for a specific application from the available families of materials, like metals, nonmetals, composites, plastics, etc. If an innovative and cost-effective material choice is to be made from the available family of options, it is normally done at this stage.
At times, material constraints from the client, operating company or end user may dictate the material selection as part of a contractual obligation. Sourcing, financial and cost constraints may also limit and obstruct the material selection, except for very critical applications where the properties and technical acceptability of the material are decisive and outweigh its cost.
Material availability is another important criterion in material selection, since it impacts demanding project schedules for the technically suitable material options. Also, different engineering disciplines may have different and specific requirements, like constructability, maintainability, etc. However, a compromise shall be reached at this stage among all the disciplines concerned to arrive at a viable, economic choice of candidate material.

 

Detailed Engineering Stage

Material selection during the detailed design stage becomes more focused and specific. The material selection process narrows down to a small group or family of materials, say carbon steels, stainless steels, duplex stainless steels, Inconels or Incoloys, etc. In the detailed design stage, it narrows down to a single material and other conditions of supply, like austenitic stainless steels, martensitic stainless steels, cast materials, forged materials, etc.
Depending on the criticality of the application, at this stage the material properties, manufacturing processes and quality requirements will be addressed at more precise levels of detail. This may sometimes involve extensive material-testing programs for corrosion, high temperature, and simulated heat treatment, as well as proof testing.
Moving from the concept stage to the detailing stage is a progressive process, from broad possibilities, through screening, to a specific material and supply condition.
At times, the selection activity may involve a totally new project (greenfield) or an extension of an existing project (brownfield). In the case of an existing project, it could be necessary to check and evaluate the adequacy of the current materials; it may at times be necessary to select a material with enhanced properties. The candidate material shall normally be investigated in more detail in terms of cost, performance, fabricability, availability and any requirements for additional testing in the detailed engineering stage.

Failure Prevention (Lessons Learned)

Material selection and the ability of the material to sustain service without failure during the life of the component is the final selection criterion in the process.
Failure is defined as an event in which the material or the component did not accomplish the intended function or application. In most cases, material failure is attributed to the selection of the wrong material for the particular application. Hence, the review and analysis of failures is a very important aspect of the material selection process, to avert any similar failures of the material in the future.
The failure analysis - or the lessons learned - may not always result in a better material. The analysis may, at times, study and consider steps to reduce the impact of the factors that caused the failure. A typical example would be to introduce a chemical inhibition system into the process to mitigate corrosion of the material, or to carry out a post-weld heat treatment to minimize the residual stresses in the material that led to a stress corrosion cracking failure.
An exhaustive review and study of the existing material that failed, including inadequacy checks and a review of quality levels imposed on the failed materials, is required before an alternate and different material is selected for the application.
The importance of the failure analysis cannot be overstressed in view of the spate of failures in recent times in the oil and gas industry. The results of failure analysis and study will provide valuable information to guide the material selection process and can serve as input for the recommendation in the concept and design stages of the project. It strengthens and reinforces the material selection process with sound back-up information.
Let us take a general view of material recommendations for pipelines. Some of the materials most relevant for use in pipelines in the Middle East are indicated for information and guidance in Table 1. The recommendations are general in nature, and each pipeline is to be studied in detail, case by case, as regards operating conditions, fluid compositions, etc., before any final selection is made.
Also, other considerations, like the total length of the pipeline, above- or below-ground installation, and the nature of the pipeline (export line, processing line, etc.), are to be taken into account during the detailed engineering phase.
Table 1: General Material Selection for Pipelines in Oil and Gas Industry.
Notes: CA: Corrosion Allowance, CS: Carbon Steel, CRA: Corrosion Resistant Alloy and GRP: Glass Reinforced Plastics. The recommendations in Table 1 are for guidance only. Each pipeline is to be analyzed on a case-by-case basis based on operating conditions and fluid compositions.

 

Conclusion

To maintain the integrity of the asset and provide a safe, healthful working environment, it is always welcome to have the material selection process executed as a holistic team approach rather than an individual metallurgist's or corrosion specialist's choice.

Reference: “A Rational Approach to Pipeline Material Selection.” http://www.pipelineandgasjournal.com/rational-approach-pipeline-material-selection. January 2014.

Pipeline Installation in Deep Water

A pipeline transports goods through a pipe; for underwater pipelines, the most common goods are oil and gas. Pipeline girth welds are generally made in the 5G position using two kinds of welding technique. The first is downhill, the most common for pipelines, in which welding starts at the top of the pipe and progresses toward the bottom. The second is uphill, the opposite of downhill. Pipelines are usually welded with the downhill technique because it is faster than uphill, although uphill still gives better toughness. Cellulosic electrodes are the most common welding consumable used for pipeline welding. The welding process depends on client requirements, but nowadays pipeline welding typically uses MIG, or GTAW for manual welding.
A pipeline has several parts:
  • Mainline. The mainline is the primary line that transports goods such as oil or gas from the manifold to a ship or rig.
  • Tie-in. A tie-in consists of a flange, pipe, and pipe bend. Its function is to connect the riser to the mainline.
    • Flange: a joint that can be freely opened or closed. Flanges are bolted, not welded, to make them easier to maintain.
    • Pipe bend: a pipe that is bent to carry the goods from the mainline to the riser.
  • Riser: the part of the pipeline that carries the goods from the mainline to the sea surface.
Generally, there are two kinds of pipeline design: strain-based and stress-based. Stress-based design is the conventional approach, in which the pipeline is designed so that stresses remain within allowable limits. It is safe but expensive, because it uses more material. Nowadays, strain-based design is increasingly used; it is cheaper and more efficient, but it requires careful design calculations.

 

Pipe Laying

Installing a pipeline underwater is called pipe laying. There are three types of laying: S-lay, J-lay, and, more recently, reel-lay installation. The difference among the three is the shape the pipe takes as it travels from the barge to the seabed. S-lay is commonly used in shallow to moderate water depths, J-lay in deep water, and reel-lay for smaller-diameter pipe that can be spooled onto a reel.

Pipeline Construction Line

There are two types of pipeline construction: conventional construction and reeling. Nowadays, contractors are starting to change their pipeline construction to the reeling method. Conventional construction assembles the pipeline on the ship or barge. Reeling construction, on the other hand, fabricates the pipe on land, reels it, and takes it to sea.
An example of the conventional pipeline method is the work done on a US EPCI company's barge. Figure 1 is a flowchart of the conventional pipeline method.
Figure 1. Conventional pipeline installation flowchart.
A barge brings the pipe to the construction site, and another barge also brings pipe into the site. At the site, the barge puts down its anchors. With help from a tugboat, the barge needs to position the anchors well so they can hold the barge against the pull of the pipeline once the pipeline starts to be released from the barge. When the barge is settled, it starts its construction line.
First, the pipe from the deck or barge goes into the construction line. The pipe passes through the bead stall, where its code is checked, the pipe is cleaned, and some preheating is applied. If there is something wrong with the pipe code, or the pipe is rejected, it goes to a quarantine area. After preheating, the pipe moves to the stations. A station is a place where the pipe is welded, NDT tested, coated, and finally released to the seabed. The number of stations on the ship depends on client demand, but usually there are 9 stations. The first station is the root welding station; the welding technique used in a project depends on client demand. When the root pass is finished, the pipe moves to another station for the next welding layer, and so on at the other stations until the capping weld is finished.

After welding is finished, NDT testing is done to check the welding result. The most common NDT test is UT, or nowadays AUT, but sometimes radiography is also used to inspect the welds. If there is a problem, the construction line stops and a welder is called to repair the weld. If it can't be repaired, the joint is cut out and replaced with a new pipe. After the NDT station, the pipeline goes to the next station, the coating station. Here a coating is applied to the pipeline field joint; the main pipeline coating is already applied on land. The coating used for the joint is a heat shrink sleeve (HSS) coating. After the coating is finished, anode installation is done, depending on client demand. Later the pipe is released into the sea: it goes from the barge over a stinger and then down to the seabed. As the pipeline leaves the barge and descends to the seabed it forms an "S" shape, which is why this method of pipe laying is called S-lay.

Reference: Hutagalung, Andi A., Albert Hutama. 2013. “Phase III Preparation CAPEX NO102 and QC Department Batam Marine Base”. Bandung: ITB.

Vortex Induced Vibration of Offshore Pipeline

In fluid dynamics, vortex-induced vibrations (VIV) are motions induced on bodies interacting with an external fluid flow, produced by – or the motion producing – periodical irregularities on this flow.
A classical example is the VIV of an underwater cylinder. You can see how this happens by putting a cylinder into the water (a swimming pool or even a bucket) and moving it through the water in the direction perpendicular to its axis. Since real fluids always present some viscosity, the flow around the cylinder will be slowed down while in contact with its surface, forming the so-called boundary layer. At some point, however, this boundary layer can separate from the body because of its excessive curvature. Vortices are then formed, changing the pressure distribution along the surface. When the vortices are not formed symmetrically around the body (with respect to its midplane), different lift forces develop on each side of the body, thus leading to motion transverse to the flow. This motion changes the nature of the vortex formation in such a way as to lead to a limited motion amplitude (differently, then, from what would be expected in a typical case of resonance).
VIV manifests itself in many different branches of engineering, from cables to heat-exchanger tube arrays. It is also a major consideration in the design of ocean structures. Thus the study of VIV spans a number of disciplines, incorporating fluid mechanics, structural mechanics, vibrations, computational fluid dynamics (CFD), acoustics, statistics, and smart materials.
Pipelines at the bottom of the sea are susceptible to ocean currents. Even relatively calm currents can induce turbulence in the wake of the pipeline, which causes the pipeline to start 'dancing'. Pipe vibrations can trigger fatigue, with catastrophic fracture as a result. Consequently, when designing submarine pipelines, care is taken to avoid such vibrations. Research engineers use powerful software to predict submarine pipeline stability.

“Dancing at Great Depth”

Even relatively calm currents can induce turbulence in the wake of the pipeline, resulting in pipeline oscillations. The pipeline vibrations can trigger fatigue, causing accelerated damage. Since fatigue damage can give rise to complete fracture with catastrophic consequences, extreme care is taken to avoid such vibrations when designing submarine pipelines. Flow patterns around submarine pipelines depend greatly on the velocity of the sea currents and on the tube diameter. When the current becomes too strong, turbulence shows up in the wake of the pipeline. This vortex shedding exerts an alternating force on the pipeline, so the pipeline is subjected to cyclic loading. The pipeline starts to dance, following a characteristic figure-eight path. Under cyclic loading, the pipe is exposed to fatigue, which could cause it to fail under surprisingly modest stresses.
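The dependence of vortex shedding on current velocity and pipe diameter is commonly expressed through the Strouhal relation, fs = St·U/D, with St ≈ 0.2 for a circular cylinder in subcritical flow. The sketch below compares the shedding frequency with an assumed free-span natural frequency to flag possible lock-in; the current speed, diameter, natural frequency, and ±30% lock-in band are illustrative assumptions, not design values.

```python
# Vortex-shedding frequency from the Strouhal relation and a simple lock-in flag
# (illustrative sketch; all numerical inputs are assumed).

STROUHAL = 0.2   # typical value for a circular cylinder in subcritical flow

def shedding_frequency_hz(current_m_per_s, diameter_m, strouhal=STROUHAL):
    """f_s = St * U / D."""
    return strouhal * current_m_per_s / diameter_m

def lock_in_risk(f_shedding_hz, f_natural_hz, band=0.3):
    """Flag lock-in when the shedding frequency falls within an assumed
    +/-30% band around the span's natural frequency."""
    return abs(f_shedding_hz - f_natural_hz) <= band * f_natural_hz

if __name__ == "__main__":
    f_s = shedding_frequency_hz(current_m_per_s=0.8, diameter_m=0.4)  # assumed current and OD
    f_n = 0.5                                                         # assumed span natural frequency, Hz
    print(f"shedding frequency ~ {f_s:.2f} Hz, natural frequency = {f_n:.2f} Hz")
    print("possible lock-in:", lock_in_risk(f_s, f_n))
```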

Current State of the Art

Much progress has been made during the past decade, both numerically and experimentally, toward the understanding of the kinematics (dynamics) of VIV, albeit in the low-Reynolds number regime. The fundamental reason for this is that VIV is not a small perturbation superimposed on a mean steady motion. It is an inherently nonlinear, self-governed or self-regulated, multi-degree-of-freedom phenomenon. It presents unsteady flow characteristics manifested by the existence of two unsteady shear layers and large-scale structures.
There is much that is known and understood and much that remains in the empirical/descriptive realm of knowledge: what is the dominant response frequency, the range of normalized velocity, the variation of the phase angle (by which the force leads the displacement), and the response amplitude in the synchronization range as a function of the controlling and influencing parameters? Industrial applications highlight our inability to predict the dynamic response of fluid–structure interactions. They continue to require the input of the in-phase and out-of-phase components of the lift coefficients (or the transverse force), in-line drag coefficients, correlation lengths, damping coefficients, relative roughness, shear, waves, and currents, among other governing and influencing parameters, and thus also require the input of relatively large safety factors. Fundamental studies as well as large-scale experiments (when these results are disseminated in the open literature) will provide the necessary understanding for the quantification of the relationships between the response of a structure and the governing and influencing parameters.
It cannot be emphasized strongly enough that the current state of the laboratory art concerns the interaction of a rigid body (mostly and most importantly for a circular cylinder) whose degrees of freedom have been reduced from six to often one (i.e., transverse motion) with a three-dimensional separated flow, dominated by large-scale vortical structures.


References:
”Vortex-induced Vibration”. http://en.wikipedia.org/wiki/Vortex-induced_vibration. January 2014.
“Vortex Induced Vibrations, A Swinging Problem”. http://www.ocas.be/Vortex-Induced-Vibrations. January 2014.

