Thursday, October 31, 2019

Porter's Five Forces Strategy Analysis as it applies to the Auto Industry Essay

Porter's Five Forces Strategy Analysis as it applies to the Auto Industry - Essay Example For international organizations, decisions must be made on whether strategies should be the same for every country in which they compete, or whether managers should be given the mandate to choose their own strategies. Functional strategies for particular operations, derived from business-level strategies, include marketing, accounting and finance. The automotive industry designs, develops, manufactures, markets and sells motor vehicles, and is considered the world's most significant economic sector in terms of revenue generation. The American automobile industry has remained largely unchanged for years since its inception. Businesses begin, grow, develop, and end, just like human beings. Some do not complete their life cycle because of interruptions; they undergo a myriad of challenges that eventually kill them. Unlike a human being, however, a business can change its methods of operation to more efficient mechanisms for improvement. From this view, the American automobile industry raises the question of whether it will be able to adapt or whether it will end in its stagnating condition. Before establishing an organization's business-level strategy, one must discern the factors that determine profit maximization in an industry. The tool for analyzing these factors is known as Porter's Five Forces Model. ... Introduction to the Auto Industry As defined earlier, the automotive industry designs, develops, manufactures, markets and sells motor vehicles. It does not include industries attached to automobiles after delivery to the client, such as fuel stations, electronics and repair shops. The automobile industry involves producing and selling individually powered vehicles such as trucks, passenger cars, farm equipment and other commercial vehicles.
The auto industry has facilitated the growth of infrastructure for long-distance commuting, entertainment and shopping, the growth of market centers, and increased urbanization and industrialization (Burgess, 1980). The industry is also a key employer, thus contributing to economic growth. Until 2005, the US dominated the world in automobile production. The majority of early auto dealers in the US were blacksmith and carriage shops. Progress soon came when the car replaced the horse and buggy. Blacksmith shops were everywhere in the market centers and served customers a great deal. The pioneers of the automobile industry were engineers like Henry Leland and Henry Ford. Blacksmith shops were service oriented, whereas carriage shops required constant management, together with the horses that drew them. Since their goal was to provide exceptional satisfaction of their customers' needs, these shops slowly became auto dealers servicing their customers' vehicles. They were able to compete with service stations such as Jiffy Lube, Midas, and Meineke, among others. From that time the number of dealers began to increase, giving rise to many franchised automobile dealers. This trend declined from 1950 until 2007 (Tuman 19). 3.1 Industry definition The first fifty years saw the industry

Monday, October 28, 2019

Homosocial Communication Practices Essay Example for Free

Homosocial Communication Practices Essay The issue of mixed versus single-gender schools has raised very many arguments; interestingly, there is no profound evidence that single-sex schools provide better education than mixed schools. Mainly, the choice of school depends on where the parents feel their child will get a good education. The school choice can also be determined by the individual child's abilities and weaknesses. As a parent with a school-going girl child, and with much interest in sending her to a public school, my choice would be a K-12 single-sex school for girls. The reason behind my decision is that a single-sex educational setting often improves students' academic ability. Girls and boys do better in single-sex schools than in mixed-sex schools (Becker, 2001). Single-sex girls' schools provide them with confidence and achievement; this is evident since they can take non-traditional courses considered to be for boys, especially advanced mathematics and physics. The girls have freedom of expression in the absence of boys, who would make jokes about what the girls say if they were in a mixed class; thus learning becomes more comfortable (Forgasz & Leder, 1995). Better expression gives the girls the much-desired insight to conceptualize scientific concepts. There is also a better teacher-student relationship, because teachers do not compare between different sexes. The single-sex class setting provides and creates very many opportunities that cannot exist in mixed classes, and these opportunities result in a better understanding of life concepts. Teachers in single-sex schools undergo specialized training on how to interact with the students, allowing one-to-one specialized handling of issues which would otherwise not be solved in a mixed-gender school; test scores and grades improve significantly (Forgasz & Leder, 1995).
The major disadvantage of a same-sex school for my child is that the students lack enough exposure to interacting with the opposite sex; this reduces their level of maturity and even self-discipline. This can result in shy behavior traits, since they lack exposure. Later in life it becomes a big challenge interacting with men, since they lacked the exposure and do not understand men's beliefs and way of life from their early ages. Emotional development is likewise not fully established in their lives (Haag, 2000). The establishment of single-sex schools means that districts must have twice the number of schools, as opposed to having mixed-sex schools within the same district. The number of teachers employed is doubled even when classes are small, making teaching uneconomical. This would result in a nightmare of timetabling, logistical and budget challenges, which could affect the quality of education being offered within the institutions. The same-sex education skills and extra training required by the teachers handling these classes may not be provided, and the full benefits of the single-sex school may not be realized in the long run (Edison & Penelope, 1982). In conclusion, though same-sex schools offer children opportunities to effectively explore and maximize their potential in an open and friendly environment, they mainly equip the young ones with one side of what they need in life, which is academics, and offer less of the other life skills required later in life. So as parents we must consider our children's whole being without laying too much emphasis on academics while ignoring the social part of life. References Edison, T., & Penelope, T. (1982). The independent school experience: Aspects of the normative environment of single-sex schools. Journal of Educational Psychology. Becker, J. R. (2001). Single-gender schooling in the public sector in California: Promise and practice. Forgasz, H. J., & Leder, G. C. (1995). Single-sex mathematics classes: Who benefits?
Lawrence Erlbaum Associates. Haag, P. (2000). K-12 single-sex education: What does the research say? ERIC Digest.

Saturday, October 26, 2019

The Seismic Exploration Survey Information Technology Essay

The Seismic Exploration Survey Information Technology Essay Seismic surveys aim at measuring the earth's geological properties, employing various physics principles from electric, gravitational, thermal and elastic theories. The technique was first employed successfully in Texas and Mexico by a company named Seismos in 1924. Since then, many oil companies have used the services of seismology to forecast the presence of hydrocarbons. Major oil companies have actively researched seismic technology, and it has also found applications in various other research efforts by scientists around the world. Seismic exploration surveys are a method employed in exploration geophysics that uses principles of reflection seismology to estimate subsurface properties. The method requires a controlled source of energy that can generate seismic waves, and highly sensitive receivers that can sense the reflected seismic waves. The time delay between sending and receiving signals can be used to calculate the depth of the formation. Since different formation layers have different densities, they reflect seismic waves at different velocities. This can be used to estimate the depth of the target formation, usually shale or other rock formations that can form a cap rock or contain oil. Seismic surveys form part of the preliminary exploration surveys and form the basis for further study of the area under consideration. Seismic waves are a form of elastic waves. When these waves travel through a medium, each layer presents an acoustic impedance. The impedances of two layers will differ because of the density contrast, and thus at boundaries some waves are reflected while others travel on through the formation. For this reason, seismic exploration surveys require waves of sufficient energy to penetrate kilometers deep inside the earth and gather data. Hundreds of channels of data are recorded using multiple transmitters and receivers spread over thousands of meters.
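The depth estimate described above can be sketched in a few lines: at near-vertical incidence, a reflection recorded at two-way time t in a layer of velocity v lies at roughly v·t/2, since the wave travels down and back up. The velocity and time values below are illustrative, not taken from the essay.

```python
# Minimal sketch: reflector depth from two-way travel time.
# Assumes a single constant-velocity layer and vertical incidence.
def reflector_depth(two_way_time_s, velocity_m_s):
    """Depth = v * t / 2, halving because the wave travels down and back."""
    return velocity_m_s * two_way_time_s / 2.0

# A reflection arriving after 2.0 s in rock with a (made-up) velocity of 3000 m/s
depth_m = reflector_depth(2.0, 3000.0)
print(depth_m)  # 3000.0
```

Real surveys replace the single constant velocity with a layered velocity model, but the down-and-back factor of two is the same.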
Each seismic survey uses a specific type of wave and its arrival pattern in a multichannel record. Seismic waves are categorized as body waves (P-waves and S-waves) and surface waves (Rayleigh waves and Love waves). For a seismic survey, the S-wave, or shear wave, is the main concern. Seismic waves can be generated by Vibroseis, which employs heavy damping of a weight on the surface to generate seismic waves in the subsurface. Alternatively, explosives can be used, buried a few meters below the surface; the explosion generates the seismic waves. In marine acquisition, streamers are used to gather data, and coil shooting is employed with streamers. Seismic acquisition has evolved over time, and with better technologies in place the reliability of seismic surveys has been increasing. The 4-D seismic technology, the newest addition, is based upon time-varying solutions to the data gathered; the better the acquisition, the better the corresponding analysis. The various seismic acquisition techniques depend on where the survey is being carried out; surveys have effectively been carried out on land, at sea and in transition zones. The various techniques applied are: 2-D seismic surveys, which employ seismic maps based on time and depth, with groups of seismic lines acquired at significant gaps between adjacent lines; 3-D seismic surveys, a cubical arrangement of different slices assembled using computer algorithms and viewed in software, where different surveys carried out at closely spaced line locations over the area are combined to form a cube; and 4-D seismic surveys, a relatively new technology that extends the 3-D survey by taking into account the changes happening in the subsurface strata over the production years, with time as the fourth dimension. This can be very beneficial when determining well locations in field development.
Processing of seismic data is the most important aspect, since it underpins the potential of the interpretation process. Processing is mainly done through various analyses, which are chiefly mathematical functions fed into computers. A major part of processing is done simultaneously with acquisition. The data collected can be demultiplexed, convolved or deconvolved. This is dealt with further in the project. Seismic data processing uses the concepts of geometrical analysis and the powerful techniques of Fourier analysis. Digital filtering theory and the practical application of digital techniques to enhance images of subsurface geology can be applied to virtually any information sampled in time. The basic aspects of processing are to recognize and remove noise from the signal, correct the Normal Moveout (NMO), and stack the data to form a seismic image that can be used for further study. Interpretation follows acquisition and processing of data. The structural interpretation of seismic images determines all decisions in hydrocarbon exploration and production. Since drilling an exploration well proves costly, maximum information is derived from the seismic data to establish an opinion about the probability of finding petroleum in the structures. However, drilling is required to verify whether the structures are petroleum rich or not. Thus the main challenge is to establish a model which includes geologically reasonable solutions. Computer-aided seismic interpretation has attracted much interest in recent years. Various petroleum organizations have recommended unique and highly sophisticated software, which can offer high reliability. However, automating the whole seismic process is an impossible job due to the high heterogeneity and varying contrasts between data sources in different parts of the world. Horizon tracking and autopicking are gaining interest among various researchers and developers.
This has not yet been achieved successfully. This project aims to study the various problems faced in horizon tracking while trying to execute an automated seismic interpretation process. Horizon tracking is basically carried out through autotrackers, which are either feature based or correlation based. Feature-based tracking looks for similar configurations, while the correlation method is more robust and less sensitive to noise. However, tracking across discontinuities is a difficult job. Thus the project is aimed at finding a way to track horizons across fault lines. CHAPTER 2 LITERATURE REVIEW SEISMIC EXPLORATION SURVEY Seismic exploration surveys in the field of oil and gas are an application of reflection seismology. It is a method to estimate the properties of the earth's subsurface from reflected seismic waves. When a seismic wave travels through rock, each rock layer presents an acoustic impedance. A wave travels through materials under the influence of pressure. Because the molecules of the rock material are bound elastically to one another, the excess pressure results in a wave propagating through the solid. A seismic survey can reveal pockets of lower-density material and their location, although it cannot be guaranteed that oil will be found in these pockets, since the presence of water is also possible. Acoustic impedance is given by Z = pV, where p is the density of the material and V the acoustic velocity of the wave. Acoustic impedance is important in: the determination of acoustic transmission and reflection at the boundary of two materials having different acoustic impedances; the design of ultrasonic transducers; and assessing the absorption of sound in a medium. Thus the acoustic impedance of each rock formation in the subsurface will be different due to the different densities. This density contrast is helpful in tracking the waves in the subsurface, and an acoustic impedance chart is obtained, known as a seismic chart.
However, the impedances recorded by the instruments on the surface are not exact, due to noise and other factors that change the impedance factor of the wave. When a seismic wave is reflected off a boundary between two materials with different impedances, some energy is reflected while some continues through the boundary. The amplitude of the reflected wave can be predicted by multiplying the amplitude of the incoming wave by the seismic reflection coefficient R = (Z1 - Z0) / (Z1 + Z0), where Z1 and Z0 are the impedances of the two rock formations. Similarly, the amplitude of the wave travelling on through the formation can be determined using the transmission coefficient T = 2Z0 / (Z1 + Z0), so that T = 1 - R. By noting the changes in the strength of the wave, we can infer the change in acoustic impedance and thus the change in density and elastic modulus. This change can be used to identify structural changes in the subsurface and thus predict the formation based upon impedances. It might also happen that when the seismic wave hits the boundary between two surfaces it is reflected or bent; this is governed by Snell's Law. The reflection and transmission coefficients are found by applying the appropriate boundary conditions and using the Zoeppritz equations. These are a set of equations which determine the partitioning of energy in a wavefield at a boundary across which the properties of the rock or the fluid change. They relate the amplitudes of P-waves and S-waves on each side of the surface. The Zoeppritz equations have been useful in deriving workable approximations in Amplitude versus Offset (AVO). These studies attempt, with some success, to predict the fluid content of rock formations. The parameters to be used for each seismic survey depend on various variables, including whether the survey is being carried out on land or in a marine environment. Other geophysical issues such as sea depth and terrain also play a big role, as do safety issues.
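The impedance and coefficient formulas above can be put together in a short sketch. These are the standard normal-incidence expressions, following the Z0/Z1 notation used in the text; the density and velocity values are made-up illustrative numbers, not measurements from the essay.

```python
# Normal-incidence reflection/transmission from acoustic impedances.
def impedance(density, velocity):
    """Z = density * velocity."""
    return density * velocity

def reflection_coefficient(z0, z1):
    """R = (Z1 - Z0) / (Z1 + Z0), Z0 being the incident-side layer."""
    return (z1 - z0) / (z1 + z0)

def transmission_coefficient(z0, z1):
    """T = 2 * Z0 / (Z1 + Z0), which equals 1 - R."""
    return 2.0 * z0 / (z1 + z0)

# Illustrative two-layer contrast (values are assumptions, not field data)
z_upper = impedance(2400.0, 2700.0)  # e.g. a shale layer
z_lower = impedance(2200.0, 2500.0)  # e.g. a sandstone layer
r = reflection_coefficient(z_upper, z_lower)  # negative: impedance drops
```

A negative R means the reflected wave is polarity-reversed, which is itself a useful interpretation clue at a boundary where impedance decreases downward.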
A seismic exploration survey is broadly divided into three steps: seismic data acquisition, seismic data processing, and seismic data interpretation. Each step in the survey needs highly reliable and sophisticated equipment that can deliver the best results, because the drilling of exploration wells is often based on these results. Since drilling can prove costly, capital investment is one of the major concerns of every company. SEISMIC DATA ACQUISITION Seismic data acquisition refers to the collection of seismic data. The acquired data is then sent to a computer network, where processing takes place. With better technologies, better acquisition surveys have become possible. Generating and recording seismic data requires: receiver configurations, including geophones, or hydrophones in the case of marine acquisition; transmitter configurations, laid out according to the predecided survey configuration; the orientation of streamers in the case of marine surveys; and a proper computer network to carry the information from the receivers to the processing network. When a survey is conducted, seismic waves generated by dynamite or vibrators travel through the subsurface strata and are in turn reflected or refracted. These reflected waves and the time they take to complete one interval are noted by the receivers. The receiver configuration has to be well determined so that maximum data can be collected over an area. ACQUISITION ON LAND In a typical land seismic acquisition process, the survey is planned in an attempt to minimize terrain constraints. It basically includes the sensor layout scheme and the source deployment scheme. The source deployment scheme configures the number of transmitters used to send the signal down into the subsurface. One or more transmitters can be used depending on the programme employed.
Similarly, one or many receivers can be employed to collect the reflected wave data. The receiver configuration is an important aspect. The configuration can be such that the closest receiver gathers only the high-amplitude wave on the first line of receivers, or it can differ based on signal strength and the seismic line survey. The data collected through receivers or geophones is converted to binary data that is then handed over to the computer network for processing. MARINE ACQUISITION Marine acquisition involves processes such as wide-azimuth marine acquisition and coil shooting. Wide-azimuth surveys provide a step-change improvement in the imaging of seismic data, offering illumination in complex geology and natural attenuation of some multiples. Azimuth shooting acquires data in all directions, and this acquisition technique can help in generating 3-D models. Coil shooting acquires marine seismic data while following a circular path, improving upon multi- and wide-azimuth techniques; it involves steering the vessel, streamers and sources in a fashion which delivers a greater range of azimuths. Sometimes single-sensor recording while steering the vessel in different directions has proved more beneficial for noise attenuation and signal fidelity. Different seismic surveys can be classified as two-dimensional, three-dimensional and four-dimensional surveys. TWO DIMENSIONAL SURVEYS In such a survey, seismic data is acquired simultaneously along a group of seismic lines separated by gaps, usually 1 km or more. A 2-D survey contains many lines acquired orthogonally to the strike of the geological structures, with a minimum number of lines acquired parallel to the structures to allow line-to-line tying of the seismic data and the interpretation and mapping of structures.
This technique generates a 2-D cross-section of the deep seabed and is used primarily when initially reconnoitering for the presence of oil and gas reservoirs. THREE DIMENSIONAL SURVEYS Multiple streamers shoot on closely spaced lines. From seismic data gathered at close spacing, a 3-D seismic cube can be formed. This innovation requires high-performance computers and advanced data processing techniques. The computer-generated model can be analyzed in greater detail by viewing it in vertical and horizontal time slices, or even as an inclined section. In a standard 3-D seismic survey, the streamers are placed about 50-150 meters apart, each streamer being 6-8 kilometers long. Airguns are fired every 10-20 seconds. However, many other objectives and economic constraints determine the specific acquisition parameters. FOUR DIMENSIONAL SURVEYS The 4-D survey is also called the time-lapse survey. It involves processing repeated seismic surveys over an area of a reservoir under production. The changes occurring in the reservoir due to production and injection can be determined over time, which further helps in field development of the reservoir. One important aspect of a 4-D survey is that there should be minimal difference in the position of the seismic lines when a repeated survey is done after some time. Significant cost savings can be achieved by the use of 4-D surveys through better planning and understanding of reservoir characteristics. DIFFERENT SHOT METHODS The common shot gather uses one transmitter source (Vibroseis or explosives) and many receivers (geophones) placed at some distance from the source. The geophones are placed at equal spacings from each other. The common midpoint gather is the most widely used survey technique. It uses one transmitter placed at the midpoint exactly above the formation area to be surveyed, with receivers set in all directions surrounding the transmitter.
The common offset gather uses a multiple shot and receiving technique. The common receiver position gather, as the name states, has only one receiver. While many shots are employed, the various seismic waves reflecting back to the receiver have different amplitudes and frequencies, and thus can be distinguished and collected separately. COMMON MIDPOINT METHOD It was discovered that reflection seismic sections can be improved by repeated sampling of the subsurface formations using different travel paths of the seismic waves. This can easily be achieved with the common midpoint method: increasing the spacing between source and receiver about a common midpoint generates repeated data of the same subsurface coverage. The processing of a common midpoint gather system requires sorting the data from the common shot gather into a common midpoint gather. In this method, an inclination of the data occurs since the wavefronts reaching the farther receivers arrive at an inclined angle; this results in a much longer raypath than for a receiver placed close to the shot point. In order to refer the recordings to a common depth point, one needs to correct the data for the differing travel times. This is known as Normal Moveout Correction (NMO). After NMO, the summation of the various wavepaths gives a horizontal section at a travel time equal to zero; this is known as the stacking procedure. SEISMIC DATA PROCESSING A reference seismic processing sequence is applied to input raw gathers to obtain reference seismic output data. A series of test seismic processing sequences are applied to the input raw gathers to obtain test seismic output data. The RMS value of the test seismic output data is normalized to that of the reference seismic output data on a trace-by-trace basis.
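The NMO correction described above can be sketched directly: for a flat reflector, the two-way time at source-receiver offset x follows the hyperbola t(x) = sqrt(t0^2 + x^2/v^2), and the correction removes the extra time t(x) - t0 so that all traces in the gather line up at t0 before stacking. The numbers below are illustrative assumptions.

```python
# Sketch of the Normal Moveout (NMO) hyperbola and correction.
import math

def nmo_time(t0_s, offset_m, velocity_m_s):
    """Two-way time at a given offset: t(x) = sqrt(t0^2 + (x/v)^2)."""
    return math.sqrt(t0_s ** 2 + (offset_m / velocity_m_s) ** 2)

def nmo_correction(t0_s, offset_m, velocity_m_s):
    """Time shift removed from the trace so it aligns with the zero-offset trace."""
    return nmo_time(t0_s, offset_m, velocity_m_s) - t0_s

# Zero-offset time 1.0 s, offset 2000 m, stacking velocity 2000 m/s (assumed values)
t_far = nmo_time(1.0, 2000.0, 2000.0)  # sqrt(2) ~ 1.414 s; the far trace arrives late
shift = nmo_correction(1.0, 2000.0, 2000.0)
```

After each trace in the gather is shifted by its own correction, summing the traces (stacking) reinforces the reflection and suppresses incoherent noise.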
The normalized difference between the test and the reference seismic output data is calculated on a sample-by-sample basis in the time domain and displayed on color-coded plots in the time-scale format over the CDP range. Linear regression is performed for each CMP gather to obtain the stack and the zero offset calculated for each time index, and the difference is recorded. The normalized differences between the error for the test and the reference sequences are calculated and displayed on color-coded plots. The order of sensitivity for each processing step in the reference processing sequence is determined. If necessary, any processing step is rejected and the reference processing sequence is revised. WELL-DRIVEN SEISMIC Integrating well data throughout the seismic workflow for superior imaging and inversion. Well-Driven Seismic (WDS) is the integration of borehole information throughout the surface-seismic workflow to provide better seismic images, more reliable stratigraphic interpretation, and greater confidence in global reservoir characterization. Wireline logs (compressional, shear, and density), VSPs, and surface-seismic data represent the elastic response of the earth at various resolution scales. A principle of the Well-Driven Seismic concept is that these data should be processed with respect to their mutual consistency, i.e., the seismic data must tie with logs and VSPs in time and depth. The aim of the Well-Driven Seismic method is to use all the available borehole information to optimize the entire seismic workflow, delivering seismic images of superior resolution (in time or depth) and calibrated prestack seismic amplitudes that are suitable for inversion and detailed seismic reservoir description.
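The trace-by-trace RMS normalization mentioned above can be sketched as follows: each test trace is scaled so that its RMS amplitude matches the corresponding reference trace, after which sample-by-sample differences are meaningful. The helper names and sample values are illustrative, not from any real processing system.

```python
# Sketch: normalize a test trace's RMS to match a reference trace's RMS.
import math

def rms(trace):
    """Root-mean-square amplitude of a list of samples."""
    return math.sqrt(sum(s * s for s in trace) / len(trace))

def normalize_to_reference(test_trace, reference_trace):
    """Scale the test trace so its RMS equals the reference trace's RMS."""
    scale = rms(reference_trace) / rms(test_trace)
    return [s * scale for s in test_trace]

# Test trace is twice as strong; after normalization the RMS values match
reference = [1.0, -1.0, 1.0, -1.0]
test = [2.0, -2.0, 2.0, -2.0]
balanced = normalize_to_reference(test, reference)
```

Only the overall amplitude level is equalized; the waveform shape of the test trace is untouched, so residual differences reflect the processing sequence rather than gain.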
Earth properties from logs, VSPs, and surface-seismic data. The Well-Driven Seismic workflow invokes proprietary software and analysis techniques from WesternGeco and Schlumberger to derive an earth property model from the integrated analysis of wireline logs, VSPs, and surface-seismic data. The property model includes compressional and shear velocities, attenuation (Q) factors, VTI anisotropy parameters, and interbed multiple mechanisms, and is derived at the well location (or locations) and extended across the survey area in 3D. The 3D model is applied in the seismic processing sequence for true amplitude and phase recovery, deconvolution, multiple attenuation, anisotropic prestack time and depth imaging (including of converted-wave data), AVO analysis, and 4D processing. WELL DATA FOR HIGH RESOLUTION SEISMIC IMAGING Well information can improve many key stages of the conventional seismic processing sequence. VSP data provide excellent discrimination of primary and multiple events, and are used to guide surface-seismic multiple attenuation processes. Furthermore, interbed multiple mechanisms identified in separated VSP wavefields are used as input to data-driven multiple attenuation processes, such as the WesternGeco Interbed Multiple Prediction (IMP). Inverse-Q operators derived from VSP data (and new methods for walkaway VSP data) can significantly improve seismic resolution. WesternGeco employs a proprietary deconvolution process, constrained by the signal-to-noise level in the seismic data and by the well reflectivity, to further enhance seismic resolution. The calibrated anisotropic velocity model is vital for prestack time and depth migration (including of converted waves) to improve steep-dip imaging, lateral positioning of reflectors, signal-to-noise ratios, and seismic resolution.
OPTIMIZED WELL TIES The Well-Driven Seismic method optimizes the processing sequence and the processing parameters within that sequence to tie the seismic data to the wells. Attributes based on the well tie and on the quality of the extracted wavelets are used for deterministic seismic processing decisions. Space-adaptive wavelet processing corrects 3D seismic data to true zero phase between well locations, and stabilizes residual spatial wavelet variations. BOREHOLE-CALIBRATED SEISMIC INVERSION The Well-Driven Seismic approach provides greater sensitivity to seismically derived reservoir attributes through calibrated AVO or acoustic impedance inversion. The well data are particularly important for successful processing of seismic data for inversion. Compensation for the offset-dependent effects of Q, geometric spreading, transmission losses, and anisotropy is essential for processing data over very long offsets (where the strongest AVO expression of the reservoir may be visible). The method calibrates the AVO signatures in the prestack seismic data against the offset-dependent amplitude response synthesized from well logs and/or the response expressed in the walkaway VSP, to provide assurance of the seismic processing sequence. With the seismic processing sequence optimized for resolution and consistency with the well data, Well-Driven Seismic processing is a vital prerequisite for acoustic impedance or AVO inversion and subsequent reservoir characterization. AVO AND INVERSION Amplitude variation with offset (AVO) has been used extensively in hydrocarbon exploration over the past two decades. Traditional AVO analysis involves computation of the AVO intercept, gradient, and higher-order AVO term from a fit of P-wave reflection amplitude to the sine squared of the angle of incidence. This fit is based on the approximate P-wave reflection coefficient formulation in intercept-gradient form, given by Bortfeld (1961) and Shuey (1985), among others.
Under the assumption of a background P-S velocity ratio, the AVO intercept and gradient values can also be combined to obtain additional AVO attributes such as pseudo-S-wave data, Poisson's ratio contrast, and others. AVO intercept and pseudo-S-wave data are also used in conjunction with prestack waveform inversion (PSWI) in a hybrid inversion scheme. Hybrid inversion is a combination of prestack and poststack inversion methodologies. Such a combination allows efficient inversion of large data volumes in the absence of well information. Amplitude Variation with Offset (AVO) inversion is a prestack technique that is readily applied to seismic gathers but is still largely under-utilised in the exploration community, despite its ability to effectively discriminate between fluid and lithology effects. AVO inversion is equally applicable to both 2D and 3D seismic data in time or depth, provided that sufficient care has been taken to preserve amplitudes during processing. A reliable velocity model is also a critical component of the AVO process, as accurate angle information is a prerequisite for AVO inversion. The more accurate the angles, the better the partitioning of amplitudes into P-wave and S-wave reflectivities. In addition, both angle and ray path information can be incorporated in a variety of model-based amplitude corrections that are preferable to, and often more accurate than, scalars derived from empirical equations. The inversion process is then performed, completing in about the same time as a conventional stack. The resulting outputs are a series of AVO reflectivity sections or volumes determined by the Zoeppritz approximation used. Fluid Factor is one of the most useful attributes derived from AVO inversion, due to its ability to make such distinctions and directly identify hydrocarbons.
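The intercept-gradient fit described above can be sketched with the two-term Shuey approximation, R(theta) ~ A + B*sin^2(theta): amplitude-versus-angle samples are regressed against sin^2(theta) by ordinary least squares to recover intercept A and gradient B. The synthetic gather below is generated from assumed values (A = 0.1, B = -0.3) purely to illustrate that the fit recovers them; none of these numbers come from the essay.

```python
# Sketch: least-squares fit of the two-term Shuey model R(theta) = A + B*sin^2(theta).
import math

def fit_intercept_gradient(angles_deg, amplitudes):
    """Return (A, B) from a straight-line fit of amplitude vs sin^2(angle)."""
    x = [math.sin(math.radians(a)) ** 2 for a in angles_deg]
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(amplitudes) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, amplitudes))
    gradient = sxy / sxx            # B: slope against sin^2(theta)
    intercept = mean_y - gradient * mean_x  # A: zero-angle reflectivity
    return intercept, gradient

# Synthetic angle gather built from assumed A = 0.1, B = -0.3
angles = [5, 10, 15, 20, 25, 30]
amps = [0.1 - 0.3 * math.sin(math.radians(a)) ** 2 for a in angles]
A, B = fit_intercept_gradient(angles, amps)  # recovers ~0.1 and ~-0.3
```

On real gathers the samples are noisy, so A and B come with fitting uncertainty; cross-plotting A against B is the usual next step for separating fluid from lithology effects.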
Multi-Measurement Reservoir Definition workflows include the following components:
- Reservoir Synthetic Modeling: forward modeling to generate prestack synthetics from geological models (Anivec prestack elastic modeling)
- Prestack Waveform Inversion (PSWI): full-waveform prestack inversion, a non-linear process that estimates an elastic model (Vp, Vs, and density) from prestack seismic data using a genetic algorithm
- AVO Modeling and Analysis
- AVO Conditioning: conditions angle-band stacks prior to performing AVO analysis
- AVO Inversion: elastic impedance modeling and inversion from angle-band cubes
- Space-Adaptive Inversion: space-adaptive wavelet processing and inversion to relative seismic impedance
- Elastic Impedance Inversion: combining low-frequency trends with relative inverted impedance cubes to generate absolute impedance
- Integrated Rock Physics Modeling: fluid and rock property analysis, modeling, and substitution
- Rock Property Calibration: generating rock properties from seismic using transforms derived from petrophysical analysis of well data

The outputs are high-resolution absolute acoustic impedance, shear impedance, and density volumes consistent with the seismic data and the well-log data. The inverted elastic parameter volumes are used for detailed interpretation of lithofacies and pore-fluid content in the subsurface. Combined with rock physics modeling and rock property mapping through lithology classification and joint porosity-saturation inversion, the method provides a powerful tool for quantitative reservoir description and characterization. The results are the most-probable litho-class, porosity, and saturation, with uncertainties of prediction, at every sample point in the 3D volume.

SIGNAL PROCESSING
Some elements of the seismic data processing sequence are virtually universal, regardless of whether the intention is to perform time imaging, depth imaging, multicomponent imaging, or reservoir studies.
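One such near-universal element is frequency filtering. A minimal frequency-domain sketch follows (a brick-wall band-pass for illustration only; production noise attenuation is far more elaborate): a synthetic trace sampled at 500 Hz is cleaned of low-frequency swell-type noise and high-frequency ambient noise by keeping only the 10-60 Hz band.

```python
import numpy as np

# Minimal sketch of band-pass filtering, one near-universal signal
# processing element: a brick-wall 10-60 Hz pass band applied in the
# frequency domain to a synthetic trace sampled at 500 Hz. The 30 Hz
# "signal" survives; the 2 Hz and 150 Hz "noise" components are removed.
fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
signal_band = np.sin(2 * np.pi * 30 * t)                 # wanted 30 Hz event
noise = 0.8 * np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 150 * t)
trace = signal_band + noise

spectrum = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
spectrum[(freqs < 10.0) | (freqs > 60.0)] = 0.0          # brick-wall pass band
filtered = np.fft.irfft(spectrum, n=len(trace))          # ~= signal_band
```

Because all three components here fall exactly on FFT bins, the recovery is essentially exact; with real data, tapered filter edges are used to avoid ringing.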
Data conditioning and signal processing form the foundation of the seismic processing workflow. Signal processing encompasses a wide variety of technologies designed to address numerous challenges in the processing sequence, from data calibration and regularization through to noise attenuation, demultiple, and signal enhancement. It includes:
- Data calibration and regularization
- Noise attenuation
- Multiple attenuation
- Signal enhancement

TIME PROCESSING
Prestack time migration (PSTM) may not be the most sophisticated imaging method available, but it remains the most commonly used migration algorithm today. Kirchhoff PSTM combines improved structural imaging with amplitude preservation of prestack data in readiness for AVO, inversion, and subsequent reservoir characterization. Advances in this field also mean that time imaging, more than ever before, is an ideal first step in a depth imaging workflow, reducing the number of velocity model building iterations and decreasing overall turnaround time. It includes:
- Imaging: regularization, migration, and datuming techniques
- Statics portfolio
- Velocities and moveout
- Enhanced migration amplitude normalization

DEPTH PROCESSING
Depth imaging is the preferred seismic imaging tool for today's most challenging exploration and reservoir-delineation projects. In areas of structural or seismic velocity model complexity, many of the assumptions underpinning traditional time-domain processing are invalid and can produce misleading results; typical situations are heavily faulted sequences and salt intrusions. In these cases, only the careful application of 3D prestack depth imaging can be relied on to accurately delineate geological structure, aiding risk assessment and helping operators to improve drilling success rates.
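The "velocities and moveout" step listed under time processing above can be illustrated with a toy normal moveout (NMO) correction. This is a sketch only: a reflection recorded at offset x and time t(x) = sqrt(t0^2 + x^2/v^2) is mapped back to its zero-offset time t0 by resampling the trace; stretch muting, interpolation quality, and 3D geometry are all omitted.

```python
import numpy as np

# Toy normal moveout (NMO) correction: each output sample at zero-offset
# time t0 is read from the input trace at the hyperbolic moveout time
# t(x) = sqrt(t0^2 + (x / v)^2), using linear interpolation.
def nmo_correct(trace, offset, v_nmo, dt):
    t0 = np.arange(len(trace)) * dt                    # zero-offset times
    t_x = np.sqrt(t0 ** 2 + (offset / v_nmo) ** 2)     # hyperbolic moveout
    return np.interp(t_x, t0, trace, right=0.0)        # resample the trace

# Usage: a spike recorded at ~0.943 s on a 2000 m offset trace
# (t0 = 0.5 s, v = 2500 m/s) flattens back to ~0.5 s after correction.
dt, v, x = 0.004, 2500.0, 2000.0
trace = np.zeros(300)
t_recorded = np.sqrt(0.5 ** 2 + (x / v) ** 2)          # ~0.943 s
trace[int(round(t_recorded / dt))] = 1.0
corrected = nmo_correct(trace, offset=x, v_nmo=v, dt=dt)
print(np.argmax(corrected) * dt)                       # peak back near 0.5 s
```

Applying this per offset with the correct velocity flattens reflections across a gather, which is exactly what makes stacking (and the AVO analysis above) possible.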
TECHNOLOGY
From a technology perspective, high quality depth imaging has two main aspects: the ability to build detailed and accurate velocity models, coupled with a superior imaging algorithm.

VELOCITY MODEL BUILDING
Velocity model building is a critical element in imaging the Earth. Tomography provides high-resolution, calibrated velocity and anisotropic Earth models, and refraction tomography detects shallow velocity anomalies. These algorithms work with any acquisition configuration and can be applied to any geological setting. They are also compute-intensive, and are therefore integrated with an interactive graphics environment for rapid and accurate quality control of interim and final results.

VECTOR PROCESSING
Conventional seismic recording uses a single scalar measurement of pressure or vertical displacement throughout the 2D or 3D survey to derive images and models of the subsurface. Subsequent processing and inversion steps can be linked to relative shear-wave contrasts in the subsurface using rock property relationships. However, it is sometimes impossible to meet a survey's seismic imaging or reservoir definition objectives using compressional (P) waves alone.

SEISMIC DATA INTERPRETATION
Computer-aided interpretation is the mainstay of 3D seismic interpretation because of the sheer volume of data involved. The important services are:
- IIWS (Integrated Intelligence Workstation) based interpretation of 2D and 3D data
- Structural mapping
- Integrating seismic attributes with wireline, core, and reservoir data for reservoir characterisation
- Seismic modeling
- 3D visualisation and animation
- Palinspastic restoration

Structural restoration is an established method by which to validate seismic interpretations.
In addition, palinspastic reconstruction can help identify potential reservoir depocentres, enable the measurement of catchment areas at the time of hydrocarbon migration, and lead to an improved understanding of complex hydrocarbon systems such as those in deepwater settings. Restoration is achieved by the sequential backstripping of the present-day depth model. Upon removal of each successive layer, the remaining surfaces within the model are adjusted to account for the removed layer.
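The sequential backstripping just described can be caricatured in a few lines. This is a toy sketch only: real restorations also decompact the underlying layers and adjust for isostasy and fault movement, none of which is modeled here.

```python
import numpy as np

# Toy sketch of one backstripping step: remove the top layer of a
# present-day depth model and shift the remaining horizons up by its
# thickness. Decompaction, isostasy, and faulting are omitted.
def backstrip_top(horizon_depths):
    d = np.asarray(horizon_depths, dtype=float)   # depths (m), shallowest first
    thickness = d[1] - d[0]                       # thickness of removed layer
    return d[1:] - thickness                      # restored remaining horizons

restored = backstrip_top([100.0, 400.0, 900.0])
print(restored)                                   # [100. 600.]
```

Iterating this step layer by layer walks the model back through time, which is what allows depocentres and migration-era catchment areas to be measured.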

Thursday, October 24, 2019

America Needs a Motorcycle Helmet Law Essay -- Argumentative Persuasive

Millions of people all over the United States choose motorcycles over automobiles for the thrill, speed, and high performance capabilities. On the other hand, motorcycles are not at all the safest means of transportation. Motorcycles do not provide the rider with the outer protection that cars provide; therefore, when one crashes, the results are usually much more serious. Head injuries are responsible for 76% of fatalities in motorcycle crashes, many of which could have been prevented had the rider been wearing a helmet. For this reason, many states have adopted a motorcycle helmet law. The law states that every passenger must wear a helmet at all times when riding on a motorcycle. This law has created a great deal of controversy. One side supports the law, believing that it protects motorcyclists from danger and saves the economy a great deal of money. The other side argues that the law is unconstitutional and violates our right to freedom. However, statistics show overwhelming support in favor of the motorcycle helmet law. Although wearing helmets cannot prevent motorcycle crashes, they can greatly reduce the number of deaths caused by head injury as well as lowering taxes, insurance rates, and health care costs. Therefore, the helmet law should be put into effect in every state across the United States. Helmets drastically reduce the tremendous number of deaths caused by head injuries as well as reducing the severity of any ...

Wednesday, October 23, 2019

Microchips in Humans Essay

In today's society, technology continues to find new ways to protect our children and families. Several devices have already been developed to track children when they are away from home. These devices work by GPS signal to track the movement of the child and are worn externally or embedded in an item of clothing. There are also devices already approved for implantation in humans: VeriChip was the only Food and Drug Administration-approved human-implantable microchip, for use in medical purposes (DHHS pg. 71702-71704). These developments have sparked a debate over whether we should consider implanting microchips in humans for tracking and safety reasons, not just medical purposes. Today there are microchips implanted into pets, yet they do not have a GPS signal and work only once the animal is found, not to track its location. The same kinds of devices may be offered in the future for humans. Our government should never allow microchips to be implanted in humans for any purpose, much less mandate their use. External tracking devices currently available can track children for safety without the need for implanting a microchip in the body. An implanted chip would serve no purpose in tracking a child's location; as stated in this article: chip implants would be of little use in tracking a missing child as readers only have a limited range (Lane par 14). The FDA believes that a person's overall health may be affected by tissue reaction, movement of the implanted chip, and failure of the transponder, among many other complications. In addition, it is clear that there are many risks involved in implanting these devices (DHHS pg. 71702-71704). The FDA also has a waiver to be signed that releases it from any liability in regard to these devices, not to mention that research is currently being conducted to see whether these devices can cause cancer in the patients implanted with them.
Today there are microchips ready for implantation into humans, but at what cost to our health? We already know that these devices pose several health risks and are a direct violation of human rights; what we do not know about their effects may be far more dangerous than anticipated. Imposing a law to mandate these devices in the future could be far more costly to society than we will ever know. The opposition believes that these devices will be good for medical patients; however, there are many other ways to track one's medical history without implanting a foreign object into the body. Currently there are medical I.D. bracelets that alert health care providers to any emergencies without the need for implanted microchips. There are too many side effects associated with this device, which frankly is unnecessary and serves no real purpose other than tracking an individual's every move at any given time. Personally, I do not want that kind of power given to anyone. Implanting microchips in humans also raises questions about the right to privacy, as well as health concerns related to implantation. Our right to privacy is defined as the right to freedom from intrusion. If all humans were implanted with microchips, there would be no such thing as privacy as we now know it, not to mention the invasiveness of the surgery for implantation. The continuing presence of the microchip within the individual must also be taken into account when considering our human rights. In combination with the surgery, the implant represents a permanent intrusion on our privacy. With an implanted microchip, your body is wired into a computer with a GPS tracking system that can monitor your movements, every minute of every day, for the rest of your life.
Just imagine how creepy that really sounds. The concept of privacy for anyone implanted would never exist again. To protect our privacy, we need to better understand its value and the purpose it serves. Privacy is an important barrier that gives us space to develop an identity separate from the supervision, assessments, and values of our society. Privacy is crucial for helping us manage all of the pressures that shape the type of person we are, and it also serves as the groundwork for protecting our other fundamental rights. If our right to privacy were compromised, our other rights would soon falter as well. To implant microchips into human beings seems to be a clear case of intrusion into our bodies and our lives. Another reason our government should not allow microchips to be implanted in humans is that it would serve no real purpose except to track our movements, and no one should have that much power over any individual. We need to consider the bigger picture, as stated in this article: imagine what the government could do with this kind of technology. If it wanted to, it could use this technology to track literally every movement and behavior of everyone at any given time (Slavo par 15). That this technology is even being considered in our society should be a crime. The government should not allow or mandate the implantation of microchips in humans for any reason. There are many reasons why these devices should not be implanted but instead should be outlawed: they not only pose a health risk to patients but also violate our right to privacy. Implanting microchips in humans is morally, ethically, and logically wrong, and it would serve no real purpose other than tracking our movements, which should never be allowed. Personally, I do not want anyone to have that much power over me, and neither should you.

Tuesday, October 22, 2019

Free Essays on Great Britain

United Kingdom, constitutional monarchy in northwestern Europe, officially the United Kingdom of Great Britain and Northern Ireland. Great Britain is the largest island in the cluster of islands, or archipelago, known as the British Isles. England is the largest and most populous division of the island of Great Britain, making up the south and east. Wales is on the west and Scotland is to the north. Northern Ireland is located in the northeast corner of Ireland, the second largest island in the British Isles. The capital of the United Kingdom is the city of London, situated near the southeastern tip of England. People often confuse the names for this country, and frequently make mistakes in using them. United Kingdom, UK, and Britain are all proper terms for the entire nation, although the term Britain is also often used when talking about the island of Great Britain. The use of the term Great Britain to refer to the entire nation is now outdated; the term Great Britain, properly used, refers only to the island of Great Britain, which does not include Northern Ireland. The term England should never be used to describe Britain, because England is only one part of the island. It is always correct to call people from England, Scotland, or Wales British, although people from England may also properly be called English, people from Scotland Scottish, and people from Wales Welsh. The United Kingdom is a small nation in physical size. At 244,110 sq km (94,251 sq mi), the United Kingdom is roughly the size of Oregon or Colorado, or twice the size of New York State. It is located as far north in latitude as Labrador in North America, but, like the rest of northern Europe, it is warmed by the Gulf Stream flowing out of the North Atlantic Ocean. The climate, in general, is mild, chilly, and often wet. Rain or overcast skies can be expected for up to 300 days per year. These conditions make Britain lush and green, with rolling plains in the s... 

Monday, October 21, 2019

Abstinence essays

Abstinence essays In the article "Abstinence" by Ray Hoskins (Slife, 1994) it is stated that abstinence is the only way that a person with an addiction can recover. On the other hand, Michael S. Levy states in his article "Individualized Care For The Treatment Of Alcoholism" (Slife, 1994) that abstinence may not be the best way to treat an addiction, and that the best way to treat an addiction depends on each individual's specific needs. It is clear that the only way a person can fight and conquer an addiction is to completely distance themselves from the cause of that addiction; in the case of an alcoholic, they must distance themselves from alcohol. To understand abstinence we must first understand what the terms addict and addiction mean. According to Webster's Dictionary (1996), addict is defined as "surrendering (oneself) habitually or compulsively to something, as caffeine or alcohol." To break down this definition, we can say that an addict is a person who has formed a habit of relying on a substance (alcohol) or an act (sex). With the understanding of what an addict is, we can define addiction as a state of mind in which one depends upon a substance (alcohol) or an act (sex) in a way that affects their daily life, usually negatively. For example, in Levy's "Individualized Care For The Treatment Of Alcoholism" (Slife, 1994), case vignette 5 states: "L...a 30 year old, married male...described his drinking most every day, but was most concerned about his heavy drinking with loss of control, which generally occurred three times a week." In this case we can see that L was showing signs of addiction to alcohol: he had formed a habit, drinking every day, and it affected his life through loss of control. When a person does become addicted to a substance such as alcohol, the only way to successfully stop the addiction is through abstinence, or stopping completely ...