Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics (ISSN 2538-371X), Vol. 34, No. 3, 2008

Determination of basement geometry using 2-D nonlinear inversion of the gravity data (Article 27389)

Inverse modeling is one of the most elegant geophysical tools for obtaining 2-D and 3-D images of geological structures.
Determination of bedrock geometry by nonlinear inverse modeling of gravity data is the aim of this paper.
The algorithm uses a nonlinear iterative procedure to simulate the bedrock geometry. In the first step, the nonlinear problem is reduced to a linear one by a suitable approximation and a standard method. The second step is the parameterization of the model. Finally, an initial model is proposed on the basis of geological and geophysical assumptions, and the Jacobian matrix is calculated numerically. In each iteration the inversion improves the model, based on Levenberg-Marquardt's method, by considering the differences between the observed and calculated gravity anomalies.
The usual practice in inverting the gravity anomalies of two-dimensional bodies is to replace their cross sections by an n-sided polygon and to determine the locations of the vertices that best explain the observed anomalies. The initial coordinates of the vertices are assigned and then modified iteratively so as to minimize the differences between the observed and calculated anomalies. The estimation of the initial values is a separate and indeed critical exercise, since this selection determines the solution to which the problem converges. Inversion schemes that replace the two-dimensional bodies by a series of juxtaposed prisms, instead of a polygonal cross section, do not require any a priori calculation of the initial values of the parameters that define the outline of the body. This paper presents such an inversion scheme for determining a density interface, such as the basement topography, for an assigned depth Z and density contrast.
The method does not require the input of initial values for any other parameters. It is also applicable to determining structures with a flat top or a flat bottom.
The program determines the depth to the top of the basement surface below each gravity observation point along a profile.
The practical effectiveness of the method is demonstrated by the inversion of synthetic and real examples. The real data were acquired over the construction site of a new line of the Tehran underground railway.
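The iterative scheme described above (a numerically evaluated Jacobian combined with Levenberg-Marquardt damping) can be sketched as follows. This is a generic illustration under stated assumptions, not the authors' code: `forward` stands for any gravity forward model mapping model parameters (for example, prism depths) to anomaly values, and is left abstract here.

```python
import numpy as np

def numerical_jacobian(forward, m, eps=1e-6):
    """Finite-difference Jacobian of the forward operator at model m."""
    g0 = forward(m)
    J = np.zeros((g0.size, m.size))
    for j in range(m.size):
        dm = m.copy()
        dm[j] += eps
        J[:, j] = (forward(dm) - g0) / eps
    return J

def levenberg_marquardt(forward, d_obs, m0, lam=1e-2, n_iter=50):
    """Iteratively update model m to reduce ||d_obs - forward(m)||."""
    m = np.asarray(m0, float).copy()
    for _ in range(n_iter):
        r = d_obs - forward(m)
        J = numerical_jacobian(forward, m)
        # damped normal equations: (J^T J + lam I) dm = J^T r
        dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
        if np.linalg.norm(d_obs - forward(m + dm)) < np.linalg.norm(r):
            m, lam = m + dm, lam * 0.5   # accept step, relax damping
        else:
            lam *= 2.0                   # reject step, increase damping
    return m
```

Raising the damping factor when a step fails and lowering it when a step succeeds is what lets the method interpolate between gradient descent and Gauss-Newton behavior.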
Finally, the results are compared with the geological information.

A review on the theory of electromagnetic exploration methods (Article 27390)

Lithology discrimination using Poisson's and Lame coefficients under reservoir conditions - a laboratory test on samples from a field in the southwest of Iran (Article 27391)

Zonation of two oil reservoirs in Iran with a statistical method based on well log data (Article 27392)

Different methods exist for the zonation of oil reservoirs based on petrophysical data and well logs. Among them are:
permeability-porosity cross-plots and the Pickett and the Soder and Gill methods. In this study a statistical zonation technique has been applied to the Marun and Pazanan oil reservoirs in Iran based on effective porosity, density, sonic, neutron and resistivity data.
Petrophysical interpretation of the results can reveal zones of high porosity and permeability and the behavior of productive zones along the reservoirs. The present study used well log data (resistivity, neutron, density, sonic and radioactive logs) and the Geolog software to obtain reservoir parameters such as porosity, water saturation, hydrocarbon-bearing zones, lithology and cut-offs for the Marun and Pazanan oil fields.
Based on a variance analysis method proposed by Testerman in 1962, a statistical zonation was programmed and applied to the reservoir parameters to obtain the best zone boundaries. The main advantage of this method is that the limits (or boundaries) of the different zones are determined automatically according to a previously defined condition.
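The core of Testerman's variance-analysis criterion is a search for the boundary that minimizes the pooled within-zone variance (equivalently, maximizes the between-zone variance). A minimal single-boundary sketch follows; the full method applies this recursively along the well and tests each boundary for statistical significance, which is omitted here.

```python
import numpy as np

def best_boundary(values):
    """Index k splitting a log into two zones [0:k) and [k:n) that
    minimizes the pooled within-zone sum of squares (equivalently,
    maximizes the between-zone variance, as in Testerman's criterion)."""
    v = np.asarray(values, float)
    best_k, best_w = 1, np.inf
    for k in range(1, v.size):
        a, b = v[:k], v[k:]
        w = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
        if w < best_w:
            best_k, best_w = k, w
    return best_k
```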
On the optimum method for estimation of the regularization parameter of downward continuation in the problem of geoid computation without the Stokes formula (Article 27393)

One of the main steps in geoid computation without the Stokes formula is the downward continuation of the harmonic residual observables from the surface of the Earth to the surface of the reference ellipsoid. This downward continuation is performed via the Abel-Poisson integral and its derivatives. This integral, in which the unknowns, i.e. the harmonic residual potential values on the surface of the reference ellipsoid, appear under the integral sign, is a Fredholm integral equation of the first kind. Its solution is an unstable problem and, like any unstable problem, requires regularization. One of the most important issues of every regularization method is the estimation of the regularization parameter.
The aim of this paper is to compare different methods for estimating the regularization parameter of the Tikhonov regularization method when applied to the downward continuation of incremental gravity observables for geoid computation without the Stokes formula. For this purpose, the following regularization parameter selection methods, which do not require knowledge of the norm of the vector of observation errors, are considered: (i) the Discrepancy Principle (DP), (ii) Generalized Cross-Validation (GCV), (iii) the L-Curve (LC), and (iv) the Flattest Slope (FS). Each method has its own criterion for identifying the optimum regularization parameter, and they can therefore yield different regularization parameters for the same problem. For example, in the DP method the optimum regularization parameter is selected so that the estimated variance factor is least sensitive to variations of the regularization parameter. In the GCV method, the optimum regularization parameter is the one that is least sensitive to the reduction of input information. LC balances the regularization of the solution against the error introduced by the regularization. In FS, the optimum regularization parameter is the one for which the solution of the problem changes least with respect to changes of the regularization parameter.
The aforementioned methods are applied to: (i) real data for geoid computation without the Stokes formula in a geographical region of Iran (longitudes 43.5°E to 64.5°E, latitudes 23.5°N to 40.5°N), based on a methodology that algorithmically consists of a remove step, downward continuation using the ellipsoidal Abel-Poisson integral, a restore step, and application of the ellipsoidal Bruns formula, and (ii) a simulation designed for the same geographical area.
According to the simulation study, the LC method results in (i) the least relative error, (ii) the largest effective number of degrees of freedom and (iii) the regularization parameter closest to the actual one. Therefore, it can be concluded that LC is the most efficient of the tested methods for estimating the regularization parameter, and its application is recommended for geoid computation without the Stokes formula.
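The L-curve idea for Tikhonov regularization can be illustrated with a short sketch. This is not the paper's implementation: it uses a simplified corner criterion (the point of the normalized log-log curve nearest the lower-left origin) in place of the usual maximum-curvature definition.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)

def l_curve_lambda(A, b, lambdas):
    """Pick the regularization parameter at the 'corner' of the L-curve,
    approximated as the normalized log-log point nearest the origin."""
    res, sol = [], []
    for lam in lambdas:
        x = tikhonov_solve(A, b, lam)
        res.append(np.log(np.linalg.norm(A @ x - b) + 1e-300))
        sol.append(np.log(np.linalg.norm(x) + 1e-300))
    res, sol = np.array(res), np.array(sol)
    r = (res - res.min()) / (np.ptp(res) + 1e-300)  # normalized residual axis
    s = (sol - sol.min()) / (np.ptp(sol) + 1e-300)  # normalized solution axis
    return lambdas[int(np.argmin(r**2 + s**2))]
```

On an ill-conditioned system (a discrete analogue of the Abel-Poisson equation of the first kind), the corner value typically gives a far smaller solution error than a near-zero regularization parameter.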
A methodology for combination of GPS/Leveling geoid as boundary data with the gravity boundary data within a gravimetric boundary value problem (Article 27394)

Nowadays, the combination of GPS heights with orthometric heights derived from precise leveling is broadly used to obtain point-wise solutions of the geoid, called the "GPS/Leveling geoid". The "GPS/Leveling geoid" is commonly used to constrain gravimetric geoid solutions in a least squares surface fitting process. In this paper, unlike the usual application, the "GPS/Leveling geoid" is used as boundary data. More specifically, we have developed a methodology for combining the "GPS/Leveling geoid", as boundary data, with other geodetic boundary data within the Fixed-Free Two-Boundary Value Problem (FFTBVP) for geoid computations. The proposed methodology can be explained algorithmically as follows:
1. Removal of the global topography and terrain effects via ellipsoidal harmonic expansion to degree and order 360 plus the centrifugal effect from the gravity boundary data at the surface of the Earth using the known GPS coordinates of the boundary points.
2. Removal of the local terrain masses using analytical solution of the Newton integral in the “cylindrical equiareal map projection” of the reference ellipsoid.
3. Formation of integral equations of the Abel-Poisson type for the harmonic residual gravity boundary data at the surface of the Earth, derived from the aforementioned removal steps.
4. Linearization and discretization of the formulated integral equations.
5. Application of the “GPS/Leveling geoid” within the ellipsoidal Bruns formula as the constraints to the system of equations developed in step (4) for the residual gravity data.
6. Least squares solution of the developed constraint problem of step (5), to estimate incremental gravitational potential values over the solution grid used for linearization in step (4) on the surface of the reference ellipsoid.
7. Restoration of the removed effects of steps (1) and (2) over the grid points on the reference ellipsoid.
8. Application of the Bruns formula to compute point-wise geoid over the grid points on the reference ellipsoid.
As a case study, the proposed method is used for geoid computation within the geographical region of Iran based on gravity and GPS/Leveling geoid boundary data. The numerical results show the success of the methodology.
Finally, the advantages of the proposed methodology can be summarized as follows:
1. Strictly following the principle of Gravimetric Boundary Value Problems (GBVP) for the geoid computation.
2. Increasing the degree of freedom of the GBVP from a statistical point of view.
3. Making the downward continuation step of the GBVP solution more stable.
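Steps 5 and 6 of the algorithm amount to a least squares problem with exact linear constraints. Assuming the GPS/Leveling constraints enter as linear conditions Cx = d on the unknowns x, the constrained solution can be sketched via the KKT (Lagrange multiplier) system. This is an illustrative formulation of constrained least squares, not the authors' implementation.

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Solve min ||Ax - b||^2 subject to Cx = d via the KKT system
    [[A^T A, C^T], [C, 0]] [x; k] = [A^T b; d]."""
    n, p = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T], [C, np.zeros((p, p))]])
    rhs = np.concatenate([A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]
```

The KKT matrix is nonsingular whenever A has full column rank and C has full row rank, so the constrained minimizer is unique.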
Efficiency of coherency attributes in seismic data interpretation (Article 27395)

Seismic coherency is a measure of lateral changes in acoustic impedance caused by variations in structure, stratigraphy, lithology, porosity, and fluid content. It is a geometrical attribute that establishes temporal and lateral relationships with other attributes, and it can be quantified by coherency attributes. When applied to seismic data, coherency attributes measure the continuity between two or more traces within a seismic window. The degree of seismic continuity is an indicator of geological continuity. In the interpretation of 3-D seismic data, a coherency cube can be extremely effective in delineating geological continuity or discontinuity, such as minor faults.
There are three approaches to calculating coherency attributes: cross-correlation, semblance and eigenstructure. They are based on the continuity of traces in a time or depth interval, in which similar traces show high coherency while dissimilar traces show low coherency. The cross-correlation algorithm was proposed by Bahorich and Farmer (1995) and later extended by Marfurt et al. (1998). In this approach, three traces are chosen for the coherency calculation (one as a base and two others in the in-line and x-line directions). First, coherency is calculated in a finite time interval along the in-line direction and then along the x-line direction. Finally, coherency is obtained by multiplying the square roots of the maximum coherency values in each time interval along the in-line and x-line directions. The semblance algorithm was introduced by Marfurt et al. (1998). This method uses as narrow a temporal analysis window as possible, typically determined by the highest usable frequency in the input seismic data. Near-vertical structural features, such as faults, are better enhanced when using a longer temporal analysis window. With this algorithm we are able to balance the conflicting requirements of maximizing lateral resolution and increasing the S/N ratio. The eigenstructure algorithm was presented by Gersztenkorn and Marfurt (1999). This algorithm estimates coherency from the covariance matrix of the traces.
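The semblance measure just described can be sketched in a few lines. This is a generic illustration of the standard semblance formula over an analysis window (the original work used Matlab), not the published implementation.

```python
import numpy as np

def semblance(window):
    """Semblance coherency of an analysis window of shape
    (n_samples, n_traces): 1.0 for identical traces, near zero for
    mutually incoherent traces."""
    a = np.asarray(window, float)
    num = (a.sum(axis=1) ** 2).sum()   # energy of the stacked trace
    den = a.shape[1] * (a ** 2).sum()  # total trace energy times fold
    return num / (den + 1e-300)
```

Sliding this over every sample position and trace neighborhood of a 3-D cube yields a coherency cube in which low values flag lateral discontinuities such as faults.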
To study the ability of coherency attributes to delineate minor faults, we generated several 3-D synthetic seismic cubes with horizontal, dipping, and cross-dipping layers containing minor faults. We also studied the effect of the dominant frequency, the signal-to-noise ratio and the size of the analysis cube on the calculation of coherency attributes, using Matlab software. Using a Ricker wavelet with a dominant frequency of 30 Hz and a signal-to-noise ratio of 1, we found appropriate analysis cube sizes for the coherency calculation for horizontal and for dipping layers. The semblance and eigenstructure algorithms are useful for detecting minor faults in 3-D synthetic seismic cubes.
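The eigenstructure measure used alongside semblance on the synthetic cubes can be illustrated in the same way. This is a simplified sketch of the covariance-eigenvalue idea, not the published algorithm: coherency is taken as the fraction of window energy captured by the largest eigenvalue of the zero-lag covariance matrix.

```python
import numpy as np

def eigenstructure_coherence(window):
    """Eigenstructure coherency of an analysis window of shape
    (n_samples, n_traces): ratio of the largest eigenvalue of the
    zero-lag covariance matrix to the sum of all eigenvalues."""
    a = np.asarray(window, float)
    cov = a.T @ a                  # (n_traces, n_traces) covariance
    eig = np.linalg.eigvalsh(cov)  # eigenvalues in ascending order
    return eig[-1] / (eig.sum() + 1e-300)
```

Unlike plain semblance, this measure stays at 1.0 even when traces differ only by amplitude scaling, which is why it tends to be the most robust of the three on noisy data.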
We applied all three coherency attribute approaches to 3-D real data. The seismic data belong to the Khangiran gas field in NE Iran. The main reservoir is the Mozdooran formation (limestone) and its cap rock is a red siltstone. The sample interval was 4 ms, with a trace spacing of 25 m along the in-line and x-line directions, 101 in-lines, 71 x-lines and a 1000 ms time interval. Coherency attributes proved to be very effective in defining minor geological discontinuities, even of 4 ms.

Case studies of the impact of particle pollutants on precipitation over the Tehran area (Article 27396)

Investigations in recent decades have shown an unexpected increase in aerosol concentrations in metropolitan and industrialized regions. This has sometimes been accompanied by heat islands and probably by decreases in precipitation due to an inadvertent cloud overseeding process.
Particle pollutants act as small cloud condensation nuclei, producing large populations of small droplets in clouds. The collision-coalescence efficiency of these tiny droplets is reduced, which delays the formation of raindrops. Similarly, in mixed clouds the formation of ice-crystal precipitation is delayed by a reduction in the accretion and aggregation of small crystals.
The relationship between precipitation and particle pollutants in Tehran, as a metropolitan area, has been investigated. Such a study plays an important role for meteorologists and environmental researchers.
The influence of particle pollutants on the precipitation process in various regions of Tehran, including the northeast (Aghdasie), northwest (Geophysics), east (Sorkhe hesar), west (Mehr Abad) and the center (Bazaar), has been studied. These investigations were carried out for two categories of days: desirable days (particle pollutant levels below 100) and undesirable days (particle pollutant levels above 200), in warm and cold seasons over a period of 5 years (1999-2003).
The analysis of isohypse/particle pollutant isograms in undesirable conditions for each precipitation event shows that in both warm and cold seasons the amount of precipitation during the day increases from downtown toward the north of the city, due to the decrease of particle pollutant concentrations.
The average precipitation at the northern stations is higher than at the central, western and eastern stations due to their higher elevations. On undesirable days this average decreases from west to east, while on desirable days it increases. The precipitation trend increases at all stations under desirable conditions in both warm and cold seasons and decreases under undesirable conditions.
A study of all data on desirable and undesirable days in cold and warm seasons shows that under desirable conditions the precipitation trends increase, probably due to inadvertent cloud seeding. Under undesirable conditions, owing to an increase in particle pollutants acting as cloud condensation nuclei (CCN), the precipitation trends decrease, probably due to cloud overseeding.
A majority of stations in the warm season shows that the maximum value of precipitation on desirable and undesirable days is slightly more than its value in the cold season, mainly due to the greater thickness of clouds and more precipitation intensity than in warm the season.Investigations in recent decades have shown an unexpected increase in aerosol concentrations in metropolitan and industrialized regions. It was sometimes followed by heat islands and probably decreases in precipitation due to an inadvertent cloud overseeding process.
Particle pollutants act like small cloud condensation nuclei in which they form large collections of small droplets in cloud. Collision coalescence efficiency of tiny droplets is decreased and the following formation of rain drops elapsed. In a similar way in mixed clouds the formation of ice crystal precipitation due to a reduction in the accretion aggregation the process of small crystals was dilating.
The relationship between precipitation and particle pollutants in Tehran as a metropolitan area has been investigated. This study plays an important role for meteorologists and environmental researchers.
The influence of particle pollutanst on the precipitation process in various regions of Tehran including northeast (Aghdasie), northwest (Geophysics), east (Sorkhe hesar), west (Mehr Abad) and the center (Bazaar) has been studied .These investigations have been carried out in two ways: desirable days (particle pollutants less than 100 ) and undesirable days (particle pollutants more than 200 ) in warm and cold seasons over a period of 5 years (1999-2003).
The analysis of isohypse/particle pollutant isograms in undesirable conditions for each precipitation event shows that in both warm and cold seasons the amount of precipitation during the day increases from downtown toward the north of the city, due to the decrease of particle pollutant concentrations.
The average of precipitation in the northern stations is higher than the central, western and eastern stations due to their higher elevations. This average on undesirable days decreases from West to east and increases on desirable days. The precipitation trend increases for all stations in desirable conditions in warm and cold seasons and decreases in undesirable conditions.
A study of all data for desirable and undesirable days in the cold and warm seasons shows that under desirable conditions the precipitation trends increase, probably owing to inadvertent cloud seeding, whereas under undesirable conditions, with more particle pollutants acting as cloud condensation nuclei (CCN), the precipitation trends decrease, probably owing to cloud overseeding.
A majority of stations show that the maximum precipitation on desirable and undesirable days in the warm season is slightly higher than in the cold season, mainly because of the greater cloud thickness and higher precipitation intensity in the warm season.

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 34, No. 3, 2008-10-22. Estimating variance components of ellipsoidal, orthometric and geoidal. Article 27397, FA, Journal Article.

The Best Quadratic Unbiased Estimation (BQUE) of variance components in the Gauss-Helmert model is used in the combined adjustment of GPS/levelling and geoid data to determine an individual variance component for each of the three height types. Different reasons for obtaining negative variance components are discussed, and a new modified version of the Best Quadratic Unbiased Non-negative Estimator (MBQUNE) was successfully developed and applied. This estimation can be useful for assessing the absolute accuracy level achievable with the GPS/levelling method. A general MATLAB function is presented for the numerical estimation of variance components with different parametric models. The modified BQUNE and the developed software were successfully applied to estimate the variance components in a sample GPS/levelling network in Iran, using 75 outlier-free and well-distributed GPS/levelling points. Three corrective surface models, based on the 4-, 5- and 7-parameter models, were used in the combined adjustment of the GPS/levelling and geoidal heights.
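The corrective-surface step mentioned above can be illustrated with a minimal sketch. This is not the authors' MATLAB code (which additionally estimates variance components by BQUE/MBQUNE); it fits the classical 4-parameter corrective surface to GPS/levelling misclosures by ordinary least squares, with synthetic coordinates roughly spanning Iran as an assumed test case.

```python
# Sketch of fitting the classical 4-parameter corrective surface
#   l = a0 + a1*cos(phi)*cos(lam) + a2*cos(phi)*sin(lam) + a3*sin(phi)
# to GPS/levelling misclosures l = h - H - N by least squares.
# Illustrates the parametric-model step only; the paper's variance-component
# estimation is a separate, more involved step.
import numpy as np

def fit_corrective_surface(phi, lam, h, H, N):
    """phi, lam in radians; h, H, N in metres. Returns (params, residuals)."""
    l = h - H - N                      # misclosure vector
    A = np.column_stack([
        np.ones_like(phi),
        np.cos(phi) * np.cos(lam),
        np.cos(phi) * np.sin(lam),
        np.sin(phi),
    ])
    params, *_ = np.linalg.lstsq(A, l, rcond=None)
    residuals = l - A @ params
    return params, residuals

# Synthetic check: misclosures generated from known parameters are recovered.
rng = np.random.default_rng(0)
phi = np.deg2rad(rng.uniform(25, 40, 75))   # 75 points, roughly Iran's latitudes
lam = np.deg2rad(rng.uniform(44, 63, 75))   # and longitudes (assumed ranges)
true = np.array([0.5, -0.2, 0.3, 0.1])
l_true = (true[0] + true[1] * np.cos(phi) * np.cos(lam)
          + true[2] * np.cos(phi) * np.sin(lam) + true[3] * np.sin(phi))
h, H, N = l_true, np.zeros(75), np.zeros(75)
params, res = fit_corrective_surface(phi, lam, h, H, N)
print(np.round(params, 3))   # close to [0.5, -0.2, 0.3, 0.1]
```

The 5- and 7-parameter models used in the paper extend this design matrix with further trigonometric terms; the least-squares machinery is unchanged.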
Using the 7-parameter model, the standard deviations of the geoidal, geodetic and orthometric heights in Iran were estimated to be about 27, 39 and 35 cm, respectively.

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 34, No. 3, 2008-10-22. Impact of topographic and atmospheric masses over Iran on validation and inversion of GOCE gradiometric data. Article 27398, FA, Journal Article.

The dedicated satellite mission GOCE will sense various small mass variations along its path around the Earth.
Here we study the effects of the Earth's topography and atmosphere on GOCE data. The effects depend on the magnitude of the topographic height and therefore vary from region to region. Since the effects of the atmosphere and topography must be removed from the total gravity anomaly prior to geoid determination, they should also be removed to simplify the downward continuation of the GOCE data to sea level.
The main goal of this article is to investigate the direct topographic and atmospheric effects in a rough region like Iran. Maps of the direct effects and their statistics are presented and discussed. Numerical results show that the maximum direct topographic and atmospheric effects on the GOCE data can reach 2.64 E and 5.53 mE, respectively, when the satellite flies over Iran. The indirect effects of the atmospheric and topographic masses are also formulated and presented.
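The magnitudes quoted in the GOCE abstract (maxima of a few E) can be put in context with a back-of-the-envelope sketch. The numbers here are my own illustrative assumptions, not values from the paper: an isolated mountain block is treated as a point mass, and its vertical gravitational gradient is evaluated at a GOCE-like altitude in Eötvös units (1 E = 1e-9 s^-2).

```python
# Order-of-magnitude sketch (illustrative assumptions, not from the paper):
# treat an isolated mountain block as a point mass and evaluate the vertical
# gravitational gradient Tzz = 2*G*M/r**3 on its axis at GOCE-like altitude.
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
rho = 2670.0                   # conventional crustal density, kg/m^3
volume = 100e3 * 100e3 * 2e3   # a 100 km x 100 km block, 2 km high
M = rho * volume               # mass of the block, kg
r = 250e3                      # GOCE-like altitude above the mass, m

Tzz = 2 * G * M / r**3         # s^-2, point-mass approximation on the axis
Tzz_eotvos = Tzz / 1e-9        # convert to Eotvos units (1 E = 1e-9 s^-2)
print(f"Tzz ~ {Tzz_eotvos:.2f} E")   # prints Tzz ~ 0.46 E
```

A single 2 km block already contributes a few tenths of an Eötvös at satellite height, so a maximum direct topographic effect of 2.64 E over Iran's extended high terrain is plausible in order of magnitude.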