Hydrocarbon dewpoint (HDP)


athomas236 (Mechanical), Jul 1, 2002
Does anyone know a method that can be used to calculate the HDP knowing the composition of a natural gas?

Thanks

athomas236
 
A computer does not substitute for judgment any more than a pencil substitutes for literacy. But writing without a pencil is no particular advantage.
-Robert S. McNamara
 
UmeshMathur:

I would like to add to the history recounted in your last posting. In the post-World War II era of the late 1940s and well into the 1950s, there was a veritable explosion of construction in the petroleum refining industry ... grassroots refineries, refinery expansions, and the addition of processes developed during the war and shortly thereafter, such as naphtha hydrotreaters, cat reformers, FCC units, alkylation units, etc.

All of the major refinery engineering and construction companies in the U.S. (Kellogg, Braun, Lummus, Fluor, Foster Wheeler, etc.) used the Kellogg charts and the DePriester nomograms or similar data to design literally many hundreds of distillation units in refineries and natural gas processing plants.

With perhaps a few exceptions, all of those distillation units performed as expected. To call that "pure chance" is stretching it a bit, don't you think? I agree with you that good simulators are probably much more accurate ... but we can't deny how well those Kellogg charts and DePriester nomograms served us.

I would also point out (as I have done before) that, during the 1940s and well into the 1950s, the only calculation tools we had were slide rules and some rudimentary electrical adding machines.

Milton Beychok
(Visit me at www.air-dispersion.com)

 
Thanks for the history of these nomographs. Judging from all the surrounding refineries, they did seem to work sufficiently well in their day.

I wish to add that the concept of an EOS for mixtures is not nearly as intuitive to most people as the activity-coefficient approaches, although an EOS is generally preferred where applicable (note that an EOS cannot model highly non-ideal behavior). The trick in using the various EOS for multicomponent mixtures, from PR and RKS (cubic EOS) to BWR (from the virial expansion) to numeric tables (i.e., steam tables are an EOS for which an analytical expression is too complicated to be useful), is choosing the mixing rules. In Aspen Plus, for example, there is a basic Redlich-Kwong-Soave equation, but a host of different mixing rules that can go with it, which are needed to calculate the required mixture parameters. The BWR parameters as given in the extensive property databank of "The Properties of Gases & Liquids" by Reid et al. are not in themselves useful for mixtures; it is mixing rules like Lee-Starling that add this functionality to the BWR equation.
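To make the mixing-rule idea concrete, here is a minimal Python sketch (not from this thread) of the classic van der Waals one-fluid rules often paired with a cubic EOS such as RKS; the methane/n-pentane numbers and the k_ij value are illustrative assumptions only, not recommended data:

```python
import math

def srk_pure_a_b(Tc, Pc, omega, T, R=8.314):
    """Pure-component SRK a(T) and b from critical constants (standard SRK forms)."""
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc)))**2
    a = 0.42748 * R**2 * Tc**2 / Pc * alpha
    b = 0.08664 * R * Tc / Pc
    return a, b

def vdw_one_fluid_mix(x, a, b, kij):
    """van der Waals one-fluid mixing rules:
       a_mix = sum_i sum_j x_i x_j sqrt(a_i a_j)(1 - k_ij),  b_mix = sum_i x_i b_i."""
    n = len(x)
    a_mix = sum(x[i] * x[j] * math.sqrt(a[i] * a[j]) * (1.0 - kij[i][j])
                for i in range(n) for j in range(n))
    b_mix = sum(x[i] * b[i] for i in range(n))
    return a_mix, b_mix

# Illustrative only: 80/20 methane/n-pentane at 300 K with a made-up k_ij of 0.02.
Tcs, Pcs, omegas = [190.6, 469.7], [45.99e5, 33.70e5], [0.012, 0.252]
T, x = 300.0, [0.8, 0.2]
ab = [srk_pure_a_b(Tc, Pc, w, T) for Tc, Pc, w in zip(Tcs, Pcs, omegas)]
a_mix, b_mix = vdw_one_fluid_mix(x, [p[0] for p in ab], [p[1] for p in ab],
                                 [[0.0, 0.02], [0.02, 0.0]])
print(a_mix, b_mix)
```

Note that published k_ij values are tied to the specific mixing rule used to regress them, which is exactly the caveat about binary interaction parameters raised later in this thread.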

This is all logically developed in theory in a common text, "Introduction to Chemical Engineering Thermodynamics" by Smith and Van Ness (3rd edition for those of us graduating in the '80s), by following through chapters 3 (volumetric properties), 6 (thermodynamic properties of single components), 7 (thermodynamic properties of mixtures), and 8 (phase equilibria). It is my opinion that "The Properties of Gases and Liquids" is a good text for how to generate a complete set of property parameters from basic data (like structure, boiling point, etc.), but that "Introduction to Chemical Engineering Thermodynamics" is the better text for understanding the underlying theory of thermodynamics (like departure functions, Gibbs free energy, etc.); at least that has been my experience.

best wishes as always,
sshep

P.S. After studying the BWR equation, I often thought of developing my own highly accurate EOS - the SShep Equation. I know that it will be very accurate because I will give it at least 100 adjustable parameters. Unfortunately, I won't give users these parameters (my own sadistic chemical engineering 'joke'). I will leave it to the users to empirically determine those parameters from accurate experimental data (100 parameters = at least 100 data points). At least this is how some of the more "accurate" recently published EOS characterizations have seemed to me.
 
mbeychok and sshep:

In general, I agree with the substance of your comments. Regarding the design methods of the past (a favorite subject of mine, since I did much process design in the 1960s), there is no question that those simpler methods and "shortcut" calculations were the only options with slide rules. When I first used the HP-35 electronic calculator in the early 1970s (studying with Professor Manning at Tulsa), I thought I had died and gone to heaven.

In the 1940s and 1950s, I believe that for most columns in petroleum refineries, many shortcut methods were in vogue. Hardly anyone had heard of using the more rigorous tray-by-tray algorithms based on pseudocomponents, in spite of the publication of, e.g., the Lewis-Matheson and Thiele-Geddes algorithms. For light hydrocarbon columns, many designs were based on the Fenske-Underwood-Gilliland shortcut method along with Drickamer & Bradford or O’Connell’s overall column efficiency correlations. Here, one needed K-values at the top, feed, and bottom of the column only. This situation began to change in the 1960s as the universities led the way by devising better algorithms and adding superior thermodynamic options (Chao-Seader, etc.). You might recall Rudy Motard’s CHESS and Canfield’s ChemShare programs.
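As an aside for readers who have not seen it, the Fenske step of that shortcut method is a one-line calculation; the following Python sketch uses made-up key-component splits and an assumed average relative volatility, purely for illustration:

```python
import math

def fenske_min_stages(xD_lk, xD_hk, xB_lk, xB_hk, alpha_avg):
    """Fenske equation: minimum theoretical stages at total reflux for a
       light-key/heavy-key split, given an average relative volatility."""
    return math.log((xD_lk / xD_hk) * (xB_hk / xB_lk)) / math.log(alpha_avg)

# Illustrative numbers only: 95/5 key split overhead, 5/95 in the bottoms,
# average relative volatility of 2.5.
print(fenske_min_stages(0.95, 0.05, 0.05, 0.95, 2.5))  # about 6.4 stages
```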

However, in the 1960s and even the mid-1970s, I do recall that individual engineers never had the wherewithal to do tray-by-tray calculations, as they lacked access to computers. Major operating and engineering companies used highly proprietary, home-grown process simulators running on computers that were very slow by modern standards, often with approximate thermodynamics (e.g., tabular K-values dependent only on T and P). The trick, of course, was developing standardized calculation methods backed up with consistent “tray efficiencies”, heat transfer coefficients, etc., that were based on field calibration.

My main point was not to decry the efforts of the past but rather to reinforce Dick Russell’s original contention that hydrocarbon dew point calculations are extremely sensitive to the feed composition as well as the source for K-values.

With respect to sshep’s discussion of EOS vs. activity coefficients, I would like to add that the latter method is generally applied when the liquid phase is extremely non-ideal in the sense of Raoult’s law (infinite-dilution activity coefficients very far from unity) and all components are well below their critical temperatures. For the hydrocarbon industries, where many components in a mixture are well above their critical temperatures - and hence cannot be handled by any method that estimates liquid fugacities from vapor pressures - the fugacity coefficient approach derived from an EOS is the only logical option. Use of tabular or graphical methods in such systems is not recommended practice today by any means, as there are many instances where one could court disaster by doing so. Unfortunately, too many younger engineers have no idea of what we’re talking about here.
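To illustrate the fugacity-coefficient route in its simplest (pure-component) form, here is a hedged Python sketch of the standard SRK expression ln(phi) = Z - 1 - ln(Z - B) - (A/B) ln(1 + B/Z); extending it to mixtures requires the mixing rules discussed above, and the methane state point below is illustrative only:

```python
import math
import numpy as np

def srk_phi_vapor(T, P, Tc, Pc, omega, R=8.314):
    """Pure-component vapor fugacity coefficient from SRK:
       solve Z^3 - Z^2 + (A - B - B^2) Z - A B = 0, take the largest real root,
       then ln(phi) = Z - 1 - ln(Z - B) - (A/B) ln(1 + B/Z)."""
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc)))**2
    a = 0.42748 * R**2 * Tc**2 / Pc * alpha
    b = 0.08664 * R * Tc / Pc
    A, B = a * P / (R * T)**2, b * P / (R * T)
    roots = np.roots([1.0, -1.0, A - B - B**2, -A * B])
    Z = max(r.real for r in roots if abs(r.imag) < 1e-9)  # vapor root
    return math.exp(Z - 1.0 - math.log(Z - B) - (A / B) * math.log(1.0 + B / Z))

# Illustrative: methane at 300 K and 50 bar (Tc = 190.6 K, Pc = 45.99 bar).
print(srk_phi_vapor(300.0, 50e5, 190.6, 45.99e5, 0.012))  # roughly 0.92
```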

Also, I do agree with sshep’s point about the importance of EOS mixing rules (and the implied use of adjustable binary interaction parameters). This is in itself a huge area for research, see e.g., Prausnitz, Lichtenthaler, and Gomes de Azevedo: “Molecular Thermodynamics of Fluid Phase Equilibria” (3rd edition, Prentice-Hall, 1999) and Orbey and Sandler: “Modeling Vapor-Liquid Equilibria” (Cambridge Univ. Press, 1998). One must be extremely careful to choose the right mixing rule since published binary interaction parameters are worthless if you change even slightly the mixing rules on which they are based.

In retrospect, we may have strayed a bit from the original dew point issue, but it seems a logical departure to me at least. I would commend all who have contributed to this thread, as I think there is much knowledge and experience “distilled” here.
 
Gentlemen,

I spent a couple of hours Friday night reading chapters 15 to 25 of Volume 1 of Edmister's book "Applied Hydrocarbon Thermodynamics" which was rather a lot to absorb in one sitting.

It seems to me that if we had an extensive database of K factors, then we could just interpolate to get the required values of K. (I have omitted reference to mixing rules only because I have not yet read anything about them.)

However, since we do not have this all-encompassing database, we are left with the necessity to generalise the available data so it can be used in circumstances not covered by the available test data.

Clearly there are simple methods that could be used to make such generalisations, and more complex ones. The simple methods are easier to understand, but they carry a higher risk of not being applicable in all cases and are therefore less accurate. It is possible that the greater accuracy of the more complex methods is an illusion.

If we say, for discussion purposes, that the test data that underpins all generalisations has an error of +/-5%, and that the simple methods have an error of +/-5% while the complex methods have an error of +/-1%, then, very simply, the simple methods have a combined error of +/-10% and the complex methods +/-6%. The point being that no matter what calculation methods are used, the results cannot be more accurate than the original test data.
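One small refinement to that arithmetic: straight addition of the two errors is the worst case; if the data error and the method error are independent, they combine as a root-sum-square, which narrows the gap but does not change the conclusion. A trivial Python sketch:

```python
import math

# Worst-case (linear) vs. independent (root-sum-square) combination of a
# +/-5% data error with a +/-5% (simple) or +/-1% (complex) method error.
for method_err in (5.0, 1.0):
    linear = 5.0 + method_err
    rss = math.sqrt(5.0**2 + method_err**2)
    print(f"method +/-{method_err}%: linear +/-{linear:.1f}%, RSS +/-{rss:.1f}%")
```

Either way, the combined error is bounded below by the +/-5% data error, which is the point being made.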

As stated by mbeychok, we know the simple methods worked, but the problem was the time taken to make manual calculations. This problem was solved by computers, but computers also gave the opportunity to use more complex calculation methods.

So we have moved from simple calculation methods that were easy to understand, time-consuming to carry out by hand, but suitable for say 90% of the circumstances in which they were used; through the same simple methods run on computers, no longer time-consuming, still easy to understand, and still suitable for 90% of circumstances; to complex methods that are not time-consuming and are suitable for say 95% of circumstances, but are not always easy for the people using them to understand. In part this is because the calculations are locked inside proprietary computer code.

What I want to do, as a consultant, is to calculate HCDP using a computer and simple methods that I understand and that are applicable in 90% of cases. This will leave the contractor to make his own calculations in the event that any project I study gets constructed.
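As one hedged example of the kind of simple, transparent method being described (a sketch, not a recommendation), the Python snippet below solves for a dew point temperature by bisection on the condition sum(y_i/K_i) = 1, using Wilson's well-known K-value approximation K_i = (Pc_i/P) exp[5.373 (1 + omega_i)(1 - Tc_i/T)]; the gas composition is made up, and Wilson's correlation is a low-pressure idealisation that will not capture retrograde behaviour:

```python
import math

def wilson_K(T, P, Tc, Pc, omega):
    """Wilson's approximate K-value (low-pressure, ideal-solution assumption)."""
    return (Pc / P) * math.exp(5.373 * (1.0 + omega) * (1.0 - Tc / T))

def dew_point_T(y, Tc, Pc, omega, P, T_lo=100.0, T_hi=500.0):
    """Bisect on temperature until sum(y_i / K_i) = 1 (the dew point condition)."""
    def f(T):
        return sum(yi / wilson_K(T, P, Tci, Pci, wi)
                   for yi, Tci, Pci, wi in zip(y, Tc, Pc, omega)) - 1.0
    for _ in range(100):
        T_mid = 0.5 * (T_lo + T_hi)
        if f(T_lo) * f(T_mid) <= 0.0:
            T_hi = T_mid
        else:
            T_lo = T_mid
    return 0.5 * (T_lo + T_hi)

# Illustrative lean gas: methane/ethane/propane/n-pentane at 40 bar.
y = [0.90, 0.05, 0.03, 0.02]
Tc = [190.6, 305.3, 369.8, 469.7]
Pc = [45.99e5, 48.72e5, 42.48e5, 33.70e5]
omega = [0.012, 0.100, 0.152, 0.252]
print(dew_point_T(y, Tc, Pc, omega, P=40e5))  # dew point temperature in K
```

Perturbing the heaviest fraction in this toy example and re-running it also illustrates the composition sensitivity discussed elsewhere in the thread.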

I know that all the above is a gross simplification, but I think it represents what I am trying to achieve; whether it is achievable or not remains to be seen.

Best regards,

athomas236
 
athomas236:

Now we are in the area of opinions, so please bear with me for stating mine forthrightly: If you are a professional engineer and consultant, it behooves you to use the best methods available. If you don't understand them (i.e., they are outside your area of expertise), hire a professional chemical engineer to check/certify them if there is a possibility that ANY third party may rely on your computations in any way. I wouldn't advise use of any computations merely because, right or wrong, they are easy to understand.

Also, there are reliable flash calculation programs available for a nominal cost (<$500) that use the proper thermodynamics. This should be an expense that is insignificant compared to the value of most consulting assignments. Of course, even using the EOS approach requires some level of maturity to avoid improper EOS choices or inadvertent use of the wrong mixing rules. However, most gas contracts specify, in some detail, the options that must be used.

My quite recent experience with HCDP calculations is that use of less reliable thermo options can lead to errors of 10 deg F or more which, in my opinion, is utterly unacceptable for design work. Usually, gas plants go to great lengths to ensure that the HCDP they achieve is well within the specification (because of some uncertainty in gas composition measurement) as the penalties for sustained violations can be intolerable.

To follow your example, a 5% error in the composition of the heaviest ends can easily distort the calculated HCDP by 30 deg F or more. Therefore, for contractual enforcement, automated daily composite samples are usually taken and then are subjected to very accurate off-line gas chromatographic composition analysis, especially focusing on the breakdown of the tail end (C6+ fraction). I have seen instances where both the supplier and the receiver have their own sampling and analysis stations to avoid errors and ensure proper data reconciliation.

My main concern is: In these litigious times, can one afford not to use the best available methods? Believe me, the fines for violations can be intimidating even for major operating companies as we are dealing with a serious safety issue here: if heavy components separate out as line pressure is decreased (retrograde condensation, a common phenomenon in gas pipelines), a gas user could be faced with a major slug of liquid entering his gas burners. Depending on the distribution piping network, these slugs can accumulate undetected over a period of time and then suddenly disgorge themselves. If the boiler or furnace doesn’t have a disengagement vessel at the gas inlet, imagine the horrible consequences.

So, I am quite paranoid about such matters: being torn apart by an expert witness in a legal proceeding would be my notion of chemical engineering hell.

Again, I commend you for initiating a very pertinent discussion.
 
