technology are constantly being pushed by designers and manufacturers. Their goal is to design an engine that is lighter than its predecessors and competitors, yet produces more thrust. To accomplish this, an engine designer almost always has to bet that an emerging technology or two will work out as anticipated. Occasionally, this means taking some pretty big risks, risks that usually turn into problems that get widely reported in the media. For example, engine-development problems in the mid-1950s almost wrecked major aircraft companies when airframes like the McDonnell F3H Demon and Vought F7U Cutlass had to wait months, or even years, for their engines to be developed. So just how far has jet engine performance come in the past forty years? Let's take a quick look.

In the mid-1950s, the U.S. Air Force began operating the North American F-100 Super Sabre, nicknamed the 'Hun.' Powered by a single Pratt & Whitney J57-P-7 engine, an axial-flow turbojet generating up to 16,000 lb/7,257 kg of thrust, and aided by the newly developed afterburner, it was the first fighter capable of supersonic speeds in level flight, achieving a top speed of Mach 1.25. With confidence growing in the axial-flow turbojet engine, new fighter designs quickly showed up, and in 1958 the first McDonnell F-4 Phantom II flew. In the world of combat aircraft, the F-4 is legendary. During the Vietnam War it proved to be a formidable fighter-bomber, and it still serves in some air forces. Powered by two giant General Electric J79-GE-15 turbojet engines, each generating up to 17,900 lb/8,119 kg of thrust, the Phantom, or the 'Rhino' as it was affectionately called, could reach speeds up to Mach 2.2 at high altitudes.

To illustrate the axial-flow turbojet, consider the J79 engine and its five major sections:

A schematic cutaway of a typical turbojet engine, such as the Pratt & Whitney J57. Jack Ryan Enterprises, Ltd., by Laura Alpher

At the front of the J79 is the compressor section. Here, air is sucked into the engine and compacted in a series of seventeen axial compressor stages. Each stage is like a pinwheel with dozens of small compressor blades (they look like small curved fins) that push air through the engine, compressing it. The compressed air then passes into the combustor section, where it mixes with fuel and ignites. Combustion produces a mass of hot, high-pressure gas that is packed with energy. The hot gas escapes through a nozzle onto the three turbine stages of the engine's hot section (so called because this is where you find the highest temperatures). The stubby, fan-like turbine blades are pushed by the hot gas as it strikes them, causing the turbine wheel to spin at very high speed and with great power. The turbine wheel is connected by a shaft to the compressor stages, spinning them so they compact the airflow even further. The hot gas then escapes out the back of the turbojet, and this flow pushes the aircraft through the air. When the afterburner (or augmentor) is used, additional fuel is sprayed directly into the exhaust gases in a final combustion chamber, or 'burner can' as it is known. This provides a 50% increase in the final thrust of the engine. An afterburner is required for a turbojet to reach supersonic speeds. Unfortunately, using an afterburner gobbles fuel at roughly three to four times the rate of non-afterburning 'dry' thrust settings. For example, using full afterburner in the F-4 Phantom II would drain its tanks dry in just under eight minutes. This thirst for fuel was the next problem the engine designers had to overcome.
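The afterburner's appetite can be put in rough numbers. The sketch below compares dry and afterburning fuel flow using the 17,900-lb afterburning thrust figure from the text; the dry thrust, the specific fuel consumption values, and the internal fuel load are illustrative assumptions, not exact J79 or F-4 specifications.

```python
# Rough sketch of afterburner fuel economics. All figures marked
# "assumed" are illustrative, not exact J79 or F-4 specifications.
AB_THRUST_LBF = 17_900      # per engine, full afterburner (from the text)
DRY_THRUST_LBF = 11_900     # per engine, military (dry) power -- assumed
AB_SFC = 2.0                # lb fuel / (lbf thrust * hr), afterburner -- assumed
DRY_SFC = 0.85              # lb fuel / (lbf thrust * hr), dry -- assumed
INTERNAL_FUEL_LB = 12_000   # internal fuel load -- assumed
ENGINES = 2

def fuel_flow(thrust_lbf, sfc):
    """Fuel flow in lb/hr for one engine at the given thrust and SFC."""
    return thrust_lbf * sfc

ab_flow = ENGINES * fuel_flow(AB_THRUST_LBF, AB_SFC)
dry_flow = ENGINES * fuel_flow(DRY_THRUST_LBF, DRY_SFC)

print(f"fuel burn ratio, afterburner vs dry: {ab_flow / dry_flow:.1f}x")
print(f"minutes of full afterburner on internal fuel: "
      f"{INTERNAL_FUEL_LB / ab_flow * 60:.0f}")
```

With these assumed numbers the afterburning burn rate comes out around three and a half times the dry rate, in line with the 'three to four times' figure above, and the tanks empty in roughly ten minutes.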

The axial flow turbojet became the dominant aircraft propulsion plant in the late 1950s because it could sustain supersonic flight for as long as the aircraft's fuel supply held up. The term 'axial' means along a straight line, which is how the air flows in these engines. Up until that time, centrifugal (circular) flow engines were the military engines of choice — they were actually more powerful than early axial flow turbojets. But centrifugal flow engines could not support supersonic speeds.

Instead of a multiple-stage compressor, centrifugal flow engines used a single-stage, pump-like impeller to compress the incoming air flow. This drastically limited the pressure (or compression) ratio of the early jet engines, and therefore the maximum amount of thrust they could produce. The pressure ratio is defined as the ratio of the air pressure leaving the last compressor stage of a jet engine to the air pressure at the inlet of the compressor section. Because the pressure ratio is the key performance characteristic of any jet engine, the axial flow designs had more growth potential than other designs of the period. The major reasons why axial flow engines replaced centrifugal flow designs were that they could achieve higher pressure ratios and could also accommodate an afterburner. Centrifugal flow engines simply could not move enough air through the engine to keep an afterburner lit. By the mid-1960s, it became apparent that turbojet engines had reached their practical limitations, especially at subsonic speeds. If combat aircraft were going to carry heavier payloads over greater ranges, then a new engine with greater takeoff thrust and better fuel economy would have to be designed. The engine that finally emerged from the design labs in the 1960s was called a high-bypass turbofan.
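As a worked example of that definition, the pressure ratio is just one pressure divided by another. The inlet value below is standard sea-level pressure; the compressor discharge pressure is an assumed, merely representative figure for a turbojet of this era, not a measured J79 value.

```python
# Pressure ratio = compressor exit pressure / compressor inlet pressure.
inlet_psi = 14.7   # standard sea-level atmospheric pressure
exit_psi = 182.0   # compressor discharge pressure -- assumed, representative

pressure_ratio = exit_psi / inlet_psi
print(f"overall pressure ratio: {pressure_ratio:.1f}:1")
```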

At first glance, a turbofan doesn't look all that much different from a turbojet. There are, in fact, many differences, the most obvious being the presence of the fan section and the bypass duct. The fan section is a large, low-pressure compressor which pushes part of the air flow into the main compressor. The rest of the air goes down a separate channel called the bypass duct. The ratio between the amount of air pushed down the bypass duct and the amount that goes into the compressor is called the bypass ratio. For high-bypass turbofans, about 40% to 60% of the air is diverted down the bypass duct. But in some designs, the bypass ratio can go as high as 97%.
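Note that a figure like '40% to 60%' describes the share of the total airflow that is diverted, while the bypass ratio proper compares bypass air to core air. A small sketch (the helper function is hypothetical, for illustration only) converts one to the other:

```python
# Relation between the bypass fraction (share of total airflow sent down
# the bypass duct) and the bypass ratio (bypass airflow / core airflow).
def bypass_ratio(bypass_fraction):
    """bypass_fraction: portion of total airflow diverted down the duct."""
    return bypass_fraction / (1.0 - bypass_fraction)

for frac in (0.40, 0.60, 0.97):
    print(f"{frac:.0%} bypassed -> bypass ratio {bypass_ratio(frac):.1f}:1")
```

Diverting 97% of the air, for instance, corresponds to a bypass ratio of more than 32:1, which shows how quickly the ratio climbs as the fraction approaches 100%.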

A schematic cutaway of a typical turbofan engine, such as the Pratt & Whitney F100. Jack Ryan Enterprises, Ltd., by Laura Alpher

I know this doesn't appear to make a whole lot of sense. Don't you need more air, not less, to make a jet engine more powerful? In the case of turbofans, not so. More air is definitely not better. To repeat, pressure ratio is the key performance characteristic of a jet engine. Therefore the designers of the first turbofans put a lot of effort into increasing this pressure ratio. The result was the bypass concept.

If an engine has to compress a lot of air, then the pressure increase is distributed, or spread out, over a large volume. By reducing the amount of air flowing into the compressor, more work can be done on a smaller volume, which means a greater pressure increase. This is good. Then the designers increased the rotational speed of the compressor. With the compressor stages spinning around faster, more work is done on the air, and this again means a greater pressure increase. This is better. The bypass duct was relatively easy to incorporate into an engine design, but unfortunately, a faster spinning compressor proved to be far more difficult.
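The payoff of multiple stages is that their modest individual pressure rises multiply together rather than add. Assuming a per-stage ratio (the 1.16 figure below is an illustrative guess, not a published J79 number) across the seventeen stages mentioned earlier:

```python
# Axial compressor stages compound: overall ratio = per-stage ratio ** stages.
per_stage = 1.16   # pressure rise per stage -- assumed, illustrative
stages = 17        # the J79's stage count, from the text

overall = per_stage ** stages
print(f"overall pressure ratio from {stages} stages: {overall:.1f}:1")
```

Spinning the compressor faster raises the per-stage figure, and because it is an exponent's base, even a small per-stage gain compounds into a large overall improvement.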

There were three major problems:

1. Getting more work out of the turbine so that it could drive the compressor at higher speeds.
2. Preventing the compressor blades from stalling when rotated at the higher speeds.
3. Reducing the weight of the compressor so that the centrifugal stresses would not exceed the mechanical strength of the alloys used in the compressor blades.

Each problem is a formidable technological challenge, but mastering all three took some serious engineering ingenuity.

Getting more work out of a turbine is basically a metallurgy problem: To produce the hotter gases needed to spin the turbine wheels faster, the engine must run hotter. Next, if the turbine's weight can be reduced, more useful work can be extracted from the hot gases. Both require a stronger, more heat-resistant metal alloy. But developing such an alloy is a difficult quest. In working with metals, you don't find high strength and high heat resistance in the same material. The solution was found not only in the particular alloy chosen for the turbine blades, but also in the manufacturing technique.

Traditionally, turbine blades have been constructed from nickel-based alloys. These are very resistant to high temperatures and have great mechanical strength. Unfortunately, even the best nickel-based alloys melt around 2,100° to 2,200°F/1,148° to 1,204°C. For turbojets like the J79, in which the combustion section exit temperature is only about 1,800°F/982°C, this is good enough; the temperature of the first stage turbine blades can be kept well below their melting point. But high-bypass turbofans have combustion exit temperatures in the neighborhood of 2,500°F/1,371°C. Such heat turns the best nickel-based turbine blade into slag in a few seconds. Even before the blades reached their melting point, they would become pliable, like Silly Putty. Stretched by centrifugal forces, they would quickly come into contact with the stationary turbine case. Bad news.
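The temperature figures in this passage can be lined up directly. Using only the numbers from the text, a quick comparison shows why the turbojet's blades survive while the turbofan's would not, absent better alloys or cooling:

```python
# Compare combustor exit temperatures against the lower end of the
# melting range of nickel-based blade alloys (figures from the text, deg F).
NICKEL_MELT_F = 2_100       # lower end of the nickel-alloy melting range
J79_EXIT_F = 1_800          # turbojet combustor exit temperature
TURBOFAN_EXIT_F = 2_500     # high-bypass turbofan combustor exit temperature

for name, exit_t in (("J79 turbojet", J79_EXIT_F),
                     ("high-bypass turbofan", TURBOFAN_EXIT_F)):
    margin = NICKEL_MELT_F - exit_t
    status = "blades survive" if margin > 0 else "blades melt without help"
    print(f"{name}: margin {margin:+d} F -> {status}")
```

The turbojet enjoys a 300°F margin below the melting range; the turbofan overshoots it by 400°F, which is the gap that alloy and manufacturing advances had to close.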

Nickel-based alloys remain the best material for turbine blades, so improvements in strength and heat resistance depend on the blade manufacturing process. The manufacturing technology that had the greatest effect on turbine blade performance was single-crystal casting.

Single-crystal casting is a process in which a molten turbine blade is carefully cooled so that the metallic
