Scott’s T&E Blog

Top 5 Reasons F-35 JSF’s Test and Evaluation Campaign is Grossly Over Budget and Behind Schedule

May 19, 2016 in Uncategorized

1. Three totally different variants of airframes with three different methods of flight test.

a. Because each of the armed forces imposes wildly varying configuration requirements, the program needs three very different test and evaluation campaigns. Each service also has its own requirements within its flight test organization. That divergence in test and evaluation activity creates an information vacuum between the test authorities, so problems common to all three variants were often rediscovered at different phases of testing.
i. Navy: Carrier capable – requires extensive build-up testing on the ground and on simulated carrier decks, along with a very large test pilot training effort.
ii. Marines: Short takeoff and vertical landing (STOVL) – requires a completely different type of envelope expansion testing along with vastly different mission requirements.
iii. Air Force: Intelligence gathering and electronic warfare (EW) capabilities – requires the integration and test of “off nominal” systems not common to the other variants.

2. Different variants of software

Much like the airframe and hardware, the software varies greatly between jet models. Each variant requires its own CMM and system-level (Sys Level 3) testing, which means Lockheed and its subcontractors must commit more time and resources to complete it. There are two key areas of software, mission systems and flight sciences, so across three variants that makes six distinct software capability and maturity testing efforts. Because the operational software had been bench tested and reworked through many iterations before it was delivered to the aircraft, it caused delays at every level. It’s generally a good idea to break software deliveries up into smaller, more manageable chunks of capability that are tested at the lab and bench level; those smaller chunks can then be integrated at Sys Level 3 testing and delivered to the aircraft as a larger package.
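
To make the “smaller chunks” idea concrete, here is a minimal Python sketch of that delivery flow; the names (CapabilityChunk, bench_test, integrate) and the example chunks are purely illustrative assumptions, not part of any real F-35 toolchain.

    from dataclasses import dataclass

    @dataclass
    class CapabilityChunk:
        name: str
        area: str            # "mission systems" or "flight sciences"
        bench_passed: bool = False

    def bench_test(chunk: CapabilityChunk) -> CapabilityChunk:
        """Stand-in for lab/bench testing of a single capability chunk."""
        chunk.bench_passed = True   # assume the bench run succeeded in this sketch
        return chunk

    def integrate(chunks: list[CapabilityChunk]) -> list[CapabilityChunk]:
        """System-level integration: only bench-proven chunks go into the aircraft package."""
        return [c for c in chunks if c.bench_passed]

    deliveries = [
        CapabilityChunk("radar mode update", "mission systems"),
        CapabilityChunk("flight control gain set", "flight sciences"),
    ]
    aircraft_package = integrate([bench_test(c) for c in deliveries])
    print([c.name for c in aircraft_package])

The point of the sketch is only the ordering: small pieces prove themselves at the bench before the larger package ever reaches the jet.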

3. The problems the Test and Evaluation activities found were grossly exaggerated.

It’s the test team’s job to find problems; that’s what we do. The F-35 is the joint force’s biggest development venture in history. It is going to get a lot of press, and there will always be a large number of bureaucrats who misunderstand, misconstrue, or simply exaggerate even the smallest issue. Top members of Congress and the Pentagon have every right to know what is occurring with the F-35’s test program. The test team and testing community need to take care when reviewing test reports and associated issues to ensure they are clear, concise, and provide a path to root cause and a solution. It’s hard to stop the “telephone game” rumor mill, but by ensuring test reports are thorough, don’t place blame on any party, and point toward a solution, the test team can more effectively manage outcomes and perceptions when releasing information.

4. Test metrics and test point count

a. All flight test campaigns are measured on test point count, yet the three services use widely different metrics for test points. For example, the Navy utilizes multiple test plans that fold up into an Umbrella Test Plan (UTP). The UTP lists its own test points but also references the underlying test plans (e.g., Aeromech) that contain test points of their own. In essence this is almost a way of “double” counting points and can paint a falsely positive picture of test burn-down (see the sketch after point 4b).

b. The USAF utilizes an Integrated Test Plan, which contains no test points itself but references a single flight test plan that does. Test points are then applied to the flight test cards when planning a mission. This can lead to disconnects or, worse, capturing a test point without knowing it and subsequently spending extra time and resources to obtain it again.
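
Here is a minimal Python sketch of the double-counting risk from point 4a; the plan names and test point IDs are invented for illustration, and the fix is simply to count each unique point once regardless of how many plans list it.

    # Umbrella plan repeats a point that also lives in a referenced sub-plan.
    umbrella_plan = {"UTP-001", "UTP-002", "AERO-101"}
    sub_plans = {
        "Aeromech": {"AERO-101", "AERO-102"},
        "Propulsion": {"PROP-201"},
    }

    # Naive burn-down metric: count every listing of a point.
    naive_total = len(umbrella_plan) + sum(len(p) for p in sub_plans.values())

    # Deduplicated metric: count each unique test point exactly once.
    unique_points = set(umbrella_plan)
    for points in sub_plans.values():
        unique_points |= points
    true_total = len(unique_points)

    print(naive_total, true_total)   # 6 vs 5 -- the overlap inflates progress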

5. Involving our allies

a. While it’s great to build and field advanced capabilities with our allies, there is always going to be a large disconnect in the technologies the US exports under ITAR restrictions. This adds an extra level of testing that the DoD has to pay for, both to satisfy ITAR restrictions and to meet our allies’ needs.

b. Another issue is the OT&E portion of flight testing. While each country will ultimately test the equipment against its own defense requirements, a large portion of OT&E testing still has to occur stateside. Each country has many different profiles for which it will utilize the aircraft, and everything from monitoring borders to low-level flight to counter-strike requires capability testing by the DoD and the manufacturer. This adds a large increase in scope and requirements that is often largely unforeseen.

Requirements creep, multiple users, and differing mission profiles are all contributing factors to the F-35’s budget and schedule woes. Understand, very early on, what the test campaign is aimed at achieving. Lock down the initial test configurations and profiles and categorize them as a baseline. Doing this early gives better visibility into the unintended cost and schedule issues that loom on the horizon.
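
As a rough illustration of that baseline lock-down, here is a short Python sketch; the profile names and the classify helper are hypothetical, but the idea is that anything outside the frozen baseline is flagged explicitly as scope growth instead of silently joining the campaign.

    # Baseline agreed at the start of the campaign (names are made up).
    BASELINE_PROFILES = frozenset({"carrier approach", "STOVL envelope", "EW integration"})

    def classify(requested_profiles: set[str]) -> dict[str, set[str]]:
        """Split requested test profiles into the agreed baseline vs. scope growth."""
        return {
            "baseline": requested_profiles & BASELINE_PROFILES,
            "scope_growth": requested_profiles - BASELINE_PROFILES,
        }

    print(classify({"carrier approach", "border monitoring", "low-level flight"}))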
