


Updated Tool Helps Navigate Contracting Process (2018-10-22)<div class="ExternalClass7208F092FDD44CA58072E84E3058AD46">You've completed CON90, you're back on the job, and your first contracting assignment lands on your desk. What do you do? Navigating contracting can be daunting, and the updated "<strong><a href="/tools/t/Subway-Map">Contracting Subway Map</a></strong>" should be your first stop.<br> <img alt="Contracting Subway Map" src="/PublishingImages/DAU_DAUmil_News_Contracting%20Subway%20May%20Fig%201_20181022.jpg" style="width:100%;" /><br> <br> Developed as a collaborative effort between online learning specialists and DAU faculty, the Contracting Subway Map provides users with step-by-step visual instructions on how to maneuver the complex laws and policies that govern DoD contracting. The updated tool includes all of the steps in the contracting process with direct links to the applicable FAR parts; innovative concepts in government contracting, such as using other transaction authorities; and an updated user interface.<br> <br> Each stop on the subway map offers additional content, including articles, tips and tools, to enhance your on-the-job decision making and reduce the amount of time spent searching for information.</div>
GAO: DOD Just Beginning to Grapple with Scale of Vulnerabilities (2018-10-18)<div class="ExternalClass80C773B67C104134B26E7C02E4D50285"><h2>What the GAO Found</h2> [<strong><a href="">Related Content - Full GAO Report on Weapon Systems Cybersecurity</a></strong>] <p>In recent cybersecurity tests of major weapon systems the Department of Defense (DoD) is developing, testers playing the role of adversary were able to take control of systems relatively easily and operate largely undetected.</p> <p>DOD's weapons are more computerized and networked than ever before, so it's no surprise that there are more opportunities for attacks. Yet until relatively recently, DOD did not make weapon cybersecurity a priority. Over the past few years, DOD has taken steps toward improvement, like updating policies and increasing testing.</p> <p>The department faces mounting challenges in protecting its weapon systems from increasingly sophisticated cyber threats. This state is due to the computerized nature of weapon systems; DOD's late start in prioritizing weapon systems cybersecurity; and DOD's nascent understanding of how to develop more secure weapon systems. DOD weapon systems are more software dependent and more networked than ever before (see figure below).</p> <p>Embedded Software and Information Technology Systems Are Pervasive in Weapon Systems (Represented via Fictitious Weapon System for Classification Reasons)</p> <p>Automation and connectivity are fundamental enablers of DOD's modern military capabilities. 
However, they make weapon systems more vulnerable to cyber attacks. Although GAO and others have warned of cyber risks for decades, until recently, DOD did not prioritize weapon systems cybersecurity. Finally, DOD is still determining how best to address weapon systems cybersecurity.</p> <p>In operational testing, DOD routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic. Using relatively simple tools and techniques, testers were able to take control of systems and largely operate undetected, due in part to basic issues such as poor password management and unencrypted communications. In addition, vulnerabilities that DOD is aware of likely represent a fraction of total vulnerabilities due to testing limitations. For example, not all programs have been tested and tests do not reflect the full range of threats.</p> <p>DOD has recently taken several steps to improve weapon systems cybersecurity, including issuing and revising policies and guidance to better incorporate cybersecurity considerations. DOD, as directed by Congress, has also begun initiatives to better understand and address cyber vulnerabilities. However, DOD faces barriers that could limit the effectiveness of these steps, such as cybersecurity workforce challenges and difficulties sharing information and lessons about vulnerabilities. To address these challenges and improve the state of weapon systems cybersecurity, it is essential that DOD sustain its momentum in developing and implementing key initiatives. GAO plans to continue evaluating key aspects of DOD's weapon systems cybersecurity efforts.</p> <h2>Why GAO Did This Study</h2> <p>DOD plans to spend about $1.66 trillion to develop its current portfolio of major weapon systems. Potential adversaries have developed advanced cyber-espionage and cyber-attack capabilities that target DOD systems. 
Cybersecurity—the process of protecting information and information systems—can reduce the likelihood that attackers are able to access our systems and limit the damage if they do.</p> <p>GAO was asked to review the state of DOD weapon systems cybersecurity. This report addresses (1) factors that contribute to the current state of DOD weapon systems' cybersecurity, (2) vulnerabilities in weapons that are under development, and (3) steps DOD is taking to develop more cyber resilient weapon systems.</p> <p>To do this work, GAO analyzed weapon systems cybersecurity test reports, policies, and guidance. GAO interviewed officials from key defense organizations with weapon systems cybersecurity responsibilities as well as program officials from a non-generalizable sample of nine major defense acquisition program offices.</p> <h2>What GAO Recommends</h2> <p>GAO is not making any recommendations at this time. GAO will continue to evaluate this issue.</p> <p>For more information, contact Cristina Chaplain, 202-512-4841, or <a href=""></a>.<br> <br> [<strong><a href="">Related Content - Full GAO Report on Weapon Systems Cybersecurity</a></strong>]</p></div>



Outperforming With Doctrine, Not Science (2018-11-01)<div class="ExternalClass1D68AC91BEFC4720B7B88E7033C8D47B">The Cold War paradigm of defense acquisition can no longer keep the United States ahead of its near-peer competitors. During the Cold War, the Department of Defense (DoD) was able to out-science its adversaries, because it was the world’s biggest investor in, and consumer of, advanced science and technology (S&T), and could set the agenda for what commercial industries produced. Today, the DoD’s share in the global S&T market is small and shrinking fast. In the future, the DoD no longer will have exclusive access to these technologies that once gave it the edge over potential adversaries. Instead, the DoD must return to an even older, pre-World War II paradigm: We must out-doctrine our potential adversaries by adopting and adapting commercial S&T for the battlefronts—and do it faster and more efficiently than our competitors. <h3>The Cold War Acquisition Paradigm, Simplified </h3> During the Cold War, the defense research and development paradigm, greatly simplified, was as follows: The DoD forecast what threat would exist in 10, 20, and 30 years, and the research and development (R&D) planning was set up to match the threat and develop the required technologies. The classic example for this is the Second Offset Strategy. The Soviet Union always had more troops and conventional arms than did NATO. The first strategy to offset that advantage—build more nuclear weapons—failed when the Soviets matched our production. So, in 1975 the DoD started on what we now call the Second Offset Strategy, at the time called a Long-Range Research and Development Planning Program run by the former Advanced Research Projects Agency (today the Defense Advanced Research Projects Agency), because we had to out-science them and out-technology them. 
We identified and developed over the next two decades the kinds of technologies that would allow us to outperform the Soviets, including stealth, microprocessors, software, the beginnings of the Internet, and long-range cruise missiles.<br> <br> One example: The U.S. government needed better microprocessors in order to have lighter cruise missiles, ballistic missiles, and other kinds of equipment that had to rely on software, so they could not use regular transistors. In the 1980s, the U.S. Government underwrote a company called SEMATECH, or Semiconductor Manufacturing Technology. It was about $100 million a year (which was not much money even then), with the stated reason that the United States had to compete with Japan. Behind the scenes, the DoD needed to hurry the development of microprocessors that could fit on ballistic missiles and cruise missiles and all these other electronics that were needed in order to achieve this second offset. That seed money got many commercial companies involved in developing microprocessors, of which the DoD was the major consumer. In turn, companies began radically reducing their costs for miniaturization, which in turn reduced the cost to the DoD. At the same time, these technologies were essentially out of reach for the Soviet and Eastern Bloc nations, both because of strict controls by the West, as well as self-imposed communist-bloc quarantines of decadent Western influences such as computers. <br> <br> This paradigm worked: These science and technology investments came to fruition in the 1980s—in 1989, the Berlin Wall collapsed, and within 2 years the Soviet Union collapsed. This paradigm worked because the DoD was able to leverage its own R&D investments to change the face of technology worldwide. In the 1970s, for example, the DoD owned 10 percent of the world’s R&D budget, a substantial amount of leverage. 
<h3>The Pre-World War II Paradigm, Simplified </h3> World War II marked the start of heavy government investment in research and development, especially military. Earlier, the largest part of the U.S. research budget was devoted to agriculture, so very little of the Army and the Navy budget actually went through what today we would call R&D. <br> <br> For something like 180 years of the United States’ existence, we looked at the commercial sector to develop the technologies that would change the way we fought. That was not simply the United States—Britain, France, and Germany were all operating the same way. They all relied very heavily on their commercial sectors to develop these technologies. As these commercial technologies were developed, the militaries would look at how they would be adapted to the military manner of fighting. It was the commercial sector that developed the Maxim machine gun in the 1880s. It was the commercial market that developed the wireless radio a decade later. It was the commercial world that developed the airplane in the 1900s. At each point, the militaries tried to use these new inventions, but it was the nation that could adapt its own doctrine fastest to those new technologies that gained the greatest military advantage. And it was quite a contest. By the way, spoiler alert, in none of those cases did the United States initially lead the pack. However, the United States did eventually learn and outstrip its competitors. <br> <br> One particular example involves the aircraft carrier. The United States, France, Germany and Britain were all looking at the commercially developed airplane as a military weapon, but no one was quite sure how to use it at sea. There was a long period, primarily between World War I and World War II, where the navies did a lot of work to slowly but surely develop the ideas of how this would operate, develop the doctrine, develop the methods of taking off and landing, of how it would be used in warfare. 
The navies did a lot of wargaming and operational fleet exercises. That was the key to adopting a new technology. It was not simply somebody looking at it and thinking, “This is a great idea. Let’s do it.” There usually was a careful process of trying it out in different scenarios, taking lessons from the operational experience or the wargaming experience, folding it back into the technology, and then, eventually, making it part of the fleet. That is a fairly systematic way of thinking about how an organization can change its paradigm—not in one fell swoop but by actually thinking carefully about which inventions, technologies and concepts would change how they fight, try them out, and then go back and revise the doctrine. This process is very similar to modern agile development in software, but of course on a longer scale—not weeks as with software, but rather months and years. <h3>What Has Changed, What We Must Do</h3> In the 1970s, the DoD was the single biggest player in the R&D world—10 percent of the total was a big lever—and it was able to wag the tail (so to speak) of R&D investment globally. The trend today is that the DoD owns less and less of the world’s R&D budget, and the leverage is simply not there—the DoD is now just a few hairs on the tail of the dog. In 2010, it had 5 percent of the world’s total, about $80 billion out of $1.6 trillion. In 2016, it was 3.5 percent of the world R&D total, and it continues falling. So, the DoD’s ability to influence R&D investments, and, for that matter, investments in very specific parts of technological development, is somewhere between limited and almost nonexistent. Today, almost no federal money goes into microprocessor research.<br> <br> The DoD needs to reconceptualize how we approach defense acquisition. 
Instead of taking the doctrine that we imagine we will have in 10 or 20 years and developing the technologies and the engineering to fulfill that doctrine, we should develop new doctrines based on the technologies and developments appearing in the commercial sector. In other words, we need to do something like the reverse of what we are doing now. <br> <br> The DoD needs to look at what is present now, what is present in the next few years, and try to decide: What can we do with it? How can we establish the kind of doctrine for fighting that would take advantage of these new technologies? It may not be what we predicted. One example is the self-driving vehicle, which is coming along faster than we had ever imagined. Had we been following the standard military R&D course, we would have put in place a plan to develop autonomous capability that would arrive at some endpoint 20 years from now. We will get there much faster if we look at what the commercial world is doing, follow it, figure out where we can adapt what is happening now, and if, instead of dictating the requirements for creating a technology, we take those emerging technologies and decide how to use them on the battlefield.<br> <br> None of these ideas by itself is greatly different from what we do today. It is quite common for new technologies to be folded into the way we fight. In order to make those new concepts fit into an acquisition system, we must rethink how we do large-scale acquisition for high-value platforms. This final piece will require the greatest institutional shift. <br> <br> We have believed for a very long time that economies of scale will reduce cost. We buy 1,000 aircraft or 500 aircraft in the belief that once we have got the industrial process established, we have learning curves and other factors that will drive down cost. 
It has never worked quite that way, because in so many cases the 50th and the 500th unit to come off a military production line do not resemble each other; they are not built in the same highly standardized way as a Ford, Hyundai or Apple product. The savings from building 500 aircraft or 50 warships probably are not nearly as much as are often advertised, when you look at the actual return costs. <br> <br> The other problem with buying 500 aircraft at a time is that, if your technology isn’t there at the beginning, it’s not going to get there at all. Therefore, as every program manager knows, we must race to get all the technologies into one platform. If we went to a paradigm where the number produced at a time came down dramatically (still planning to buy 500 aircraft, but in a series of perhaps 20 or 50 at a time, then moving to the next step in the series), the pressure to get the latest technology into that first series would be reduced. There is often an argument that aircraft are already produced in blocks or flights, so you have a block one version of the F-35 jet fighter, a block two version, etc., and, while there are technology insertion points, the major parts of the aircraft really don’t change. The airframe can’t really change that much. The engines can change but not by much. The gross takeoff weight can’t change much. For a ship, the same rules apply. <h3>‘Plug and Play’ Flexibility</h3> For these high-value platforms, the technologies are often in the mission systems and the software. The goal should be to make the platforms more flexible to allow “plug and play” over long periods. It probably makes more sense to think, not about the flexibility of the individual platform or the individual aircraft, but rather the flexibility of the entire series of platforms. 
As a new fighting doctrine evolves, a new line of aircraft, ships and other platforms can be developed, if not in real time, certainly in a way that is more adaptable to these evolving technologies and ways of fighting. This would allow quicker technology insertion and doctrinal change. These faster adaptations would enable far more rapid testing of the technology—and would get us to where we want to go more efficiently.<br> <br> As we think about how we will offset the near-peer competition, we find that our near-peer competitors are quite capable of developing most of the technology, and, quite frankly, in some cases developing it faster than we do. The same commercially developed technologies available to the DoD almost certainly will be available to everyone, including our competitors. The real question is not who will be able to outproduce or out-science or out-technology, but who is going to be able to out-doctrine? <br> <br> The DoD needs to consider adopting a much different defense acquisition paradigm, especially in the new balance-of-great-powers environment. The capabilities and technologies available to the DoD will not be terribly dissimilar from those available to our near-peer competitors. Instead, we must adapt our ways and means of deterring war and conducting war (i.e., doctrine) so that it outpaces that of our potential adversaries. The DoD needs to leave behind the notion of trying to project 20 years into the future and develop technology accordingly. It must instead figure out what our current and near-term technologies let us do, and adapt our defense acquisition and doctrine development process accordingly. This new process will get us inside our peer competitors’ OODA loop—observe, orient, decide, and act—much faster. <hr />Ferreiro is the director of research at the Defense Acquisition University in Fort Belvoir, Virginia. He previously was professor of Systems Engineering and continues to teach as an adjunct professor. 
He is the author of the Pulitzer Prize finalist history book Brothers at Arms: American Independence and the Men of France and Spain Who Saved It. Ferreiro has 40 years of experience in naval and maritime engineering and acquisition. He designed warships for the U.S. Navy, was a systems engineer for the U.S. Coast Guard and was an exchange naval architect with the French Navy. He also served as a technical expert to the International Maritime Organization. <br> <br> The author can be contacted at <a class="ak-cke-href" href=""></a>.<br> <br> <br></div>
Big Data Meets High-Performance Computing (2018-11-01)<div class="ExternalClass79F865AB29044C2887A7B8E9B8C3D837">Big data and high-performance computing (HPC) are common topics in technical circles and the popular press. We read about social media, e-commerce, server farms, cloud computing and mobile computing. In fact, we do more than just read about them; we use them daily, performing Google searches at work, watching a Netflix movie on our smart phone, connecting with friends and family via Facebook, buying goods on Amazon, and in a host of other activities. It isn’t merely convenient or social or trendy; it’s very big business based on the wealth of data collected from these activities and enabled by massive computing resources spread all over the world—big data meets high-performance computing. It’s big business because analysis of the data allows purveyors of goods and services to target our specific interests. Retailers pay well for this access to our buying potential.<br> <br> The Department of Defense (DoD) has analogous challenges. The Army Research Laboratory (ARL) employs graph algorithms to analyze e-mail, social network, and other patterns to identify potential tactical threats and threat precursors. The Test and Evaluation (T&E) community tests every weapon system, network, application, piece of equipment, communication device, data link, etc., and measures everything conceivable to assess its effectiveness, suitability, survivability and safety. These requirements produce massive, heterogeneous, distributed data sets requiring new approaches for analysis and exploitation. A larger challenge still is the growing number of requirements for time-critical analysis and how to use HPC resources for them. 
Within the acquisition life-cycle, T&E is the single largest producer of data.<br> <br> <img alt="" src="/library/defense-atl/DATLFiles/Nov-Dec2018/Article3_image1.jpg" style="margin-left:3px;margin-right:3px;width:454px;height:300px;" /><br> Above: Programmers operate the main panel of an early computer from the 1946-1956 period.<br> <br> Below: Bombs are processed at an Aberdeen Proving Ground munitions plant in Maryland, November 1918.<br> <img alt="" src="/library/defense-atl/DATLFiles/Nov-Dec2018/Article3_image2.jpg" style="width:317px;height:262px;" /><br> U.S. Army photos from the archives of the Army Research Laboratory’s Technical Library. <h3>A Little History </h3> The U.S. Army Research Laboratory and U.S. Army Aberdeen Test Center (ATC) share a common origin dating from U.S. involvement in World War I. Congress moved the ordnance testing facilities from Sandy Hook, New Jersey, to Aberdeen, Maryland, due primarily to Sandy Hook’s limited range capabilities and to the wartime congestion of New York Harbor. The transition began at the end of 1917, and by Jan. 2, 1918, the first test round was fired at what is now Aberdeen Proving Ground (APG). Nine divisions comprising the Proof Department were eventually established at the new proving ground.<br> <br> In 1935, the Ballistic Section was removed from the Gun Testing Division and named the Research Division, which in 1938 became the Ballistic Research Laboratory (BRL). A series of reorganizations and another world war saw the Proof Department become the Ordnance Research and Development Center (ORDC), which included BRL, Development and Proof Services, and the Aberdeen Ordnance Depot. In 1962, the Army Test and Evaluation Command (TECOM) was stood up as the higher headquarters of ORDC and the Development and Proof Services was renamed the Materiel Test Directorate. In 1992, BRL was stood down and the ARL was activated, consolidating eight separate laboratories with other Army research elements. 
In 1995, the Materiel Test Directorate became the U.S. Army Aberdeen Test Center.<br> <br> Today, ARL is the Department of the Army’s corporate laboratory, the Army’s sole fundamental research laboratory focused on scientific discovery, technological innovation, and transition of knowledge products. ATC, one of eight Army Test and Evaluation Command (ATEC) test centers and subordinate commands, has the mission to validate that equipment tested performs as intended, is safe, and that capabilities and limitations are known during the developmental testing of soldier systems, automotive systems, ballistics and survivability tests. ATC performs 25 percent of ATEC’s total workload. <h3>Avalanche of Data</h3> There is no single big data problem, hence no single solution. Even the term “big” is misleading, implying magnitude only. Certainly data Volume (how much data), which is growing exponentially, is a constant concern; however, a collection of Vs characterizes big data: Velocity is the speed at which data arrives and the speed with which decisions based on it must be made. Variety refers to the heterogeneity of storage platforms, data types, representation, semantic interpretation and security classification or other distribution limitations. Veracity is the trustworthiness of the data, its error and uncertainty and its provenance. Value represents what the data is worth in its native state and when aggregated. Value increases from integrating, analyzing and applying the data. 
The five Vs of big data represent characteristics that help users identify their big data problem and assist in defining the right tools and approach.<br> <br> For ARL, big data means a computational sciences research program in four priority areas to support our stakeholders: large-scale computing (hardware, algorithms, software, networks), convergence of HPC and big data (hardware and software architectures and programming approaches), time-sensitive analysis (real-time, time-critical, and on-demand requirements), and tactical computing (locality-aware applications and cognitive devices that accommodate dynamic resource constraints). Big data also means physics-based modeling and simulation in aerodynamics, combustion, materials, structures, meteorology and other domains. These computations are like a gas that expands to fill any volume enclosing it. The HPC resource is the volume and the computation is the gas, filling the entire computational volume no matter the size, one more ever-growing source of data.<br> <br> Testing across the Army has evolved dramatically over the last 40 years. In 1976, instrumentation could capture data at rates up to 160 kilobits per second; today instruments routinely acquire data at 1 gigabit per second. Testing in 1976 was limited to isolated components and systems; today testing often includes networked systems of systems. In the future, vehicles will employ a network connecting all vehicle systems, such as fire control, vehicle control, and engine control. Every entity on the battlefield—soldier, vehicle, sensor, weapon, radio—is becoming a network node, the tactical Internet of Things. Each progression of integration, communication, and connectedness brings more interfaces and interactions, increased complexity, and an avalanche of data. The amount of instrumentation required is becoming overwhelming, and the instruments acquire data at staggering aggregate rates. 
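To put those aggregate rates in perspective, a back-of-the-envelope sketch (assuming, purely for illustration, a single instrument collecting continuously over a 24-hour test day) shows what the rates quoted above imply in daily volume:

```python
# Rough daily data volumes implied by the instrumentation rates quoted
# above. Continuous 24-hour collection is an illustrative assumption;
# real tests duty-cycle their instruments.

SECONDS_PER_DAY = 24 * 60 * 60

def daily_volume_bytes(rate_bits_per_sec: float) -> float:
    """Bytes accumulated in one day at a sustained bit rate."""
    return rate_bits_per_sec / 8 * SECONDS_PER_DAY

rate_1976 = 160e3   # 160 kilobits per second (1976 instrumentation)
rate_today = 1e9    # 1 gigabit per second (today's instruments)

print(f"1976:  {daily_volume_bytes(rate_1976) / 1e9:.2f} GB per day")
print(f"Today: {daily_volume_bytes(rate_today) / 1e12:.2f} TB per day")
print(f"Rate growth: {rate_today / rate_1976:.0f}x")
```

Under that assumption, a single modern instrument could produce on the order of 10 terabytes in a day, which is why the multi-terabyte daily collections described for networked tests outstrip desktop-scale analysis.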
Today, a Network Integration Evaluation (NIE) event can produce hundreds of millions of network packets per day, all of which must be time stamped, location registered, direction of arrival reconciled, and reconstructed to build messages and message threads. Dedicated high-performance computing is required to meet the analysis demands and timelines of today and the future. <h3><img alt="" src="/library/defense-atl/DATLFiles/Nov-Dec2018/Article3_figure1.jpg" style="margin-left:3px;margin-right:3px;float:left;width:438px;height:333px;" />Scientific Computing</h3> Just as there is no generally accepted definition of big data, the same holds true of a high-performance computer. In 1993, a collection of benchmarks was proposed to assess the performance of computers on compute-intensive problems. The benchmarks are now known as the TOP500, and twice a year a list of the top 500 computers in the world is published. These are certainly high-performance computers, but by no means the only ones. They all have one thing in common, however. They trace their roots to the first general purpose scientific computer, the ENIAC (Electronic Numerical Integrator and Computer), designed and built by the University of Pennsylvania Moore School of Electrical Engineering for BRL. The ENIAC became operational in 1946 and was moved to APG in 1947. BRL custom design and development of computers, with origin in the development of firing and bombing tables, ceased in 1976 with the acquisition of a CDC Cyber 7600. Commercial development of scientific computers supplanted custom development.<br> <br> Congress established the High Performance Computing Modernization Program (HPCMP) in Fiscal Year 1992, and the DoD in response stood up the High Performance Computing Modernization Office in 1994 (now called the High Performance Computing Modernization Program Office). Shared resource centers were created and furnished with high-performance computers connected to users via high-bandwidth networks. 
A staff of resident subject-matter experts was hired, commercial applications software was made freely available, and a series of user software initiatives was funded. The HPCMP is the primary source of HPC resources in the DoD today, serving the science and technology, test and evaluation, and acquisition communities. ARL operates one of four HPCMP DoD Supercomputing Resource Centers. ARL also hosts dedicated HPC platforms for stakeholders, such as ATC, which has used dedicated HPCs for large-scale data analytics since 2003. <h3>Challenges and Successes</h3> An example from the ATC is illustrative. Recognize that the example is more than 10 years old yet vividly demonstrates value from large-scale data analytics and the success of the T&E community in using it.<br> <br> ATC conducted a study with the Department of Transportation and a major truck manufacturer for tractor-trailer vehicle fleet analysis. ATC has developed black box instrumentation, referred to as the ADMAS (Advanced Distributed Modular Acquisition System) family, for data collection. The ATC fleet analysis study installed ADMAS devices on 80 trucks to collect data, utilized a cellphone network to transfer data in bursts back to ATC, and employed proven ATC data management tools to store, analyze and visualize the results. The study was successful in elucidating how to better employ the fleet of trucks to improve efficiency. An unintended benefit was also derived through examining accidents. The data included sufficient detail for analysts to identify driving behaviors that led to the accidents in some cases, and propose changes in driver practices to reduce the number of accidents. This was not a requirement of the study. Yet even in this small-scale effort, the data added value through mining.<br> <br> We can extrapolate the ATC truck example to the more recent Mine Resistant Ambush Protected (MRAP) vehicle. 
Data were collected across the vehicle life cycle: early system development, developmental testing, live fire testing, operational testing, training, and in-theater operations. ATC has 20 terabytes (TBs)—or over 20 trillion bytes—of MRAP automotive data from developmental testing, training data from 161 vehicles operating for 47,630 miles, and 15 TB of in-theater data from 337 instrumented vehicles operating for 267,385 miles. Besides collecting in-theater automotive performance data, such as engine parameters, terrain profiles, ride quality information, and environmental temperature, the vehicles are equipped with accelerometers that characterize the response to an explosive impact or rollover event. These results are then compared to live-fire vulnerability data from ATC tests enabling forensic analysis of the events and improving future designs. But what else resides in that data? The data are so massive that traditional means cannot yield the desired results in a timely manner.<br> <br> The Army conducts NIE exercises for up to 6 weeks annually, with as much as 2 TB of data collected daily. The Volume, Velocity and Variety of data are key challenges. In 2012, software developed by ATC and ARL for HPC processing allowed data reduction times to be improved by an order of magnitude, from 60 hours per TB to 5 hours per TB. For the first time, results from one day were able to be analyzed in time to favorably impact the following day’s events. The results were so successful that the software was employed for Warfighter Information Network-Tactical (WIN-T) tests in the fall of 2016 and continued in 2017 during NIE. <h3>Not Your Grandma’s Supercomputer</h3> Throughout the ATC and ARL partnership, each generation of HPC has addressed a different stage of the test and evaluation data flow. 
The first generation consolidated multiple stores of data into a single structured query language (SQL) database and integrated it with Google Earth and other tools for improved analysis and visualization of automotive testing. This industry-standard database technology served ATC well for years, until data reduction, not data access, became the new limiting factor for timeliness. As a result, ATC and ARL rewrote the software for parallel implementation, developed new visualization tools, and applied it successfully to Army networked systems tests.<br> <br> Looking to the future, other latency in the data flow arises from querying the database, a process that has remained serial through all previous improvements. Additional latency comes from the daily movement of tens to hundreds of gigabytes of reduced data to a small cluster used by analysts. The resolution to both limitations is to leave the data on the HPC and execute parallel queries there, enabled by Hadoop and its associated software stack. Hadoop, based on the MapReduce model developed at Google, is open-source software for distributed computing and data management used by giants such as Yahoo and Netflix. The Hadoop stack provides a SQL query interface familiar to database analysts but within an interactive and parallel HPC framework. <br> <br> The new, fully parallel approach, when complete, will make preliminary test and evaluation results available to stakeholders at the end of each test day and promises another order-of-magnitude speedup. Besides application to future WIN-T tests, it will also be used to support autonomous platforms, tactical vehicles such as the Joint Light Tactical Vehicle, and combat vehicles such as the Stryker.<br> <br> Data will not stop growing, and the demand for decisions based on them will only multiply. We are putting into place tools and processes to support acquisition, but what else awaits? 
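The MapReduce pattern behind this Hadoop-style parallel querying can be sketched in miniature. The following Python sketch is purely illustrative, not ATC's actual software: the vehicle names, fields, and partitions are invented, and local threads stand in for the cluster nodes across which a real Hadoop job would distribute the map step.

```python
# Hypothetical sketch of the MapReduce pattern behind Hadoop-style parallel
# queries: each data partition is reduced independently (the "map" step,
# which a cluster would run on separate nodes), and the partial results
# are then merged (the "reduce" step).
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

# Invented telemetry partitions, as they might sit in distributed storage:
# (vehicle_id, speed_mph) records.
partitions = [
    [("truck_01", 52.0), ("truck_02", 61.5), ("truck_01", 48.0)],
    [("truck_02", 58.5), ("truck_01", 50.0)],
    [("truck_03", 44.0), ("truck_03", 46.0)],
]

def map_partition(records):
    """Partial (sum, count) of speed per vehicle for one partition."""
    partial = defaultdict(lambda: (0.0, 0))
    for vehicle, speed in records:
        s, n = partial[vehicle]
        partial[vehicle] = (s + speed, n + 1)
    return dict(partial)

def merge(a, b):
    """Combine two partial-aggregate dictionaries."""
    out = dict(a)
    for vehicle, (s, n) in b.items():
        s0, n0 = out.get(vehicle, (0.0, 0))
        out[vehicle] = (s0 + s, n0 + n)
    return out

# "Map" over partitions in parallel, then "reduce" the partials.
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(map_partition, partitions))
totals = reduce(merge, partials)

# Equivalent of: SELECT vehicle, AVG(speed) FROM telemetry GROUP BY vehicle
avg_speed = {v: s / n for v, (s, n) in totals.items()}
```

A SQL front end over Hadoop compiles a GROUP BY query like the one in the final comment into exactly this kind of per-partition map followed by a merge, which is why such queries parallelize naturally across an HPC.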
An obvious challenge is processing still and video image data and integrating them with other types of data in near-real time. Making all of the data, current and historical, visible, discoverable and accessible is key to unlocking the secrets in the data and making it available for future developments. Automated validation of all data is still beyond reach. Exascale computers (1,000 times faster than current HPCs) portend ever more heterogeneous and distributed resources, well beyond the multi-node, multi-core graphics processing unit (GPU) clusters of today. Many-integrated-core architectures, low-power processors, new applications of Field Programmable Gate Arrays, quantum and quantum-like processors, application-specific HPCs, and neuromorphic chips offer dramatic potential while challenging our creativity to integrate these disparate technologies and develop software tools.<br> <br> ARL and ATC are in a unique position at the intersection of big data and HPC. We bring a century of testing, analysis and scientific computing expertise to this challenge. The title of this article is thus misleading: big data and HPC are not meeting for the first time. They are old friends facing yet another challenge together.<br> <br> On Jan. 2, 2018, a commemorative round was fired at ATC, echoing the original round fired there a century earlier. <hr />Barton is Chief Technologist and Contractor (Parsons) for the U.S. Army Research Laboratory Computational Sciences Division, Aberdeen Proving Ground (APG) in Maryland, and holds a Ph.D. degree in Engineering Science and Mechanics from the University of Tennessee–Knoxville. Wallace is the Technical Director of the Aberdeen Test Center, APG. He holds a Bachelor of Science degree in Electrical Engineering from the West Virginia Institute of Technology. Namburu is the chief scientist of the Army Research Laboratory Computational and Information Sciences Directorate, Adelphi, Maryland, and holds a Ph.D. degree in Mechanical Engineering from the University of Minnesota.<br> <br> The authors can be reached through <a class="ak-cke-href" href=""></a>, <a class="ak-cke-href" href=""></a>, <a class="ak-cke-href" href=""></a>.<br> <br></div>



Boston University’s Metropolitan College (MET) - Equip Yourself for Success (2018-10-12)<div class="ExternalClass81A6C144ED594B0783876FB1E0670898"><strong>Equip Yourself for Success</strong><br> <br> <strong>Boston University’s Metropolitan College</strong> <strong>(MET)</strong> is proud to be a strategic partner of Defense Acquisition University. For more than fifty years, MET has brought the academic excellence and resources of an internationally respected research university to busy professionals via innovative, part-time programs that address the evolving needs of industry. As one of the 17 degree-granting bodies at BU, MET offers more than 70 graduate and undergraduate degree and certificate programs, available evenings on campus, online, and in blended formats, including: <ul> <li>Applied Business Analytics</li> <li>Computer Information Systems</li> <li>Cybersecurity</li> <li>Enterprise Risk Management</li> <li>Financial Management</li> <li>IT Project Management</li> <li>Project Management</li> <li>Software Development</li> <li>Supply Chain Management</li> </ul> <br> At MET, members of the Defense Acquisition Workforce gain access to: <ul> <li>Cutting-edge facilities, the latest learning tools, and student support services</li> <li>Full-time faculty who combine extensive field experience with academic research</li> <li>Challenging class projects and case studies drawn from “real-life” scenarios</li> <li>Elective course waivers for students who have completed PMT 355 Program Management Office, part A, and PMT 360 Program Management Office, part B (up to two elective waivers, total, limited to electives in the MS Project Management program)</li> <li>Project management degree programs accredited by the Project Management Institute (PMI) Global Accreditation Center for Project Management Education Programs (GAC)</li> <li>Online programs ranked among the top five in the nation for military veterans and service members by <em>U.S. News & World Report</em>: Master of Criminal Justice (#2), Master of Science in Computer Information Systems (#2), and the master’s degree programs in management (#5)</li> </ul> <br> Equip yourself for success. Acquire a degree or certificate from Boston University. Our enrollment advisors are ready to assist you at 617-353-6001 or at <u><a href=""></a></u>, or visit <a href=""></a>.<br></div>
Stevens Graduate Programs in Systems Engineering (2018-09-20)<div class="ExternalClass1FF580C77F594176B9CA539320C18BC8"><u><strong>Strong partnerships lead to student cost benefits and faster completion times</strong></u><br> <br> Stevens Institute of Technology is proud to offer DAWIA-certified students access to graduate programs that can help expand their career opportunities. These master’s and graduate certificate programs combine the cutting-edge educational programs of Stevens with the certification training you receive at DAU to deliver the best possible value.<br> <br> Students will be awarded credit hours after receiving DAWIA certifications in Engineering and Life Cycle Logistics and passing related examinations given by Stevens. Details of this program are available in <a href="">the course descriptions listed here</a>. This partnership effectively lowers students' cost of receiving a world-class education and speeds their time to completion. The maximum number of credits that can be earned from a DAU course exam is six graduate credits.<br> <br> Your courses will be offered through Stevens' award-winning online delivery platform, WebCampus. As a student, you will have 24/7 access to download lecture content, interact with your classmates and complete assignments. Our instructors will integrate video and other multimedia, real-time and recorded webinars for presentations, Q&A periods, and online “office hours” where students can interact with faculty.<br> <br> The Stevens School of Systems and Enterprises offers key master’s and graduate certificate programs that focus on systems and software engineering, space systems, design and architecture, life cycle support, complexity, modeling, analytics, and agile methods. Stevens is a leader in systems engineering education and home of the Systems Engineering Research Center, a U.S. Department of Defense Center of Excellence.</div>





