2019 IEEE Aerospace Conference
  • 1 Science and Aerospace Frontiers (Plenary Sessions)
  • 2 Space Missions, Systems and Architectures Marina Ruggieri (University of Roma "Tor Vergata") & Peter Kahn (Jet Propulsion Laboratory) & Steven Scott (NASA Goddard Space Flight Center)
    • 02.01 Deep Space, Earth and Discovery Missions James Graf (Jet Propulsion Laboratory) & Nick Chrissotimos (NASA - Goddard Space Flight Center)
      • 02.0101 ECOSTRESS End-to-end Radiometric Validation William Johnson (Jet Propulsion Laboratory), Renaud Goullioud (Jet Propulsion Laboratory) Presentation: William Johnson - Sunday, March 3rd, 04:30 PM - Jefferson
        The ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) will measure the temperature of plants from the space station. This information will be used to generate products such as evapotranspiration (ET) over an effective diurnal cycle to better understand how much water plants need and how they respond to stresses (e.g., lack of water, sunlight, or nutrients). The radiometer onboard the ECOSTRESS payload provides five thermal infrared (TIR) spectral bands with approximately 70 m pixels and a nearly 400 km swath. It incorporates many new technologies such as a high-speed Mercury Cadmium Telluride (MCT) focal plane array (FPA), black silicon calibration targets, and a thermal suppression filter allowing a shortwave infrared (SWIR) bandpass. This radiometer has two on-board blackbodies to maintain calibration every sweep of the scan mirror (1.4 s). The system has undergone an end-to-end test in a thermal-vacuum (TVAC) chamber showing excellent pre-flight radiometric results. This performance is enabled in part by newly developed, high-speed, low-noise readout electronics. The readout electronics convert all 32 analog channels to digital for onboard processing and downlink. Noise equivalent delta temperature (NEDT) measurements and brightness temperature (BT) retrievals are well within requirements. The optical modulation transfer function (OMTF) is also within specification. The sensor was launched on SpaceX CRS-15 along with Materials ISS Experiment Flight Facility (MISSE-FF) 2 and a Latching End Effector. ECOSTRESS is currently undergoing In-Orbit Checkout (IOC), during which all systems as well as science/calibration data are checked and verified to be operational. ECOSTRESS uses a local WiFi link that sends data from the payload to the ISS. Data packets are then downlinked to the Huntsville Operations Support Center (HOSC) and subsequently archived on the science data system (SDS) servers. A series of calibration targets such as Lake Tahoe, the Salton Sea, and the Great Lakes will be used to verify the top-of-atmosphere radiometric integrity of the science data. Other geometrical targets such as the fields of California and large bridges around CONUS will be used to verify the geolocation accuracy when compared with previous data from the Visible Infrared Imaging Radiometer Suite (VIIRS) and ASTER.
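        As a rough, hedged illustration of the radiometric quantities mentioned above (brightness temperature retrieval and NEDT), the sketch below inverts the Planck function for a single thermal infrared band. It is a generic calculation, not the ECOSTRESS calibration pipeline; the 10.5 um band center, the 300 K scene, and the 0.1% noise-equivalent radiance are assumed purely for illustration.
```python
import math

# Physical constants (SI units)
H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / (math.exp(b) - 1.0)

def brightness_temperature(wavelength_m, radiance):
    """Invert the Planck function to recover brightness temperature [K]."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * KB * math.log(a / radiance + 1.0))

def nedt(wavelength_m, scene_temp_k, noise_equiv_radiance):
    """Noise-equivalent delta temperature: noise radiance divided by dB/dT."""
    dt = 0.01
    dbdt = (planck_radiance(wavelength_m, scene_temp_k + dt)
            - planck_radiance(wavelength_m, scene_temp_k - dt)) / (2.0 * dt)
    return noise_equiv_radiance / dbdt

# Hypothetical TIR band near 10.5 um viewing a 300 K scene.
wl = 10.5e-6
L = planck_radiance(wl, 300.0)
print(f"Recovered BT: {brightness_temperature(wl, L):.2f} K")
print(f"NEDT for a noise radiance of 0.1% of scene radiance: {nedt(wl, 300.0, 0.001 * L) * 1e3:.0f} mK")
```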
      • 02.0102 The James Webb Space Telescope: Mission Overview and Status Matthew Greenhouse (NASA - Goddard Space Flight Center) Presentation: Matthew Greenhouse - Sunday, March 3rd, 04:55 PM - Jefferson
        The James Webb Space Telescope is the successor to the Hubble Space Telescope. It is the largest space telescope ever constructed and will extend humanity’s high-definition view of the universe into the infrared spectrum to reveal early epochs of the universe that the Hubble cannot see. The Webb’s science instrument payload includes four sensor systems that provide imagery, coronagraphy, and spectroscopy over the near- and mid-infrared spectrum. The JWST is being developed by NASA, in partnership with the European and Canadian Space Agencies, with science observations proposed by the international astronomical community in a manner similar to the Hubble. The final stages of pre-flight testing are underway in all areas of the program. This talk will provide an overview of the JWST technical status and scientific discovery potential.
      • 02.0103 Riders on the Storm: NASA InSight Lander and the 2018 Mars Global Dust Storm Michael Lisano (Jet Propulsion Laboratory) Presentation: Michael Lisano - Sunday, March 3rd, 05:20 PM - Jefferson
        On May 5, 2018, NASA’s InSight spacecraft, carrying an international scientific payload with which to explore the interior of Mars, launched from Vandenberg AFB, California, and began its six-and-a-half-month voyage to Mars. Weeks later, on June 3, 2018, a powerful regional Martian dust storm surged through Meridiani Planum and within days had left the 14-year-old Opportunity rover in darkness and generating insufficient solar power to operate. On June 10, Opportunity sent signals including information that the optical depth (“tau”) of the thick dust blanketing the atmosphere was the highest ever measured from the Martian surface, at a value of 10.8. Soon afterwards, Opportunity experienced a power fault and ceased communicating with her ground controllers. By June 20, the storm had spread across the entire planet Mars, darkening the skies at Gale Crater, where the Curiosity rover has been ascending Mount Sharp on the opposite side of Mars from Opportunity. Record-setting high atmospheric dust levels continued to be measured and reported by the Curiosity team as the storm became global. Among those keeping daily tabs on the enormous storm that swiftly enshrouded the Red Planet were the InSight flight team of engineers and scientists, whose recently launched spacecraft was functioning quite well in flight and was now five and a half months away from an unchangeable landing date at Mars: November 26, 2018. Would the storm still be raging when InSight arrived? What, if anything, had this storm changed in the assumptions made by InSight’s design and test engineers about Martian atmospheric conditions for descending to the surface, or generating power there? Could InSight survive and carry out her mission? This paper will deliver an account in two major parts, written by systems engineers of the InSight mission responsible for these mission aspects, treating the Entry, Descent and Landing (EDL) of InSight as well as energy management for surface operations after landing. The first part describes key engineering assessments made and actions taken on InSight during the cruise to Mars, in response to the onset of the 2018 global dust storm. The second part, written after InSight lands, will summarize operational experiences in monitoring and managing energy resources during the first month after InSight’s landing on Nov 26, 2018, describing constraints and impacts presented by the storm and its aftermath on the deployment of InSight’s seismometer and heat probe instruments onto the Martian surface.
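        As a minimal, hedged illustration of why an optical depth of 10.8 was so consequential for a solar-powered rover, the sketch below applies Beer-Lambert attenuation to the direct solar beam. It deliberately ignores diffuse (scattered) skylight, which in reality supplies most of the remaining power at high opacity, and it is not the InSight or Opportunity power model; the comparison opacities other than 10.8 are assumed.
```python
import math

def direct_beam_fraction(tau, sun_elevation_deg):
    """Fraction of the direct solar beam surviving an atmosphere of optical depth tau
    (Beer-Lambert), for a given solar elevation. Diffuse skylight is ignored, which
    strongly understates the usable power at very high opacity."""
    airmass = 1.0 / math.sin(math.radians(sun_elevation_deg))
    return math.exp(-tau * airmass)

# tau = 10.8 is the record value reported by Opportunity; the other values are
# assumed comparison points (clear-ish vs. moderately dusty skies).
for tau in (0.5, 2.0, 10.8):
    frac = direct_beam_fraction(tau, sun_elevation_deg=90.0)
    print(f"tau = {tau:5.1f}: direct-beam transmission = {frac:.2e}")
```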
      • 02.0104 IXPE Observatory Integrated Thermal, Power, and Attitude Mission Design Analysis William (Bill) Kalinowski (Ball Aerospace), William Deininger (Ball Aerospace) Presentation: William (Bill) Kalinowski - Sunday, March 3rd, 09:00 PM - Jefferson
        When the Imaging X-ray Polarimetry Explorer (IXPE) launches in 2021, the world will have a new orbiting X-ray observatory capable of examining previously unexplored celestial phenomena. For the first time, an earth-orbiting observatory will be able to resolve the polarization angle of each incoming X-ray photon in an imaged scene, and provide polarization measurements of each source within the instrument’s field of view. The two top-level project requirements that drive the mission design and observatory capability are the execution, in a one year period, of a Design Reference Mission (DRM) containing 48 representative targets, and the ability to observe any location on the celestial sphere for 30 days every six months. IXPE exceeds these driving requirements with a straightforward observatory design concept that allows a large instrument field of regard with a fixed solar array. IXPE completed its preliminary design review in June 2018 with a baseline power and thermal design that accommodates all observatory attitudes that maintain a +/- 25 degree angle between the body-fixed solar array and the plane normal to the sun vector. This paper examines how observatory attitude affects power consumption, how the DRM targets drive the observatory power and thermal design, and potential system design trades.
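        A minimal sketch of the cosine-loss bookkeeping behind the +/- 25 degree solar array constraint described above is shown below; only the 25 degree off-sun limit comes from the abstract, while the normal-incidence array power is a hypothetical number.
```python
import math

def array_power(p_normal_w, off_sun_angle_deg):
    """Generated power for a fixed, body-mounted array tilted off the sun line,
    using a simple cosine-loss model (no temperature or degradation effects)."""
    return p_normal_w * math.cos(math.radians(off_sun_angle_deg))

P_NORMAL = 800.0  # hypothetical array output at normal incidence [W]
for angle_deg in (0, 10, 25):
    print(f"off-sun angle {angle_deg:2d} deg -> {array_power(P_NORMAL, angle_deg):6.1f} W")
```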
      • 02.0105 IXPE Mission System Concept and Development Status William Deininger (Ball Aerospace), William (Bill) Kalinowski (Ball Aerospace) Presentation: William Deininger - Sunday, March 3rd, 09:25 PM - Jefferson
        IXPE is designed to expand understanding of high-energy astrophysical processes and sources, in support of NASA’s first science objective in Astrophysics: “Discover how the universe works.” Polarization uniquely probes physical anisotropies—ordered magnetic fields, aspheric matter distributions, or general relativistic (GR) coupling to black-hole spin—that are not otherwise measurable. Imaging enables the specific properties of extended X-ray sources to be differentiated. The Imaging X-ray Polarimetry Explorer (IXPE) Mission is a NASA Small Explorer (SMEX). It is designed as a 2-year mission which launches to a circular LEO orbit at an altitude of 540 km and an inclination of 0 degrees. The payload uses a single science operational mode capturing the X-ray data from the targets. The mission design follows a simple observing paradigm: pointed viewing of known X-ray sources (with known locations in the sky) over multiple orbits (not necessarily consecutive orbits) until the observation is complete. The Observatory communicates with ground stations via an S-band link. The IXPE Observatory consists of spacecraft and payload modules built up in parallel to form the Observatory during system integration and test. IXPE’s payload is a set of three identical, imaging, X-ray polarimeter systems mounted on a common optical bench and co-aligned with the pointing axis of the spacecraft. Each system, operating independently, comprises a 4-m-focal-length Mirror Module Assembly that focuses X-rays onto a polarization-sensitive imaging detector, separated from the mirrors by the deployable boom. Each Detector Unit (DU) contains its own electronics, which communicate with the payload computer that in turn interfaces with the spacecraft. Each DU has a multi-function filter wheel assembly for in-flight calibration checks and source flux attenuation. The payload is accommodated on the spacecraft top deck. The spacecraft provides the necessary resources to support and operate the payload elements and enable continuous science data collection. The IXPE Observatory is designed to launch on a Pegasus XL or larger launch vehicle. Due to X-ray telescope focal length requirements, a deployable boom is required to enable packaging of the stowed Observatory within the Pegasus XL fairing. The ground system consists of three major elements: the ground stations for data receipt and command upload to the Observatory; the Mission Operations Center (MOC); and the Science Operations Center (SOC). The primary ground station, contributed to the IXPE mission as part of an international collaboration, is at Malindi, Kenya. The back-up ground station is in Singapore through the NEN. TDRSS is used for launch and early operations. The MOC is located at CU/LASP using their existing multi-mission MOC. The SOC is located at MSFC and the data archive at the GSFC HEASARC. This paper summarizes the IXPE mission system and provides more details on the Observatory, the expected launch process, and the MOC and SOC, along with design and development status.
      • 02.0107 Moon Diver: A Discovery Mission Concept for Exploring a Lunar Pit to Investigate Flood Basalts Issa Nesnas (Jet Propulsion Laboratory) Presentation: Issa Nesnas - Sunday, March 3rd, 09:50 PM - Jefferson
        Flood basalts are gigantic volcanic eruptions that play a key role in resurfacing planetary bodies and influencing their atmospheres, which consequently impact their habitability. Flood basalts are common within the inner solar system and cover a notable percentage of Mercury, Venus, Earth, the Moon, and Mars. The ideal body for studying flood basalts is one that is easily accessible, has a consistent morphology, does not have active geological processes, and is well preserved. Weathering by wind and water as well as plate tectonics create serious challenges on Earth; the lack of these complications makes the Moon a serious contender to be the ideal place for this investigation. The discovery of lunar pits by the JAXA SELENE/Kaguya mission and subsequent orbital observations by NASA’s Lunar Reconnaissance Orbiter revealed tens of meters of exposed stratigraphy, offering a unique site for directly accessing the basaltic layers. The objective of the Moon Diver mission concept is to understand the origins, emplacement processes, and evolution of these basalts to inform their influence on solar system planetary bodies. Two key capabilities would be needed: a landing capability that would deliver a spacecraft to within tens of meters of a target and a robotic explorer that would be capable of traversing to the pit, ingressing, and acquiring measurements along its near-vertical wall. JPL has been developing a vision-based capability for pinpoint landing using terrain-relative navigation that repeatedly matches visual features from a downward-facing camera to an a priori terrain map. This navigation information is used to guide the spacecraft toward its landing target, resulting in a tight landing ellipse. Once on the surface, an extreme-terrain robotic explorer, called Axel, would egress from the lander. Axel is a two-wheeled tail-dragger with a tether that is anchored to the lander. It carries hundreds of meters of tether that it pays out as the rover traverses away from the lander. With the aid of the tether, the rover can rappel down steep slopes using the same principle of motion as a yo-yo. The rover carries two large hubs covered by the wheels, which can house multiple instruments each. By coordinating its four actuators (wheels, tail, and spool), the rover is capable of pointing its instruments with millimeter repeatability while hanging from the tether. The lander provides mechanical support, power, and communication to the rover through the tether. The 50 kg Axel rover has clocked over a kilometer of mobility in field tests and has traversed near-vertical slopes. The mission timeline is one lunar day for rappelling and acquiring context imagery along a transect and its opposing wall, collecting microscopic imagery under controlled lighting for mineralogy, and acquiring alpha particle X-ray spectroscopy measurements for elemental composition. The instruments as well as a dust removal tool would be deployed from the instrument bays. Lunar pits may open into subsurface lava tubes. This mission would enable us to peer into the potential void, providing an exciting new target for lunar exploration.
      • 02.0110 Europa Clipper Mission: Preliminary Design Report Todd Bayer (NASA Jet Propulsion Lab), Maddalena Jackson (Jet Propulsion Laboratory) Presentation: Todd Bayer - Monday, March 4th, 05:20 PM - Jefferson
        Europa, the fourth largest moon of Jupiter, is believed to be one of the best places in the solar system to look for extant life beyond Earth. Exploring Europa to investigate its habitability is the goal of the Europa Clipper mission. The Europa Clipper mission envisions sending a flight system, consisting of a spacecraft equipped with a payload of NASA-selected scientific instruments, to execute numerous flybys of Europa while in Jupiter orbit. A key challenge is that the flight system must survive and operate in the intense Jovian radiation environment, which is especially harsh at Europa. The spacecraft is planned for launch no earlier than June 2022, from Kennedy Space Center (KSC), Florida, USA, on a NASA supplied launch vehicle. The mission is being implemented by a joint Jet Propulsion Laboratory (JPL) and Applied Physics Laboratory (APL) Project team. The project recently held its Project Preliminary Design Review (PDR) and in January 2019 NASA will consider approving the mission for entry into Phase C, the Detailed Design phase. A down-selection to one launch vehicle by NASA is anticipated sometime before CDR. This paper will describe the progress of the Europa Clipper Mission since January 2018, including maturation of the spacecraft, subsystem and instrument preliminary designs, issues and trades, and planning for the Verification & Validation phase.
      • 02.0111 Mission Concept for a Europa Lander Jennifer Dooley (Jet Propulsion Laboratory, California Institute of Technology) Presentation: Jennifer Dooley - Monday, March 4th, 09:00 PM - Jefferson
        A NASA HQ-directed study team led by JPL with partners including APL, MSFC, GSFC, LaRC and Sandia National Laboratory has recently presented an updated mission concept for a Europa Lander that would search for bio-signatures and signs of life in the near-subsurface of the Jovian moon. This paper will describe that mission architecture including science objectives, interplanetary and delivery trajectory, flight system, planetary protection architecture, and mission phases. The mission would follow the Europa Multiple-Flyby Mission (Clipper), planned for launch in June of 2022, which would provide reconnaissance imagery and other data to the Lander for use in selecting a scientifically compelling site and certifying it for engineering safety. The Europa Lander concept accommodates the Model Payload identified by the Europa Lander Science Definition Team (SDT) and documented in the Europa Lander Study 2016 Report released in February of 2017. Since the Mission Concept Review (MCR) held in June of 2017, HQ has directed the study team to further explore the architectural trade space with a goal of reducing the mission cost. Based on the results of that study, in December of 2017 HQ directed the study team to focus on biosignature science and shift to a Direct-to-Earth communication architecture. The currently envisioned Europa Lander would launch on an SLS Block 1B as early as Fall of 2026 into a V-Earth-Mars-Gravity Assist (VEMGA) trajectory, arriving in the Jovian system as early as mid-2031. The baseline design of the integrated flight system includes a Carrier Stage and a Deorbit Vehicle composed of a Deorbit Stage consisting of a solid rocket motor (SRM), an MSL-like sky-crane Descent Stage, and a Lander which accommodates the instrument suite. The Lander would be powered by primary batteries over a 20+ day surface mission. The science goals envisioned by the SDT for biosignature science require three samples taken from a depth of 10 cm, a depth chosen to ensure minimal radiation processing of the potential biomarkers. Mission challenges include the large launch mass, unknown terrain topography, surface composition and materials properties, the high radiation environment, and complying with the stringent planetary protection requirements. The mission concept uses a strategy of early risk reduction and overlapping requirements to provide robustness to harsh and uncertain environments. Early risk reduction efforts are aimed at maturing technologies associated with the sampling system, the intelligent landing system, high-specific-energy batteries, low mass and power motor controllers, and a thermal sterilization system. The information presented about the Europa Lander mission concept is pre-decisional, and is provided for planning and discussion purposes only.
      • 02.0112 Exploring the Chemical Diversity of Comets, Asteroids, and Interstellar Dust at 1 AU. Mihaly Horanyi (University of Colorado, Boulder) Presentation: Mihaly Horanyi - Monday, March 4th, 09:25 PM - Jefferson
        Deciphering the composition of interstellar, cometary, and asteroidal dust—the successive generations of the most unaltered original building blocks—offers an unparalleled opportunity to explore the origin and evolution of our Solar System. The goal of the FOSSIL mission (Fragments from the Origins of the Solar System and our Interstellar Locale) is to confirm or disprove expectations that comets from the Oort cloud (OCC) deliver fragments of pristine material, that is, the most carbon-rich and least aqueously altered matter from the early stages of Solar System formation, and to resolve whether Jupiter-family comets (JFC) represent a transition between OCC and asteroidal material, where asteroidal matter is expected to be heat-treated, carbon-poor, and aqueously altered. FOSSIL is the first mission with the goal of unambiguously identifying, and comprehensively exploring the makeup of, interstellar dust particles (ISD) flowing through our Solar System today, delivering matter closest to the original solid building blocks of our Solar System. Contemporary ISD will be directly compared to ISD from 4.5 billion years ago found in meteorites. The expected differences will inform us about how the metallicity of our galaxy might have changed during this time. This goal directly addresses the Decadal Survey question: “What were the initial stages, conditions, and processes of Solar System formation and the nature of the interstellar matter that was incorporated?” The Decadal Survey recognized that: “There are too many asteroids, comets, and KBOs to explore individually by spacecraft. Mission choices and target selection must be based on a comprehensive assessment of all available information. The science return from such missions is often enriched by the results of ongoing laboratory studies of meteorites and interplanetary dust and by complementary telescopic and Earth-orbital measurements.” FOSSIL offers the solution as the first comprehensive survey to explore the diversity of the chemical makeup of a broad range of bodies in our Solar System and beyond, offering a powerful approach to test the genetic relationships between small-body reservoirs. This approach builds on many decades of ground-based radar observation of the speed and direction of meteors from various sources, which lack compositional information, and many decades of laboratory efforts exploring the chemical composition of meteorites, which similarly lack information about their origins. FOSSIL would be placed in an Earth-trailing orbit, carrying 4 state-of-the-art Dust Telescopes (DT), continuously monitoring the anti-sunward hemisphere and measuring the mass, composition, charge, and velocity vector of each impacting dust particle. Each DT consists of a Dust Trajectory Sensor (DTS) and an impact-ionization reflectron-type time-of-flight (TOF) Composition Analyzer (CA). The recent developments in understanding the dynamics of small particles originating from comets, asteroids, and interstellar space, combined with the accumulation of in-situ measurements of dust composition by Giotto, Stardust, Rosetta, and Cassini, and the breakthrough advances in dust detection capabilities, make FOSSIL a timely, ultra-low-cost, low-risk, and scientifically enticing mission.
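        As a sketch of how a time-of-flight composition analyzer of the kind described above separates ion species, the code below uses the idealized drift relation t = L * sqrt(m / 2qU) and its inverse. The acceleration voltage and drift length are hypothetical and reflectron-specific corrections are ignored; this is not the FOSSIL instrument model.
```python
import math

E_CHARGE = 1.602e-19  # elementary charge [C]
AMU = 1.661e-27       # atomic mass unit [kg]

def flight_time(mass_amu, charge_state, accel_voltage_v, drift_length_m):
    """Idealized time of flight for an ion accelerated through a potential U and
    drifting a fixed length L: t = L * sqrt(m / (2 q U))."""
    m = mass_amu * AMU
    q = charge_state * E_CHARGE
    return drift_length_m * math.sqrt(m / (2.0 * q * accel_voltage_v))

def mass_from_time(t_s, charge_state, accel_voltage_v, drift_length_m):
    """Invert the same relation to recover the mass (in amu) from the measured time."""
    q = charge_state * E_CHARGE
    return 2.0 * q * accel_voltage_v * (t_s / drift_length_m) ** 2 / AMU

# Hypothetical analyzer: 1 kV acceleration, 0.2 m effective drift length.
for species, m_amu in (("C+", 12), ("Mg+", 24), ("Fe+", 56)):
    t = flight_time(m_amu, 1, 1000.0, 0.2)
    m_rec = mass_from_time(t, 1, 1000.0, 0.2)
    print(f"{species:3s}: t = {t * 1e6:5.2f} us, recovered m = {m_rec:4.1f} amu")
```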
      • 02.0113 InSight: A Discovery Mission to Mars Tom Hoffman (Jet Propulsion Laboratory) Presentation: Tom Hoffman - Monday, March 4th, 09:50 PM - Jefferson
        The InSight Mission is planning for a November 26, 2018 landing on Mars. Over the subsequent several months, the Lander will deploy instruments to the Martian surface and start the science mission. The science delivered from InSight will uncover the geophysical characteristics of Mars and use comparative planetary geophysical techniques to better understand the formation and evolution of Mars and thus, by extension, other terrestrial planets. The mission science uses several instruments and sensors, many of which are international contributions, to gather the science data. This paper will describe the InSight mission and science objectives with a focus on the activities since the May 5, 2018 launch and the plans for the November Entry, Descent and Landing phase, and possibly some initial deployment results.
    • 02.02 Future Space and Earth Science Missions Robert Gershman (JPL) & Patricia Beauchamp (Jet Propulsion Laboratory)
      • 02.0206 Beyond TRL 9: Achieving the Dream of Better, Faster, Cheaper through Matured Commercial Technology Peter Lord (SSL), Dan Goebel (Jet Propulsion Laboratory) Presentation: Peter Lord - Thursday, March 7th, 09:00 PM - Amphitheatre
        On its web site NASA defines Technology Readiness Level (TRL) 9 as: “Actual systems ‘flight proven’ through successful mission operations.” It’s the gold standard for the development and implementation of new technologies on NASA spacecraft, and comes from the concept that things simply don’t get any better than something that has actually flown. In NASA’s eyes, the risks associated with maturing a new technology and using it in space are considered to have been retired by actual in-space mission performance. But have they? Does working in space once guarantee future reliability and success the next time each technology is used? The Psyche mission is procuring the majority of its spacecraft bus from SSL’s commercial communication product line, and its use of high-heritage commercial technology illustrates the utility of defining levels of maturity beyond TRL 9. The TRL 9 designation is insufficient for describing technologies in continuous high-volume production that have been matured for high reliability and flown on multiple missions. In many cases, commercial technology incorporates lessons learned over many builds and flight applications, resulting in second-generation design maturity with high reliability, reduced burn-in times, reduced infant mortality problems, and high reproducibility. The Psyche mission’s use of second-generation commercial SEP technology makes it possible to characterize technologies more mature than TRL 9, while at the same time revealing the limitations inherent in the TRL 9 designation. This paper revisits the development and application of the TRL scale to NASA missions in the light of emerging commercial hardware usage. It applies the current TRL scale to the development of the Psyche Mission’s commercial SEP Chassis to expose the limitations of the current scale. Finally, we propose two additional TRL maturity levels to designate the higher levels of maturity now available to NASA, for the purpose of allowing them to be recognized and utilized to explore the solar system better, faster, and cheaper than ever before.
      • 02.0208 An On-Orbit CubeSat Centrifuge for Asteroid Science and Exploration Jekan Thangavelautham (University of Arizona), Stephen Schwartz () Presentation: Jekan Thangavelautham - Thursday, March 7th, 09:25 PM - Amphitheatre
        There are thousands of asteroids in near-Earth space and millions expected in the Main Belt. They are diverse in their physical properties and compositions, and are time capsules of the early Solar System. They are valuable for planetary science, and are strategic for resource mining, planetary defense/security and as interplanetary depots. But we lack direct knowledge of the geophysical behavior of an asteroid surface under milligravity conditions, and therefore landing on an asteroid and manipulating its surface material remains a daunting challenge. Towards this goal we are putting forth plans for a 12U CubeSat that will be in Low Earth Orbit and that will operate as a spinning centrifuge on-orbit. In this paper, we will present an overview of the systems engineering and instrumentation design on the spacecraft. Parts of this 12U CubeSat will contain a laboratory that will recreate asteroid surface conditions by containing crushed meteorite. The laboratory will spin at 1 to 2 RPM during the primary mission to simulate surface conditions of asteroids 2 km and smaller, followed by an extended mission where the spacecraft will spin at even higher RPM. The result is a bed of realistic regolith, the environment that landers and diggers and maybe astronauts will interact with. The CubeSat is configured with cameras, lasers, a penetrometer, gas blower and a bead deployer to both observe and manipulate the regolith at low simulated gravity conditions. A series of experiments will measure the general behavior, internal friction, adhesion, dilatancy, coefficients of restitution and other parameters that can feed into asteroid surface dynamics simulations. Effective gravity can be varied, and external mechanical forces can be applied. These centrifuge facilities in space will require significantly less resources and budget to maintain, operating in LEO, compared to the voyages to deep space. This means we can maintain a persistent presence in the relevant deep space environment without having to go there. Having asteroid-like centrifuges in LEO would serve the important tactical goal of preparing and maintaining readiness, even when missions are delayed or individual programs get cancelled.
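        A back-of-envelope check of the spin rates quoted above is sketched below: the centripetal acceleration at the regolith bed is compared with the surface gravity of a small, uniform-density asteroid. Only the 1-2 RPM range comes from the abstract; the 0.1 m bed radius and the 2000 kg/m3 bulk density are assumptions.
```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def centrifuge_accel(rpm, radius_m):
    """Centripetal acceleration a = omega^2 * r at the regolith bed."""
    omega = rpm * 2.0 * math.pi / 60.0
    return omega ** 2 * radius_m

def asteroid_surface_gravity(radius_m, density_kg_m3=2000.0):
    """Surface gravity of a uniform-density spherical asteroid: g = (4/3) pi G rho R."""
    return 4.0 / 3.0 * math.pi * G * density_kg_m3 * radius_m

# Spin rates from the abstract (1-2 RPM); the bed radius inside a 12U bus is assumed.
for rpm in (1.0, 2.0):
    print(f"{rpm:.0f} RPM at r = 0.1 m -> {centrifuge_accel(rpm, 0.1):.1e} m/s^2")
print(f"1 km-radius asteroid surface gravity ~ {asteroid_surface_gravity(1000.0):.1e} m/s^2")
```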
      • 02.0209 Power System Analysis of the Space Fence Evaluation of Radar EffectivenesS (SFERES) Cubesat Mission Carlos Maldonado (University of Colorado at Colorado Springs) Presentation: Carlos Maldonado - Thursday, March 7th, 09:50 PM - Amphitheatre
        In 2018, a ground-based S-band radar system named Space Fence will undergo operational testing. This radar is designed to discover and frequently track tens of thousands of satellites and debris objects in orbit around Earth. It is challenging to calibrate and test a system meant for discovering small objects, because the only calibration objects in orbit are large. To alleviate this, the Air Force Operational Testing and Evaluation Center is working with the US Air Force Academy and the Space Fence System Program Office to develop a cubesatellite to characterize the radar’s ability to expand the space object catalog’s fidelity. The cubesatellite will eject two small calibration spheres in low Earth orbit to be tracked by the Space Fence System and other sensors. The radar cross sections of the spheres are precisely measured to support calibration of the sensors that will track them in orbit. This paper discusses the cubesatellite’s design, on-orbit mission, and an analysis of the power system. The SFERES cubesat is a 1U satellite consisting predominantly of COTS components to allow for rapid bus development and risk mitigation. The avionics, radio, battery pack, and solar panels were obtained from the California Polytechnic State University cubesat group, PolySat, based on the group’s previous flight heritage and expertise in the field of nanosatellite technologies. The use of COTS components allows for the rapid design, manufacture, testing, and deployment of highly customizable payloads such as the calibration spheres that will be used to test the Space Fence. One of the constant challenges in cubesat design is the volume, mass, and power limitations. In terms of power requirements, the SFERES system must have sufficient energy to deploy the payload door and two calibration spheres using a custom Nitinol burn-wire system and to operate LEDs for optical ground tracking, while simultaneously supplying power for on-board processing and communications. The cubesat is equipped with 8 solar cells, each with an area of 26.6 cm2. STK was used to estimate the power generation as a function of the incident solar flux on the solar panels. The spacecraft orbit was modeled using a precessing spin to simulate tumble. The power generation was then analyzed as a function of a single orbit. The average time in sunlight per orbit is 56.6 min, resulting in an average energy generation of 2.32 Wh per orbit; however, there is an average energy dissipation of 0.65 Wh per orbit. The dissipation is the result of the minimum satellite operations that are required during system idle. The cubesatellite will also benefit optical sensors. The larger sphere will have an optically-measured iridite coating, and the cubesatellite bus will contain LEDs at frequencies that support testing of selected optical sensors. This cubesatellite platform has the ability to provide real-world on-orbit characterization of billion-dollar assets built to protect the USA and its allies, as well as to expand space situational awareness.
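        The per-orbit energy bookkeeping described above can be sketched as follows. The 2.32 Wh generation, 0.65 Wh idle dissipation, 56.6 min of sunlight, and 8 x 26.6 cm2 cell area come from the abstract; the cell efficiency, tumble-averaged cosine factor, and payload-event cost are assumptions used only to sanity-check those figures.
```python
# Per-orbit numbers quoted in the abstract; everything else below is assumed.
E_GEN_WH = 2.32   # average energy generated per orbit [Wh]
E_IDLE_WH = 0.65  # average energy dissipated by idle operations per orbit [Wh]

def net_margin_wh(payload_event_wh=0.0):
    """Net energy margin for one orbit, optionally charging a payload event
    (burn-wire sphere deployment plus LED operation) whose cost is assumed."""
    return E_GEN_WH - E_IDLE_WH - payload_event_wh

print(f"Idle orbit margin: {net_margin_wh():.2f} Wh")
print(f"Orbit with a hypothetical 1.2 Wh deployment event: {net_margin_wh(1.2):.2f} Wh")

# Rough sanity check of the generation figure: 8 cells x 26.6 cm^2, ~28% efficient
# cells (assumed), 1361 W/m^2 solar flux, and an assumed tumble-averaged cosine
# factor of ~0.3 over the 56.6 min of sunlight per orbit.
area_m2 = 8 * 26.6e-4
p_avg_w = 1361.0 * area_m2 * 0.28 * 0.3
print(f"Estimated generation: {p_avg_w * 56.6 / 60.0:.2f} Wh per orbit")
```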
    • 02.03 System and Technologies for Landing on Planets, the Moon, Earth and Small Bodies Ian Clark (Jet Propulsion Laboratory)
      • 02.0301 Semi-Active Damping System Characterization for Landing in Microgravity Mauro Massari (Politecnico Di Milano), Paolo Astori (Politecnico di Milano), Francesco Cavenago (Politecnico di Milano) Presentation: Mauro Massari - Monday, March 4th, 08:30 AM - Madison
        The landing of space probes in microgravity poses a very challenging problem from both the dynamical and the technological points of view. In particular, the recent experience of the landing of Philae on the comet 67P/Churyumov-Gerasimenko showed that even highly redundant systems do not guarantee a sufficient degree of robustness for the landing procedure in microgravity. On 12 November 2014, the Philae lander performed the first landing in human history on a comet. On contact with the surface, two harpoons were to be fired into the comet soil while firing a rocket to prevent the lander from bouncing off, as the comet’s escape velocity is only around 0.4 m/s. Analysis of telemetry indicated that both the harpoons and the rocket had not fired upon landing, causing the lander to rely only on a passive damping system. The main problem in designing a damping system for landing in microgravity is the high uncertainty associated with the features of the soil (i.e., damping and stiffness coefficients); therefore, a highly robust design relying only on a passive system is not possible. In this work a new approach to increase the robustness of the damping system for landing in microgravity is proposed, coupling a passive granular shock absorber with a semi-active piezoelectric-based friction damper which can modulate the applied braking force using a fast piezoelectric actuator acting on a brake pad. Pushing something into a granular material (like sand) creates a reaction force able to stop the motion. Moreover, this process occurs gradually, without involving excessive deceleration and preserving the integrity of the impacting object. The energy dissipation occurs through collisions between the granules and the friction phenomenon. For this reason, the granular shock absorber has been considered a viable option for the passive part of the semi-active damping system. For the active part of the damping system, the piezoelectric-based friction damper consists of an actuator based on a piezoelectric stack with a mechanical amplifying mechanism that provides symmetric forces. The advantages of such an actuator are its high bandwidth, fast actuating response, and its ability to operate in a vacuum environment such as space. The proposed concept has been carefully modelled numerically, identifying characteristic parameters of both the passive and active parts. In the case of the granular shock absorber, numerical discrete-element simulations have been validated by conducting an experimental campaign in relevant conditions. Details of those simulations and tests will be provided in the paper. Finally, the identified model of the semi-active damping system has been used to conduct an extensive sensitivity analysis of the performance achievable over a wide range of landing velocities and soil features. The results of this analysis show that the proposed approach allows reducing the maximum reaction force applied to the landing system while increasing the acceptable uncertainty in the soil characteristics when compared with the equivalent passive-only system.
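        A minimal one-degree-of-freedom sketch of the semi-active idea, assuming a spring/damper stand-in for the granular absorber and a modulated Coulomb friction force for the piezo brake, is shown below. All parameters are hypothetical and the real models (discrete-element granular dynamics, piezoelectric stack behavior) are far richer; the sketch only illustrates the rebound-versus-transmitted-force trade that the semi-active system manages.
```python
import math

# 1-DOF lander touchdown: penetration x (m, positive into the soil), velocity v.
M = 100.0     # lander mass [kg] (hypothetical)
V0 = 0.5      # touchdown speed [m/s], comparable to a small comet's escape speed
K = 400.0     # effective passive stiffness [N/m] (hypothetical)
C = 60.0      # effective passive damping [N s/m] (hypothetical)
F_CMD = 30.0  # commanded semi-active friction force magnitude [N] (hypothetical)
DT = 1e-4     # integration step [s]

def simulate(f_cmd):
    """Integrate the touchdown until the lander leaves the surface (x < 0)."""
    x, v, t, peak = 0.0, V0, 0.0, 0.0
    while x >= 0.0 and t < 5.0:
        # Spring/damper resist penetration; friction always opposes the motion.
        f_net = -K * x - C * v - math.copysign(f_cmd, v)
        peak = max(peak, abs(f_net))
        v += (f_net / M) * DT
        x += v * DT
        t += DT
    return abs(v), peak  # rebound speed and peak transmitted force

for label, f in (("passive only", 0.0), ("semi-active ", F_CMD)):
    v_out, f_peak = simulate(f)
    print(f"{label}: rebound speed {v_out:.3f} m/s, peak force {f_peak:.1f} N")
```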
      • 02.0305 Aero Maneuvering Dynamics and Control for Precision Landing on Titan Marco Quadrelli (Jet Propulsion Laboratory) Presentation: Marco Quadrelli - Monday, March 4th, 08:55 AM - Madison
        Saturn’s moon Titan is the richest laboratory in the solar system for studying prebiotic chemistry, which makes studying its chemistry from the atmosphere to the surface one of the most important objectives in planetary science. Studying Titan’s organic chemistry requires landing to sample and analyze fluids, dissolved species, and sediments from Titan’s seas, lakes, tidal pools, or shorelines. Landing dispersions with existing technology are hundreds of kilometers wide, precluding landing in any liquid body except the large seas at high northern latitudes. Low- to medium-cost missions require direct-to-Earth (DTE) communication; seasons on Titan now preclude such missions from landing at the northern seas before the late 2030s. With these large landing dispersions, access to shorelines or other smaller features on Titan, which may present liquid-solid interfaces or more dynamic environments conducive to more chemical evolution, is only conceivable by relying on wind drift after landing on large seas. Therefore, there is a critical need for a more precise landing capability to explore the unique potential for prebiotic chemistry on Titan’s surface. The focus of our work is on technology development to substantially reduce Titan lander delivery error. By far the greatest contribution to this error in past Titan mission designs has been long parachute descent phases (~2.5 hours) from high altitudes (~150 km) in high winds with large wind uncertainties; therefore, addressing error during parachute descent is the key to enabling precision landing. The lowest delivery error would be achieved with a multi-stage parachute system, with an unguided drogue parachute that descends rapidly through altitudes with high winds, followed by a guided parafoil with a high glide ratio that flies out position error at lower altitudes. Parafoil deployment at altitudes up to 40 km, where proven descent camera technology could see the surface to enable position estimation, could reduce delivery error by 100 km or more. Parafoil aerodynamic performance has not yet been characterized for the dense Titan atmosphere, and parafoil G&C algorithms must be adapted to the unique characteristics of Titan missions. As part of this effort, and leveraging past work, we have been developing a simulation of end-to-end EDL performance and using the simulation to estimate and optimize expected landing dispersion, with the goal of showing the feasibility of reducing delivery error by at least 100 km compared to a Huygens-like descent. We have developed and tested several dynamic models of the parafoil system descending in Titan’s atmosphere. We have also developed techniques for autonomous parafoil turning in the adverse wind environment, including an assessment of the turn performance effectiveness to negotiate the incoming wind. Finally, in order to improve the controller performance by reducing the uncertainty due to environmental factors, we have also developed ways to estimate the Titan environmental parameters, i.e., the atmospheric density and the wind magnitude, during the descent.
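        A crude reachable-range estimate for the guided parafoil phase, under the assumptions noted in the comments, is sketched below. Only the 40 km deployment altitude comes from the abstract; the glide ratio, sink rate, and wind speed are placeholders, not characterized Titan values.
```python
def reachable_range_km(deploy_alt_km, glide_ratio, sink_rate_m_s, wind_m_s):
    """Crude still-air reach (altitude x L/D) bracketed by a constant wind drift
    over the descent time. Altitude-varying winds and density are ignored."""
    still_air_reach_km = deploy_alt_km * glide_ratio
    descent_time_s = deploy_alt_km * 1000.0 / sink_rate_m_s
    wind_drift_km = wind_m_s * descent_time_s / 1000.0
    return still_air_reach_km - wind_drift_km, still_air_reach_km + wind_drift_km

# Deployment at 40 km (from the abstract); L/D = 3, 2 m/s sink, 1 m/s wind assumed.
upwind, downwind = reachable_range_km(40.0, glide_ratio=3.0, sink_rate_m_s=2.0, wind_m_s=1.0)
print(f"Reachable ground range: {upwind:.0f} km upwind to {downwind:.0f} km downwind")
```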
      • 02.0308 Mars 2020 Entry, Descent, and Landing System Overview Adam Nelessen (Jet Propulsion Laboratory), Chloe Sackier (), Ian Clark (Jet Propulsion Laboratory), Paul Brugarolas (), Gregorio Villar (), Allen Chen (Jet Propulsion Laboratory), Aaron Stehura (Jet Propulsion Laboratory), Richard Otero (Georgia Institute of Technology), Erisa Stilley (), David Way (NASA - Langley Research Center), Karl Edquist (NASA - Langley Research Center), Swati Mohan (NASA Jet Propulsion Laboratory), Cj Giovingo (Jet Propulsion Laboratory), Mallory Lefland () Presentation: Adam Nelessen - Monday, March 4th, 09:20 AM - Madison
        Building upon the success of the Mars Science Laboratory (MSL) landing and surface mission, the Mars 2020 project is a flagship-class science mission intended to address key questions about the potential for life on Mars and collect samples for possible Earth return by a future mission. Mars 2020 will also demonstrate technologies needed to enable future human expeditions to Mars. Utilizing the groundbreaking entry, descent, and landing (EDL) architecture pioneered by MSL, Mars 2020 will launch in July 2020 and land on Mars in February 2021. Like its predecessor, Mars 2020 will deliver its rover payload to the Martian surface through the use of Apollo-derived entry guidance, a 21.45 meter supersonic Disk-Gap-Band parachute, a Descent Stage powered by throttleable Mars lander engines, and the signature Sky Crane maneuver. While Mars 2020 inherits most of its EDL architecture, software, and hardware from MSL, a number of changes have been made to correct deficiencies, improve performance, and increase the robustness of the system. For example, Mars 2020 will take advantage of the favorable atmospheric conditions of the 2020 launch opportunity to deliver a larger and more capable rover than has landed on Mars to date. A primary focus in developing the Mars 2020 EDL system has been mitigating residual risks identified after the landing of MSL. The Advanced Supersonic Parachute Inflation Research Experiment (ASPIRE) was performed to address new concerns about the stresses experienced by parachute canopies during inflation. Other risk reduction activities include investigating possible interactions between the parachute deployment system and the inertial measurement unit (IMU) which could lead to IMU saturation, researching the effects of airborne dust on radar ground measurements, and site-specific gravity modeling for improved fuel usage. Several enhancements were added for Mars 2020 to improve performance. The addition of Terrain Relative Navigation (TRN) allows the system to land at sites with more hazardous terrain, enabling scientists to select from locations which have previously been considered inaccessible. Mars 2020 will utilize a Range Trigger for initiating parachute deployment, which reduces landing ellipse sizes by 40% compared to the Velocity Trigger approach used on MSL. New EDL Cameras will capture high resolution and high frame rate images and videos of key events, such as parachute deployment and rover touchdown. Finally, the Mars Entry, Descent, and Landing Instrumentation 2 (MEDLI2) sensor suite will build upon the successful MSL MEDLI experiment with the addition of supersonic heatshield pressure sensors and backshell instrumentation. The team has faced new and unexpected challenges throughout development. Notably, the failure of the flight heatshield during a static load test has prompted the fabrication of a new unit. Also, in accommodating the first ever Mars Helicopter under the rover belly pan, the EDL design has been further constrained by reduced ground clearances. Despite these challenges, much of the EDL-related hardware and software have already been delivered, and the EDL verification and validation program is on track to be completed before launch in July 2020.
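        The difference between the velocity trigger used on MSL and the Range Trigger described above can be illustrated with the toy logic below; the thresholds and the example state are hypothetical, not flight values.
```python
def velocity_trigger(velocity_m_s, v_deploy_m_s=400.0):
    """Deploy the parachute at a fixed planet-relative velocity (MSL-style)."""
    return velocity_m_s <= v_deploy_m_s

def range_trigger(downrange_to_target_km, velocity_m_s,
                  deploy_range_km=10.0, v_max_m_s=420.0, v_min_m_s=360.0):
    """Deploy when the remaining downrange distance to the target is right,
    provided the velocity stays inside the parachute's qualified envelope."""
    return (downrange_to_target_km <= deploy_range_km
            and v_min_m_s <= velocity_m_s <= v_max_m_s)

# Hypothetical state during deceleration: still slightly faster than the velocity
# trigger threshold, but already close enough to the target for a range trigger.
state = {"downrange_to_target_km": 9.4, "velocity_m_s": 405.0}
print("velocity trigger fires:", velocity_trigger(state["velocity_m_s"]))
print("range trigger fires:   ", range_trigger(**state))
```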
      • 02.0309 Overview of the ASPIRE Project’s Supersonic Flight Tests of a Strengthened DGB Parachute Clara O'farrell (Jet Propulsion Laboratory), Bryan Sonneveldt (Jet Propulsion Laboratory), Chris Karlgaard (Analytical Mechanics Associates, Inc.), Ian Clark (Jet Propulsion Laboratory) Presentation: Clara O'farrell - Monday, March 4th, 09:45 AM - Madison
        The Advanced Supersonic Parachute Inflation Research Experiments (ASPIRE) project is aimed at developing and exercising a capability for testing supersonic parachutes at Mars-relevant conditions. The initial flights for ASPIRE are targeted as a risk-reduction activity for NASA’s upcoming Mars2020 mission. For this effort, two candidate Disk-Gap-Band (DGB) parachute designs are being tested at Mach number and dynamic pressure conditions relevant to Mars2020. The two parachutes under investigation are a build-to-print version of the DGB used by the Mars Science Laboratory and a strengthened version of this parachute that has the same geometry but differs in materials and construction. Starting in the fall of 2017, the parachutes are being tested at deployment conditions representative of flight at Mars by sounding rockets launched out of NASA’s Wallops Flight Facility (WFF). The first flight test (SR01) of the build-to-print parachute took place on October 4, 2017, followed by the first test of the strengthened parachute during flight SR02 on March 31, 2018. A second test of the strengthened parachute with a higher target load, SR03, is scheduled for late July of 2018. During the SR02 test, a Terrier-Black Brant sounding rocket delivered a payload containing the packed 21.5-m parachute, the deployment mortar, and the ASPIRE instrumentation suite to a peak altitude of 54.8 km. As the payload descended back down, an onboard computer calculated an estimate of the dynamic pressure and triggered deployment of the parachute once the targeted test condition was reached. The strengthened parachute was deployed at a Mach number of 1.97 and a dynamic pressure of 667 Pa, and produced a peak load of 55.8 klbf. The onboard instrumentation suite included a GLN-MAC IMU, a GPS unit, a C-band transponder for radar tracking, three load pins at the parachute triple bridles, and three high-speed/high-resolution cameras trained on the canopy during inflation. In addition, the atmospheric conditions at the time of flight were characterized by means of high-altitude meteorological balloons carrying radiosondes. These data allowed the reconstruction of the test conditions, parachute loads, and parachute aerodynamic performance in flight. During the SR03 flight, the strengthened parachute will be deployed from an identical test platform, carrying identical instrumentation, at a target dynamic pressure of 920 Pa. This paper will describe the first two sounding rocket flight tests of the strengthened parachutes, SR02 and SR03. It will provide an overview of flight operations, the data acquired during testing, the techniques used for post-flight reconstruction, and the reconstructed performance of the test vehicle and parachute system for each flight.
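        The onboard deployment logic described above (estimate the dynamic pressure, fire the mortar at the target condition) can be sketched as follows. The simple exponential atmosphere and the example altitude/speed are assumptions; only the roughly 667 Pa SR02 target comes from the abstract, and the real system used onboard navigation with far better atmosphere knowledge.
```python
import math

def air_density(alt_m, rho0_kg_m3=1.225, scale_height_m=8500.0):
    """Very simple exponential Earth atmosphere (assumed for illustration)."""
    return rho0_kg_m3 * math.exp(-alt_m / scale_height_m)

def dynamic_pressure(alt_m, speed_m_s):
    """q = 0.5 * rho * v^2."""
    return 0.5 * air_density(alt_m) * speed_m_s ** 2

def should_deploy(alt_m, speed_m_s, q_target_pa=667.0, descending=True):
    """Fire the mortar once the estimated dynamic pressure reaches the target
    while descending from apogee."""
    return descending and dynamic_pressure(alt_m, speed_m_s) >= q_target_pa

# Example state on the descending leg (numbers assumed, roughly Mach 2 near 45 km).
alt_m, speed_m_s = 45_000.0, 650.0
q = dynamic_pressure(alt_m, speed_m_s)
print(f"q = {q:.0f} Pa -> deploy: {should_deploy(alt_m, speed_m_s)}")
```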
      • 02.0310 EDL Simulation Results for the Mars 2020 Landing Site Safety Assessment David Way (NASA - Langley Research Center), Soumyo Dutta (NASA Langley Research Center), Samalis Santini De León (Cornell University ) Presentation: David Way - Monday, March 4th, 10:10 AM - Madison
        The Mars 2020 rover is NASA’s next flagship mission, set to explore Mars in search of scientific evidence of past microbial life. Importantly, the rover will also, for the first time, have the ability to collect and cache rock and soil samples for retrieval and return to laboratories here on Earth. Thus Mars 2020 represents the first in a triad of ambitious missions designed to return samples from the red planet, a major step in addressing key questions about the origins of the solar system. A key step in the development of the Mars 2020 mission is the selection of a suitable landing site with the largest likelihood of meeting scientific goals. This decision is a complex and critical one that requires close interaction between the scientific and engineering communities. The chosen landing site must be both scientifically interesting – providing the project with the greatest possible chance of gathering credible and defendable scientific evidence – and also safe enough to attempt a landing in the first place. Thus, arguably one of the most important undertakings of the EDL team is to effectively enumerate, quantify, and communicate the landing risks to all of the stakeholders. The culmination of this effort is the Landing Site Safety Assessment, which is a review commissioned by the project, presided over by the EDL Standing Review Board, and attended by management and science stakeholders, in which the EDL team communicates their assessment of the associated landing risks and the combined probability of a successful landing at each of the final candidate landing sites. This assessment relies heavily on computer simulations of the EDL sequence. Over the course of several Landing Site Workshops, approximately thirty candidate landing sites were evaluated for scientific interest and the potential to meet scientific objectives. These sites were ranked by the science community, with the highest-ranked sites moving on to the next round. Through this process, three top candidates emerged: Jezero Crater, North East Syrtis, and Columbia Hills. These three sites, along with a fourth site located approximately half-way between Jezero and NE Syrtis, were evaluated in detail by the EDL team for overall EDL performance and landing safety. In this paper, we will summarize the end-to-end EDL simulation results used in support of this assessment.
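        A toy Monte Carlo of the kind of per-site landing-safety aggregation described above is sketched below. The dispersion, hazard fractions, and divert success probability are synthetic placeholders; the project’s actual assessment uses high-fidelity end-to-end EDL simulations and measured terrain hazard maps.
```python
import random

random.seed(1)

def estimate_landing_success(n_cases, ellipse_sigma_km, hazard_fraction, trn_divert_prob):
    """Toy Monte Carlo: scatter touchdown points, count a failure when a point lands
    on hazardous terrain and the TRN divert does not save it. All inputs synthetic."""
    failures = 0
    for _ in range(n_cases):
        # 2-D Gaussian touchdown dispersion about the target (km)
        x = random.gauss(0.0, ellipse_sigma_km)
        y = random.gauss(0.0, ellipse_sigma_km)
        # Hazard likelihood grows mildly away from the ellipse center (synthetic).
        on_hazard = random.random() < hazard_fraction * (1.0 + 0.02 * (x * x + y * y))
        saved_by_trn = random.random() < trn_divert_prob
        if on_hazard and not saved_by_trn:
            failures += 1
    return 1.0 - failures / n_cases

for site, hazard in (("Site A", 0.08), ("Site B", 0.15)):
    p_safe = estimate_landing_success(20000, ellipse_sigma_km=3.0,
                                      hazard_fraction=hazard, trn_divert_prob=0.9)
    print(f"{site}: estimated P(safe landing) = {p_safe:.3f}")
```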
      • 02.0311 A Terminal Descent Radar for Landing and Proximity Operations Brian Pollard (Remote Sensing Solutions) Presentation: Brian Pollard - Monday, March 4th, 10:35 AM - Madison
        Mars Science Laboratory’s unprecedented “sky-crane” landing utilized a new “Terminal Descent Sensor” (TDS), a Ka-band pencil-beam radar for high-accuracy measurements of line-of-sight range and velocity. While Mars 2020 will utilize the same design from remaining parts and new builds, the availability of the TDS for future missions is unclear due to problems of obsolescence and reproducibility; in addition, the TDS is quite large, prohibitively so for smaller missions. Remote Sensing Solutions is currently funded under a NASA Small Business Innovative Research program to revisit the TDS design, and, notably, reduce the size, weight, and power; improve the manufacturability; and maintain or improve the performance. In this paper we discuss the results from RSS’s design efforts. The RSS Terminal Descent Radar (TDR) is a modular design, allowing customization of the center frequency for different applications, in a small, reproducible package. The design utilizes the same “memoryless” measurement approach as the MSL TDS, but also includes additional capability to detect and potentially correct for contaminating targets such as airborne debris. In addition to the design activities, we discuss progress on our ongoing program to build and field a Ka-band version of the TDR based on RSS’s ARENA software-defined digital receiver and miniature Ka-band up/downconversion modules, both of which have clear paths to space qualification. The resultant sensor promises to provide similar if not improved performance relative to the TDS in a significantly smaller package.
      • 02.0312 Effects of Energy and Mass Utilization on Magnetoshell Aerocapture Performance Charles Kelly (University of Washington) Presentation: Charles Kelly - Monday, March 4th, 11:00 AM - Madison
        Aerocapture is an orbit insertion maneuver that uses drag of a planetary atmosphere on a spacecraft to transfer it from a hyperbolic trajectory to a closed elliptic orbit. Current aerocapture devices rely on solid structures to deflect atmospheric flow and are therefore susceptible to high heat and dynamic pressure loads. Magnetoshells (Kirtley, 2012) are a proposed aerocapture technology that generate drag through interaction of the atmosphere with a dipole plasma whose size and density can be modulated. They can create a much larger drag area than rigid aeroshells, allowing them to attain the same drag at higher altitudes and lower dynamic pressures. The large size and low density, high velocity flow regime have thus far prevented ground testing. Therefore, an analytical model is developed here to examine critical questions surrounding fuel/energy requirements of sustaining the plasma and size/power requirements of a magnetoshell subsystem. This model adopts a control volume approach simulating the interaction between plasma and atmosphere with continuity and energy balance equations. Through single-particle-motion analysis, the volume is defined by a magnetic flux contour inside of which all newly ionized particles remain trapped by the magnetic dipole. This volume represents the magnetoshell plasma as a stationary toroid whose axis is parallel to the direction of a neutral atmospheric flow moving at orbital speed. Volume-averaged, normalized equations are developed to track the densities and temperatures of ions, electrons, and magnetoshell neutrals. The effects of stream density and velocity, magnetic field strength, magnet size, and seed plasma injection rate on magnetoshell drag and density are characterized. The model shows that magnet radius has the strongest effect on drag, in some conditions even stronger than the cubic predicted by previous analytic work (Kirtley, 2012). For example, doubling the radius from 0.5m to 1m increased the drag from 1N to 53N at a field strength of 1000 Gauss, and further doubling to 2m generated 415N of drag. These results indicate larger-than-thought advantages of magnetoshells over traditional technologies. The model confirms Kirtley’s prediction of a linear relationship between drag and magnetic field strength, but also reveals regimes where confinement is not strong enough to maintain a plasma sufficiently dense to fully ionize the flow neutrals. This implies a lower bound on magnetic field strength below which the atmospheric flow is not effectively utilized for drag. It is found that the plasma cannot self-sustain using only the atmosphere as an energy source. However, the onboard injection requirement to sustain the magnetoshell is determined to be of mg/s magnitude, requiring just grams of propellant for the whole maneuver. At field strengths below ~1000 Gauss, drag increases as the square of injected mass flow rate. The low fuel mass and favorable scaling indicate a clear advantage over electric propulsion options, where thrust would scale linearly with propellant flow. At higher field strengths, mass flow rate has no effect on drag, implying a forgiving upper bound on plasma seeding in order to fully capture the atmospheric flow.
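        A quick check of the radius scaling implied by the drag values quoted above (1 N at 0.5 m, 53 N at 1 m, and 415 N at 2 m, all at 1000 Gauss) is sketched below: it fits the local power-law exponent between successive points, quantifying the "stronger than cubic" observation at small radii.
```python
import math

# Drag values quoted in the abstract at a field strength of 1000 Gauss.
radius_m = [0.5, 1.0, 2.0]
drag_n = [1.0, 53.0, 415.0]

# Local power-law exponent n in D ~ R^n between successive data points.
for (r1, d1), (r2, d2) in zip(zip(radius_m, drag_n), zip(radius_m[1:], drag_n[1:])):
    n = math.log(d2 / d1) / math.log(r2 / r1)
    print(f"R = {r1:.1f} m -> {r2:.1f} m: exponent n = {n:.1f}")
```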
      • 02.0314 Systems Engineering for ASPIRE: A Low-Cost, High Risk Parachute Test Project Ryan Webb (NASA Jet Propulsion Lab), Thomas Randolph (Jet Propulsion Laboratory), Aigneis Frey (Massachusetts Institute of Technology) Presentation: Ryan Webb - Monday, March 4th, 11:25 AM - Madison
        The Advanced Supersonic Parachute Inflation Research Experiment (ASPIRE) managed by NASA’s Jet Propulsion Laboratory (JPL) has developed a sounding rocket test architecture to test a strengthened parachute for JPL’s Mars 2020 Rover. Categorized as a sub-orbital sounding rocket mission, the program has a high tolerance for risk and is exempt from many standard JPL Flight Project Practices. However, since ASPIRE is a major risk reduction activity for Mars 2020, its test results are significant to JPL and directly impact decisions for a flagship planetary mission. This, combined with the wide scope and complexity of ASPIRE – which spans multiple NASA sites, and includes distinct launch vehicle, flight, and ground systems – creates unique programmatic challenges. Furthermore, as a program composed of multiple test missions, it is possible to evaluate the effectiveness of the systems engineering approach between launches. As a result, the project has adapted its approach to typical systems engineering functions such as verification and validation (V&V), risk management, engineering change requests, problem and failure reporting, and information and configuration management. A set of project guidelines has been established to accommodate the small team size, low budget, and risk posture of the project while also maintaining the highest possible chance of mission success and quality of engineering data products. This paper will describe ASPIRE’s unique approach to systems engineering functions, evaluate successes and lessons learned, and discuss the application of similar approaches to other technology demonstration or qualification programs.
    • 02.04 Access to Space and Emerging Mission Capabilities David Callen (Tyvak Corporation) & Eleni Sims (Aerospace Corporation)
      • 02.0401 Design and Development of RVSAT-1, a Student Nano-satellite with Biological Payload Kai Maitreya Hegde (R. V. College of Engineering), Abhilash C R (R V College of Engineering), Pramod Kashyap (), Anirudh K (R V College of Engineering) Presentation: Kai Maitreya Hegde - Tuesday, March 5th, 08:30 AM - Gallatin
        Many universities across India are coming up with low-cost Pico/Nano/Micro Satellites that have community-based missions or payloads. Most of these are built solely by students who have little or no experience in space technology. Students are driven by sheer motivation to build satellites and make exhaustive plans, especially for those missions carrying a biological experiment to space. RVSAT-1 is the first nano-satellite from India to carry a mass of microbes to space in a custom-designed apparatus. The microbes were carefully selected on the basis of their presence in the human gastro-intestinal tract, and a ground-based analysis was done beforehand. A systems engineering (SE) methodology was adopted in building such a robust satellite, well before the initiation of fabrication. The satellite is of a 2U CubeSat standard design with 10 cm x 10 cm x 22.7 cm dimensions and an overall weight of 2.66 kg. The satellite is capable of operating at a flexible orbital height since it has no observation payload. It houses a beacon system that is switched on at all times, which posed a challenge while designing the electrical power subsystem. A payload chamber is also incorporated with two independent systems: the microbe characteristic measurement apparatus and a deorbiting system housing an electrodynamic tether-type mechanism. The satellite is expected to stay in orbit for 1 year to carry out the microbe growth and metabolism measurements and then undergo a deorbit phase of 2 years. Tools like AGI STK were used to model the mission, and the resulting FDIR (Fault Detection, Isolation and Recovery) scheme was applied to all the subsystems.
      • 02.0403 Multiple Asteroid Retrieval Mission from Lunar Orbital Platform-Gateway Using Reusable Spacecrafts Gustavo Gargioni (Virginia Tech), Marco Peterson (Virginia Tech), David Alexandre (Virginia Tech), Kevin Schroeder (Virginia Tech) Presentation: Gustavo Gargioni - Tuesday, March 5th, 08:55 AM - Gallatin
        This paper describes the results of a study commissioned to find Near-Earth Asteroids capable of being captured using upcoming rockets for the purposes of space-based mining. Combining reusable rockets such as SpaceX's Big Falcon Rocket (BFR) with refueling capabilities, this work introduces a relatively low-cost option with higher delta-V and an opportunity for synergy with NASA's Lunar Orbital Platform-Gateway (LOP-G) in services, science, and technologies. In an effort to maximize the number of viable missions, the study focused on choosing a refueling orbit near LOP-G, and thus the Lagrange points L1 and L2 were selected as possible choices for this paper. The resulting simulations of a Cislunar infrastructure orbiting at the Lagrange points highlight differences among orbital options. Indeed, the optimal option is balanced between a Near Rectilinear Halo Orbit (NRHO) about L1 and an NRHO about L2; the decision depends on the type of mission allocated to the Cislunar station. However, both options seem promising, not only for asteroid extraction and mining but also for crewed and cargo missions. In a worst-case scenario, an operation of five decades starting in 2030 yields more than 130 asteroid retrievals, 2.7 per year, totaling more than 1,600 tonnes. The work culminated in an online data-mining tool that searches the entire Near Earth Asteroid (NEA) close-approach database from JPL and the small-body database from NASA. The combined data are then integrated with the rocket specifications (in this paper, the BFR) and the parking-orbit choices near LOP-G, propagating each required mission and selecting candidates, providing all information necessary for each viable mission over any range of years. This step is then taken further by integrating reusable capability into the equations and solving for the propellant mass required to extract each asteroid, producing different scenarios as the simulation parameters are changed, such as departure orbit, rocket specifications, and range of dates. Having a long-term retrieval and mining operation in place in Cislunar space would significantly raise human interest in space and its possibilities. The private and public sectors would be strengthened by the advance of science and technology. Moreover, by shifting the focus of resource extraction away from Earth, humanity may take the first step towards a safer environmental future. Furthermore, with the results presented, the paper corroborates this vision, as the resulting economic analysis of the selected asteroids demonstrates that space-sourced materials should be allocated mainly to space-based manufacturing. Moreover, the study provides information that may encourage private entrepreneurs to build businesses and contribute to human space exploration with relatively low-cost investment. Using a fleet of fewer than 10 BFRs over 50 years and services from LOP-G, establishing a mining operation in space may provide higher long-term success for a business case than any other investment on Earth.
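        As a rough illustration of the per-mission propellant solve described above, the sketch below applies the ideal (Tsiolkovsky) rocket equation to a single impulsive return burn. All numbers are illustrative placeholders, not BFR specifications or values from the study.

```python
from math import exp

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(dry_mass_kg, payload_kg, delta_v_ms, isp_s):
    """Propellant needed for one impulsive burn, from the Tsiolkovsky equation.

    m_prop = (m_dry + m_payload) * (exp(dv / (Isp*g0)) - 1)
    """
    return (dry_mass_kg + payload_kg) * (exp(delta_v_ms / (isp_s * G0)) - 1.0)

# Illustrative (non-BFR) numbers: tow a 12 t boulder back to a cislunar parking
# orbit with a 150 t dry vehicle, 350 s Isp, and an 800 m/s return delta-V.
m_return = propellant_mass(dry_mass_kg=150e3, payload_kg=12e3,
                           delta_v_ms=800.0, isp_s=350.0)
print(f"return-leg propellant ~ {m_return/1e3:.1f} t")
```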
      • 02.0404 SPARC – 1: A New, Improved Modular 6U Spacecraft Craig Kief (COSMIAC at UNM), James Lyke (Space Vehicles Directorate), Don Fronterhouse (PnP Innovations, Inc), Christian Peters (Air Force Research Laboratory), Matthew Hannon (COSIMAC University of New Mexico) Presentation: Craig Kief - Tuesday, March 5th, 09:20 AM - Gallatin
        SPARC-1 (Space Plug-and-play Architecture Research Cubesat-1) is the first joint US/Sweden military research nanosatellite (6U CubeSat), representing the culmination of a research activity spanning more than a decade. The spacecraft design encompasses a blending of technologies and components developed by both countries, with primary payloads of direct interest to each nation. The US payload, referred to as the Agile Space Radio (ASR), is an on-orbit reconfigurable transceiver intended to support live experimentation with different waveforms and protocols useful to communications missions. The Swedish payload is a visible camera optimized for the study of space situational awareness (SSA) concepts. At the time of this writing, spacecraft development, assembly, integration, and testing have been successfully completed, and SPARC-1 is expected to launch in 2019.
      • 02.0405 Design and Experimental Validation of a Martian Water Extraction System Daniel Mc Gann (Northeastern University), Emilia Kelly (), Ben Zinser (Northeastern University), Elisa Danthinne (), Patrick Moore (), Andrew Panasyuk (Desktop Metal), Taskin Padir () Presentation: Daniel Mc Gann - Tuesday, March 5th, 09:45 AM - Gallatin
        We present the design and realization of a multi-stage, stationary, robotic water extraction system used to harvest underground ice from the Martian environment. The motivation for our research comes from the need to use in-situ resources for future manned missions to Mars. Recent studies have found ice located at the polar latitudes of Mars, buried approximately 1-10m below the surface. The collection of water on Mars will provide the means to produce return fuel, oxygen reserves, and plastics — critical resources for an extended Mars mission. Our research was guided by the 2018 NASA Revolutionary Aerospace System Concepts Academic Linkages (RASC-AL) Mars Ice Challenge. The purpose of the challenge was to maximize water attained within guidelines regarding power, weight on the bit of the drill, and turbidity of collected water. The Northeastern University Planetary Articulating Water Extraction System (NU-PAWES) collected 3.3L over 12 hours in competition, earning the team the competition’s first prize. There were several design constraints for the prototype derived from its intended use on Mars. The robot had to: be operated remotely; have a mass less than 60kg; operate on a limited power supply of 10A at 120V; apply no downward force greater than 150N; remove between 0.3 and 0.6m of regolith; and operate in regolith temperatures as low as -26°C. NU-PAWES operates in four stages: (i) a 1-hour cycle of clearing a 5cm diameter hole through regolith to ice using an auger, (ii) a melting process which uses a multi-axis, 360° articulating heater, (iii) extraction of water by a reversible pump system, and (iv) an electroflocculation filtration process. In competition and testing, NU-PAWES demonstrated the necessary versatility to drill holes through overburden of varying consistencies and temperatures. The system effectively minimized energy spent on overburden removal and maximized system accessibility to ice reserves. Testing also revealed the unforeseen high-efficiency potential of using melted water as a heat transfer medium, similar to the Rodwell extraction method. In this paper, we present an overview of the NU-PAWES system along with experimental results of the prototype in terrestrial atmospheric conditions. Additionally, we will discuss lessons learned from the implementation and validation of NU-PAWES and design requirements for future Martian water extraction systems.
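        A back-of-envelope check of the competition power budget described above is sketched below: it compares the energy needed just to warm and melt 3.3 L of ice starting at -26°C against the energy available from the 10 A, 120 V limit over 12 hours. Material properties are standard handbook values; the comparison is illustrative and is not drawn from NU-PAWES telemetry.

```python
# Back-of-envelope energy budget for the RASC-AL ice-extraction scenario.
# Material properties are standard handbook values; losses, drilling power,
# and heater efficiency are not modeled.
C_ICE = 2100.0      # J/(kg K), specific heat of ice near 0 C
L_FUSION = 334e3    # J/kg, latent heat of fusion
RHO_WATER = 1000.0  # kg/m^3

power_w = 10.0 * 120.0       # competition power limit
duration_s = 12 * 3600.0     # 12 h run
mass_kg = 3.3 / 1000.0 * RHO_WATER   # 3.3 L collected

e_melt = mass_kg * (C_ICE * 26.0 + L_FUSION)   # warm from -26 C, then melt
e_available = power_w * duration_s

print(f"melt energy : {e_melt/1e6:.2f} MJ")
print(f"available   : {e_available/1e6:.1f} MJ")
print(f"fraction used for warming/phase change alone: {e_melt/e_available:.1%}")
```

Under these assumptions the phase change itself uses only a few percent of the available energy, which is consistent with the abstract's emphasis on minimizing energy spent elsewhere (overburden removal, heater losses).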
      • 02.0406 Structural Feasibility Analysis of a Novel Docking Module for Small Satellites Ritvik Pareek (SRM IST KTR), Abhav Prasad (SRMIST), Aditya Patil (SRMIST), Sury Bhan Singh (), Loganathan Muthuswamy (SRM IST) Presentation: Ritvik Pareek - Tuesday, March 5th, 10:10 AM - Gallatin
        This paper focuses mainly on CubeSats and analyses the feasibility of a novel docking mechanism. The proposed docking module provides the satellite a better and more efficient way to dock, undock, and facilitate power and data transfer. It provides small satellites with the ability to aggregate into a multipart space system, thus addressing the volume and mass constraints that are a major shortcoming of small satellites. The proposed docking module is a small-scale, non-androgynous, unpressurized system with an active locking mechanism. This mechanism eliminates the dependency of docking on the approach velocity of the chaser satellite. The locking mechanism is composed of three locks, which together restrict all six degrees of freedom of the chaser satellite, thus ensuring a hard dock between the satellites. The primary application after docking is power and data transfer: power transfer extends the mission on a low budget, while data transfer is essential for software maintenance. The feasibility of the docking mechanism has been assessed by creating a 3-D model of the module and carrying out finite element analysis for natural frequency and static loading conditions, and the results are used to obtain an optimum design.
      • 02.0407 Air-Launched Low-SWaP Space-Capable Sounding Rocket Anjali Roychowdhury (Stanford University), Thomas White (Stanford University), Andrew Lesh (Stanford University), Tim Vrakas (), Michael Arcidiacono (), Skye Vandeleest (Stanford University), Rayan Sud (Stanford University), Kadin Hendricks (Stanford University), Sasha Maldonado (Stanford University), Daniel Shorr (Stanford University), Kartik Chandra (Stanford University), Victoria Thompson (Stanford University), Ben Goldstein (Stanford University), Kai Marshland () Presentation: Anjali Roychowdhury - Tuesday, March 5th, 10:35 AM - Gallatin
        The Spaceshot Team of the Stanford Student Space Initiative (SSI) has designed a custom rocket-balloon system to achieve an apogee greater than 100 km, the Kármán line, in the hopes of being the first civilian university to reach space while demonstrating a cheap, low Size, Weight, and Power (SWaP), suborbital launch technology. Suborbital launches represent a high-potential growth area in aerospace, as they provide a high-value market for aerospace research and a low-cost, lower-risk opportunity for agents with less capital to access space. Technologies such as this open-source rocket-balloon system provide an incredible opportunity to democratize space and provide access to aerospace to all. Leveraging the experience of an international-award-winning rockets team, a world-record-breaking high-altitude balloon team, and members who have built satellites, high-voltage DNA synthesizers, and more, SSI is building a combined rocket-balloon system. The system consists of a zero-pressure balloon platform which will carry a 20kg rocket with a commercial-off-the-shelf motor to 18 km. At that height, the rocket will launch to reach a final expected apogee of 125 km. While this architecture has been used for decades, from NASA to modern space startups, SSI’s design is heavily optimized to be low SWaP, affordable, and accessible as an open-source platform. In the process of creating an efficient, effective design in this problem space, SSI has created custom composites designed for the hypersonic thermal load, designed robust payload bays to survive high shock, and investigated different stability methods for high-altitude, low-atmospheric-density launches.
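        A minimal sanity check on the flight profile quoted above (launch from 18 km to a 125 km apogee) is sketched below, assuming an impulsive burn, no drag above the balloon float altitude, and constant gravity; these are simplifying assumptions, not the team's trajectory model.

```python
from math import sqrt

G0 = 9.81  # m/s^2, treated as constant over the coast

def burnout_speed_for_apogee(h_launch_m, h_apogee_m):
    """Impulsive-burn, drag-free, constant-gravity coast: v = sqrt(2 g dh)."""
    return sqrt(2.0 * G0 * (h_apogee_m - h_launch_m))

v = burnout_speed_for_apogee(18e3, 125e3)
print(f"required burnout speed ~ {v:.0f} m/s")   # roughly 1.45 km/s for 18 km -> 125 km
```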
      • 02.0410 Enhanced Feasibility Assessment of Payload Adapters for NASA’s Space Launch System David Smith (Victory Solutions), Jon Holladay (NASA), Terry Sanders (Jacobs Technology) Presentation: David Smith - Tuesday, March 5th, 11:00 AM - Gallatin
        The first flight of NASA’s new exploration-class launch vehicle, the Space Launch System (SLS), will test a myriad of systems designed to enable the next generation of deep space human spaceflight, and will launch from Kennedy Space Center no earlier than December 2019. The initial Block 1 configuration for EM-1 will be capable of lofting at least 70 metric tons (t) of payload and will send the Orion crew vehicle into a distant retrograde lunar orbit, paving the way for future missions to cislunar space and eventually Mars. A Block 1B version of SLS will lift at least 34 t to trans-lunar injection (TLI) in its crew configuration and at least 37 t to TLI in the cargo configuration no earlier than 2024. A family of Payload Adapters (PLA) is being developed to provide ELV-class (1575mm, 2624mm, 4394mm) and larger spacecraft/payload interfaces for both crewed (Orion) and cargo (fairing) missions. These PLAs also provide the potential of accommodating various configurations of 6U, 12U and 27U Secondary Payloads. Work on demonstration PLA hardware is already in progress at Marshall Space Flight Center in Huntsville, Alabama, which manages the SLS Program. Because of the many potential configurations required to support the planned SLS missions, ranging from sending Europa Clipper to Jovian space to establishing a lunar-orbiting Gateway, there is a critical need to establish the fewest PLA designs that can accommodate the most payloads possible. This paper will summarize applications from a NASA Engineering and Safety Center (NESC) led Model Based Systems Engineering (MBSE) pathfinder activity to develop a “digital” PLA feasibility assessment approach. This approach will help potential users optimize their interface to SLS by providing analysts with the means to reduce PLA feasibility definition cycle time and effort by over 75%. It also makes more feasibility-assessment “turns” available to single and multiple payload elements on a single SLS launch, allowing upmass to be allocated to payload rather than to PLA structure.
    • 02.05 Robotic Mobility and Sample Acquisition Systems Richard Volpe (Jet Propulsion Laboratory)
      • 02.0502 Virtual Model Control for Planetary Hexapod Robot Walking on Rough Terrain Francesco Cavenago (Politecnico di Milano), Marco Canafoglia (Politecnico Di Milano), Mauro Massari (Politecnico Di Milano) Presentation: Francesco Cavenago - Tuesday, March 5th, 08:30 AM - Jefferson
        Sample return and in-situ analysis missions are particularly important in space exploration. In this context, robotic platforms are exploited to explore extraterrestrial bodies, since sending humans is both riskier and more expensive. Among them, legged rovers are promising systems because they can operate in extremely rough terrain. Indeed, they can provide mobility on steep slopes, walk on loose surfaces, and overcome obstacles, and as a consequence of these skills they require easier path planning. In this paper a planetary hexapod robot is considered and, in particular, the control of its gait using Virtual Model Control (VMC). In the VMC framework, already used on terrestrial bipedal and quadrupedal robots, a series of virtual elements, like springs and dampers, are attached to specific points on the body to achieve a desired dynamic behavior. Specifically, a set of virtual forces is generated and then transformed into the desired joint torques through the Jacobian. The strength of the approach and its suitability for space applications lie in its intuitiveness, robustness and computational efficiency, since neither inverse kinematics nor inverse dynamics are required. The control of the gait is divided into two phases: the stance phase and the swing phase. In the former, the VMC is exploited to compute the torques for the standing legs, required to control the body height, attitude and lateral translations. The virtual elements are attached to different points on the body, selected in such a way as to govern each degree of freedom. In the latter phase, the VMC provides the control actions for the swing legs to follow a desired trajectory. In this case, the springs and dampers are attached between the foot of the leg and a point on the desired trajectory. The legs alternate between these two modes cyclically, and the switch is commanded by a state machine whenever the swing legs touch the ground. During operations, the hexapod could be required to perform different gaits, depending on the need. In particular, this work considers the tripod gait and the wave gait. The tripod gait, in which three legs are in the stance phase while the others are in the swing phase, guarantees higher velocity. On the contrary, the wave gait, in which only one leg at a time is in the swing phase, provides greater stability thanks to more footholds. The effectiveness and performance of the proposed approach, in both gaits, are assessed through numerical simulations considering different terrain roughness and inclinations.
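        The sketch below illustrates the core VMC computation described above for a single leg: a virtual spring-damper generates a Cartesian force, which the Jacobian transpose maps to joint torques. The gains and the leg Jacobian are placeholder values, not parameters from the paper.

```python
import numpy as np

def vmc_leg_torques(x, x_des, xdot, xdot_des, J, kp=300.0, kd=20.0):
    """Virtual Model Control for one leg.

    A virtual spring-damper pulls the attachment point toward its target:
        F = Kp (x_des - x) + Kd (xdot_des - xdot)
    and the force is mapped to joint torques through the Jacobian transpose:
        tau = J^T F
    """
    F = kp * (x_des - x) + kd * (xdot_des - xdot)   # virtual Cartesian force
    return J.T @ F                                   # joint torques

# Illustrative 3-DOF leg: foot 5 cm below its swing-trajectory target.
J = np.array([[0.0, -0.30, -0.15],   # placeholder leg Jacobian (3x3)
              [0.30, 0.0,  0.0],
              [0.0,  0.25, 0.25]])
tau = vmc_leg_torques(x=np.array([0.25, 0.0, -0.35]),
                      x_des=np.array([0.25, 0.0, -0.30]),
                      xdot=np.zeros(3), xdot_des=np.zeros(3), J=J)
print(tau)  # N*m commands for the three leg joints
```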
      • 02.0503 PlanetVac Xodiac: Lander Foot Pad Integrated Planetary Sampling System Justin Spring (Honeybee Robotics Spacecraft Mechanisms Corporation), Kris Zacny (Honeybee Robotics Spacecraft Mechanisms Corporation), Bruce Betts (The Planetary Society), Philip Chu (Honeybee Robotics), Steven Ford (Honeybee Robotics Spacecraft Mechanisms Corporation), Kathryn Luczek (Honeybee Robotics Spacecraft Mechanisms Corporation), Andrew Peekema (), Nick Traeden (Honeybee Robotics Spacecraft Mechanisms Corporation), Reuben Garcia (Masten Space Systems), Ian Heidenberger () Presentation: Justin Spring - Tuesday, March 5th, 08:55 AM - Jefferson
        This paper describes the development and testing of the PlanetVac Xodiac sampler by Honeybee Robotics. This iteration of PlanetVac builds on Honeybee’s heritage of pneumatic sampling systems and modifies it to function on an Entry, Descent, and Landing (EDL) test bed vehicle. PlanetVac Xodiac was flown on Masten Space Systems’ Xodiac vehicle in the Mojave Desert of California. The sampler was designed to withstand the high temperatures of the propulsion system plume, as well as the vibration and impact stresses of takeoff and landing. PlanetVac Xodiac not only survived all three end-to-end field tests, it also collected over three times the expected 100g sample in each trial.
      • 02.0504 Balloon-based Concept Vehicle for Extreme Terrain Mobility Hari Nayar (NASA/JPL) Presentation: Hari Nayar - Tuesday, March 5th, 09:20 AM - Jefferson
        Surface mobility over extreme terrains on planetary bodies will enable access to high-value science targets, for example, exploration of dunes, lake shorelines and putative cryovolcanos on Titan and Recurring Slope Lineae (RSL) on Mars. The steepest slope attempted by any rover on Mars to date is 32°, and slippage was so great in this case that the course was abandoned. Slopes greater than 20° are considered too steep for rover traversal. In this paper, we describe a new concept for surface mobility on planetary bodies with atmospheres. BALLET (BALloon Locomotion for Extreme Terrain) is a balloon mobility concept with 6 evenly-distributed suspended payload modules each serving as a foot for locomotion over currently inaccessible rugged terrain on Mars and Titan. Each foot is suspended by 3 cables from the balloon to control its placement on the ground. Only 1 foot is raised at a time while the remaining feet keep the balloon anchored to the surface. This reduces the buoyancy required and consequently the size compared to a conventional balloon system. To locomote over the surface, each foot is moved in sequence by controlling the three cable lengths. Images from cameras on the balloon are used to map and locate foot placement and for navigation. The platform is inherently highly stable because its center of gravity is at ground level enabling operation on rugged terrain. BALLET achieves its benefits through several innovations: 1) use of a balloon for buoyancy and as a platform for locomotion, 2) limbs composed of cables in tension with significantly less mass than legs composed of links in compression, 3) partitioning the payload into six modular elements and lifting of only one at a time to significantly reduce the needed buoyancy and balloon size, 4) use of the remaining feet on the ground as anchors to restrain BALLET to the desired position, and 5) placement of the payload in the feet keeping the center of gravity very low and the platform highly stable. Physics requirements for the design of BALLET are: 1) the balloon buoyancy force must exceed the weight of one payload foot and, 2) the balloon buoyancy force must be less than the weight of two feet to minimize balloon size. These requirements enable BALLET to lift only one foot at a time while the remaining feet anchor it to the surface. For a nominal design, the buoyancy force of the balloon is set equal to the weight of 1½ feet to accommodate variations in buoyancy due to temperature variations and wind drag forces. The CG will be near the ground and centered within the feet locations. The CG height will increase slightly when a foot is raised but still maintain high stability. The balloon’s center of buoyancy will be within the balloon volume and above the feet. While the physics of BALLET will apply on Venus, the environmental conditions and available component technology limit our consideration to Mars and Titan.
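        The two buoyancy constraints quoted above (lift more than one foot, less than two, with a nominal design point of 1.5 foot-weights) are easy to express numerically; the sketch below does so for an assumed 20 kg foot mass, which is an illustrative figure rather than a BALLET design value.

```python
def ballet_buoyancy_band(foot_mass_kg, g):
    """Buoyancy must lift one foot but not two; the nominal design point is 1.5 feet."""
    w_foot = foot_mass_kg * g
    return w_foot, 2.0 * w_foot, 1.5 * w_foot   # (min, max, nominal) buoyancy, N

# Illustrative: 20 kg per payload foot.
for body, g in [("Mars", 3.71), ("Titan", 1.352)]:
    b_min, b_max, b_nom = ballet_buoyancy_band(20.0, g)
    print(f"{body}: buoyancy in ({b_min:.0f} N, {b_max:.0f} N), nominal {b_nom:.0f} N")
```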
      • 02.0505 Hopping for Low-cost Surface Mobility on Small Bodies: Lessons from past Missions Nikolas Romer (Occidental College), Arthur Chmielewski (Jet Propulsion Laboratory), Nathan Barba (Jet Propulsion Laboratory), Nathan Fulmer (La Cañada High School) Presentation: Nikolas Romer - Tuesday, March 5th, 09:45 AM - Jefferson
        There is great interest in missions to small bodies in our solar system such as asteroids, comets, centaurs, dwarf planets, and moons. A key aspect that makes these bodies interesting is their varying morphology over (and under) their surfaces. To characterize these bodies with in-situ exploration, a surface landing architecture that allows for the study of multiple diverse areas is necessary. Current stationary surface landers have only a short range, limited to the capabilities of their on-board instruments and robotic arms. Surface rovers, while more mobile, are inherently more complex than landers and incur high costs both in development and operations. A surface lander capable of performing multiple hops could not only lower the cost of this mobility significantly, but also allow for the characterization of even more diverse areas than a rover. While a rover can cover tens of kilometers over the course of its lifetime, a hopper could potentially perform multiple hops, each covering that same distance. While multiple mission concepts have been proposed (CHOPPER, Mars Geyser Hopper, GRUNT), and some have even flown (MINERVA-II, NEAR Shoemaker, Philae), the only mission to execute a planned hop on the surface of another body is the lunar lander Surveyor 6. Performed in November of 1967, this three-meter hop experiment contains lessons that will aid the development of future hopping mission concepts. This paper will first summarize previous hop attempts – both intentional and unintentional – and proposed concepts for future hopping spacecraft (Hedgehog, Xodiac). A more in-depth study of the Surveyor 6 mission will be included, and lessons learned from the study will be applied to other small bodies. Analyses of ballistic, Surveyor 6-style hops are performed for a diverse catalogue of small bodies including the dwarf planet Ceres, comet 67P, and the moon Triton. The feasibility of hops on each body is then evaluated based on distance and delta-V, among other considerations. The utility of another style of hop, the “jump-and-wait” method, is also examined for the same bodies and compared to the Surveyor-style hop results. The paper concludes with a summary of the applicability of various types of hops for a variety of small bodies.
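        For context on the delta-V comparisons described above, the sketch below estimates the launch speed and total impulse for a ballistic hop under a flat-surface, uniform-gravity approximation; the gravity values and hop range are illustrative, and the approximation breaks down when the hop length approaches the body radius (as it would on comet 67P).

```python
from math import sqrt, sin, radians

def hop_delta_v(range_m, g, launch_angle_deg=45.0):
    """Flat-surface, uniform-gravity ballistic hop.

    Range: d = v^2 sin(2*theta) / g  ->  v = sqrt(g d / sin(2*theta)).
    Total impulse budget ~ 2v (launch burn plus cancelling the landing velocity).
    """
    v = sqrt(g * range_m / sin(radians(2.0 * launch_angle_deg)))
    return v, 2.0 * v

# Illustrative surface gravities (m/s^2); a 1 km hop on each body.
for body, g in [("Moon", 1.62), ("Ceres", 0.28), ("67P", 2e-4), ("Triton", 0.78)]:
    v, dv = hop_delta_v(1000.0, g)
    print(f"{body:6s}: launch {v:7.2f} m/s, total dv ~ {dv:7.2f} m/s")
```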
      • 02.0506 Towards Articulated Mobility and Efficient Docking for the DuAxel Tethered Robot System Patrick Mc Garey (NASA Jet Propulsion Lab), Issa Nesnas (Jet Propulsion Laboratory) Presentation: Patrick Mc Garey - Tuesday, March 5th, 10:10 AM - Jefferson
        Sites of increasing interest for planetary science, such as craters, cold traps, and vents lie in terrains that are inaccessible to state-of-the-art rovers. The Jet Propulsion Laboratory, in collaboration with Caltech, is actively developing a tethered mobile robot, Axel, for traversing and exploring extremely steep terrain, such as Recurring Slope Lineae on Mars and vertical pits on the Moon. However, on Mars, where landing-site uncertainty is high due to the presence of an atmosphere, Axel would first have to traverse several kilometers untethered due to its tether carrying capacity (~500 meters). This paper proposes a novel design for a hybrid mobility system that allows a pair of Axel rovers to dock, lock, and drive long distances as a four-wheeled, articulated steering vehicle. The design improves upon prior efforts to achieve DuAxel mobility by leveraging two actuated docking mechanisms attached on opposite ends of a central module to enable ‘sit/stand’ functionality; the prior DuAxel system was limited to skid-steering, which was inefficient due to Axel's grouser-style, high-friction wheels. In the proposed system, the ‘sit’ configuration is achieved by aligning each dock parallel to the surface, allowing one Axel to detach and explore while the other remains docked and serves as a backup. While ‘sitting’, the central module rests on the ground and is outfitted with wedges for passive anchoring to the terrain (an optional drill can be integrated for anchoring to rock). In order to ‘stand’, the exploring Axel reattaches, locks, and both docks are rotated vertically until Axel's tether deployment boom is upright and the central module is lifted off the ground. Once upright, each Axel rotates about a pivot point for articulated, all-wheel steering, which is accomplished by applying differential wheel torques. To drive straight, the Axels are aligned perpendicular to the direction of motion and wheels are actuated with equal velocity. To turn, each Axel pivots, mirroring the other, and both outer wheels are driven with greater velocity than the inner wheels in order to move along an arc with a minimum turn radius of 1.7 meters (i.e., the distance between Axel pivot points). The end result is a system that enhances the mobility and docking efficiency of the DuAxel system and enables the exploration of previously inaccessible extreme environments from a distant landing site. The main contributions of this paper are i) a detailed systems design of the docking mechanism and central module, ii) kinematic modeling of articulated mobility and ‘sit/stand’ docking functionality, and iii) initial testing in a relevant environment to characterize the mobility of the proposed system.
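        The differential-speed arc turn described above can be sketched with simple kinematics: for an arc of radius R, the outer and inner wheels of each Axel travel on radii R plus or minus half the track width. The track width used below is a placeholder, not an Axel dimension.

```python
def duaxel_wheel_speeds(v_center, turn_radius_m, track_m=0.6):
    """Differential wheel speeds for an articulated arc turn.

    For a turn of radius R about the arc center, the body angular rate is
    w = v/R, and the outer/inner wheels move on radii R +/- track/2:
        v_out = w (R + track/2),   v_in = w (R - track/2)
    """
    w = v_center / turn_radius_m
    return w * (turn_radius_m + track_m / 2.0), w * (turn_radius_m - track_m / 2.0)

# Tightest arc quoted in the abstract: R = 1.7 m, here driven at 10 cm/s.
v_out, v_in = duaxel_wheel_speeds(0.10, 1.7)
print(f"outer {v_out*100:.1f} cm/s, inner {v_in*100:.1f} cm/s")
```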
      • 02.0515 Initial Study of Multirobot Adaptive Navigation for Exploring Environmental Vector Fields Christopher Kitts (Santa Clara University) Presentation: Christopher Kitts - Tuesday, March 5th, 10:35 AM - Jefferson
        Adaptive navigation is the process of modifying a vehicle’s direction or motion path based on measurements taken while moving. Compared to conventional navigation approaches, adaptive navigation has the potential to be more time and energy efficient in identifying conditions of interest, and it has the capability to address challenges relating to time-varying phenomena. Adaptively navigating with a group of robots allows simultaneous, distributed measurements of the environment to be used to characterize the local nature of the environment; this, in turn, can provide additional improvements in timely navigation to or along critical features within the field. Significant work in multirobot adaptive navigation has been performed for scalar fields, in which a single measurement such as the temperature or the concentration level of a pollutant is associated with every point in the field. To date, nearly all scalar field work has focused on control strategies for finding local minima/maxima and for moving along contour lines/level sets within a field; furthermore, nearly all of this work has been performed via simulation, with very limited lab-based experimentation being performed in some cases. Our group has performed work of this type and has also introduced several extensions to the scope of this field. First, we have proposed, developed, and verified in both simulation and experiment a new class of multirobot adaptive navigation control laws for new scalar field features such as descending ridges, ascending trenches, and locating saddle points. In addition, we have validated our extrema-seeking and contour-following controllers via field missions involving an automated fleet of kayaks. In this paper we describe initial work addressing a new extension: navigating a vector field. Such a field is defined as a measurable, multi-parameter vector value associated with every point in the field. Depending on the application of interest, this vector might represent a single physical entity, such as the velocity of a flow field, or it might represent multiple distinct scalar quantities such as temperature and humidity. Similar to scalar fields, we hypothesize the existence of a variety of applications that could benefit from the ability to adaptively navigate to or along critical features in a vector field, such as sources and sinks, vortices, stagnation points, and so on. In this paper, we propose several simple control strategies for a multirobot formation to navigate to/along several such features. Each strategy is explained, and we describe how each can be implemented in a multilayer control system in which adaptive navigation commands are issued to a multirobot formation control layer, which in turn issues directives to individual robots. Simple simulation case studies are used to demonstrate the behavior of each control strategy. We also describe initial work in preparing for experimental demonstration of these techniques using two of our multirobot systems: a simple indoor testbed consisting of small omniwheeled rovers, and a cluster of automated boats to demonstrate the techniques in the field.
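        One generic way to use simultaneous, distributed measurements to characterize the local vector field, in the spirit of the approach described above (though not necessarily the authors' formulation), is to fit a linear field model to the formation's samples and read off the divergence and curl, which distinguish sources/sinks from vortices.

```python
import numpy as np

def local_field_jacobian(positions, vectors):
    """Least-squares fit of a linear model v(p) ~ v0 + A (p - p0) to formation data.

    positions: (N,2) robot positions; vectors: (N,2) field samples at those points.
    Returns A (2x2); div = trace(A), curl_z = A[1,0] - A[0,1].
    """
    p0 = positions.mean(axis=0)
    dp = positions - p0                                  # (N,2) offsets
    X = np.hstack([np.ones((len(dp), 1)), dp])           # columns: [1, dx, dy]
    coef, *_ = np.linalg.lstsq(X, vectors, rcond=None)   # rows: v0, d/dx, d/dy
    return coef[1:].T                                    # A[i,j] = d v_i / d x_j

# Illustrative: 4 robots in a 2 m cross sampling a synthetic vortex v = (-y, x).
pos = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
vec = np.array([[-p[1], p[0]] for p in pos])
A = local_field_jacobian(pos, vec)
print("divergence:", np.trace(A), " curl_z:", A[1, 0] - A[0, 1])  # ~0 and ~2
```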
      • 02.0516 Estimating Wheel Slip of a Planetary Rover via Unsupervised Machine Learning Justin Kruger (Stanford University), Arno Rogg (NASA - Ames Research Center) Presentation: Justin Kruger - Tuesday, March 5th, 11:00 AM - Jefferson
        This paper investigates the use of unsupervised machine learning to estimate wheel slip of a planetary exploration rover. In challenging extra-terrestrial terrain, imperfect traction and wheel slip are often encountered, which negatively impacts rover navigation and in the worst case can result in permanent immobilization. Prior slip estimation methods have employed a variety of sensors and algorithms, but are generally only accurate under certain conditions, or require resources unavailable to a rover. Recently, machine learning has been applied to this problem. This study examines unsupervised learning in more detail by applying three unsupervised learning algorithms – self-organizing maps, k-means clustering, and autoencoding – to classify rover sensor inputs into discrete classes corresponding to its current slip state. Unsupervised learning is preferred since labelled training data may not be available to a rover. Proprioceptive signals are used as inputs, with a focus on IMU and wheel telemetry, to avoid adding complexity to rover systems and prevent a reliance on visual odometry. The algorithms are validated on data taken from field trials of a planetary rover, during which slip was induced on a sandy incline between 0.05-0.25m/s. Performance is evaluated for different velocities, sensor inputs, slip classes, algorithm parameters and data filters, with the aim of revealing optimal and non-optimal use cases. Self-organizing maps (SOM) consistently demonstrate the best slip classification accuracy, achieving 97% immobilization detection in the ideal two-class case. For ten slip classes – which approaches a continuous slip estimate – 71% accuracy is obtainable. At rover-like speeds of 0.10m/s, 88% accuracy is demonstrated for three classes. K-means is consistently the worst-performing algorithm, losing 5-30% accuracy compared to SOM, while autoencoders lose 2-10% accuracy. SOM is most computationally intensive while k-means is least, though storage and processing requirements always remain reasonable for a rover. An analysis of significant parameters for algorithm tuning displays accuracy benefits of up to 25%. Due to usage of IMU signals, accuracies improve at higher velocities, and in general, choice of sensor inputs has a notable effect (though optimal choices differ between algorithms). Sliding filters of 0.5-2s improve results while maintaining fast response times. The primary inaccuracy is mis-classification of medium slip as high slip, which can be reduced by making high-slip class intervals larger. The algorithms are not dependent on terrain, lighting or vehicle parameters and, with suitable training, display performance comparable to or better than many existing slip estimation methods for rovers. Although some labelled training data is needed to directly associate slip classes with unsupervised data clusters, it is significantly less than what a fully-supervised algorithm requires. Unsupervised learning is thus considered promising for robust real-time rover slip estimation.
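        The general flow of the unsupervised classification described above can be sketched as follows; this is not the paper's implementation, and the feature names, synthetic data, and two-class split are illustrative only.

```python
# Sketch of an unsupervised slip-classification flow: cluster proprioceptive
# feature windows, then name the clusters with a small labelled set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Fake proprioceptive features per 1 s window: [imu_vib_rms, wheel_rate, motor_current]
low_slip  = rng.normal([0.05, 1.0, 1.0], 0.02, size=(200, 3))
high_slip = rng.normal([0.20, 1.0, 1.6], 0.05, size=(200, 3))
X = np.vstack([low_slip, high_slip])

Xs = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Xs)

# A handful of labelled windows is still needed to name each cluster
# (e.g. cluster 0 -> "low slip"); here we just report the cluster sizes.
print(np.bincount(km.labels_))
```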
      • 02.0517 Cryobotics: Extreme Environment Testing at Cryogenic Temperatures Drew Smith (NASA - Kennedy Space Center), Andrew Nick (), Jason Schuler (EASI) Presentation: Drew Smith - Tuesday, March 5th, 11:25 AM - Jefferson
        In designing and building equipment to operate in extreme environments, including cryogenic temperature conditions, realistic performance testing is essential. The technology focus area of cryobotics concerns robotic systems and rotating machinery that must operate at cryogenic temperatures in environments including Earth, low Earth orbit, Mars, the Moon, asteroids, solar orbit, planetary orbit, or those encountered during travel among these destinations. A given aspect is that cryogenic temperatures (below about 150 K) will be encountered and must therefore be dealt with early in the design process. The heat transmission effects of these temperatures, as well as the large temperature differences (ΔT), rapid changes in temperature (thermal transients), and thermal cycling, must be understood through testing in relevant environments. Knowing which factors are more relevant than others is key to devising an adequate experimental approach and building an appropriate test apparatus. Applications include mining equipment, spacecraft mechanisms, rotating machinery for superconducting power generation, cryofuel pumping systems, and so forth. Collaborating with science and industry partners on tribology research, dry-lubricant technology, and materials science is a key facet of the cryobotics focus area. This paper addresses the design, checkout and testing of an extreme cold environment test chamber and the initial testing of Harmonic Drive strainwave gear sets and Bulk Metallic Glass flexsplines in strainwave gear sets. This chamber was specifically designed for the testing of actuator subsystems such as planetary gearboxes, strainwave gear sets, and full actuators in vacuum and at approximately 100 kelvin (K) and below. The chamber’s capabilities include life testing under specified loads and measurement of gear efficiency, gear wear, temperature, and other operational parameters.
      • 02.0518 Analysis of the Space Robotics Challenge Tasks: From Simulation to Hardware Implementation Murphy Wonsick (Northeastern University), Velin Dimitrov (Northeastern University), Taskin Padir () Presentation: Murphy Wonsick - Wednesday, March 6th, 08:30 AM - Jefferson
        This paper aims to identify the differences between the physical Valkyrie, a humanoid robot built by NASA’s Johnson Space Center, and the simulated Valkyrie used in the Space Robotics Challenge (SRC). The SRC is a NASA Centennial Challenge in which teams competed in a virtual challenge to complete a series of space-related tasks for Valkyrie that are representative of tasks necessary to support operations in unexplored environments. Leveraging the Centennial Challenges program quickly exposes the many different approaches that teams take to complete the tasks. However, not all approaches will work on the physical hardware. The intent of providing this information is to help university research groups without access to a full-size humanoid robot, citizen scientists, and other potential entities interested in working with humanoid robots better understand the limitations and considerations needed to successfully transition from simulation to real hardware. Considering the practical transfer to real hardware early in the development process will help keep future development of humanoid robots for space exploration environments grounded with respect to the realistic capabilities and challenges that will be present in completing relevant tasks in space. To identify the differences, we implemented the first SRC task, aligning a communications dish, which requires Valkyrie to turn two wheels with an attached knob to a designated value, on both the physical and simulated robot. There were four major techniques used by the SRC competitors to rotate the wheels: stroking the wheel using friction between the hand and the wheel, inserting a finger (or fingers) into the wheel and rotating by the wheel spoke, pushing against the knob to rotate the wheel, and grabbing the top of the knob. We attempted three of the four approaches (turning the wheel by a spoke was omitted since the fingers were not designed for such operation) and found that the physical robot was either unable to achieve the necessary precision to reach the desired value or required several re-grasps to be successful due to unplanned slips. Therefore, to turn the wheel we experimentally chose to use a cylindrical grasp on the knob, which allowed us to turn the wheel 360 degrees without re-grasping using our whole-body, trajectory-optimization motion planner. Overall, we found that although the arm’s actual joint positions while turning the wheel were relatively the same between the physical and simulated robot, the arm’s joints experienced a wider range of torque in simulation compared to the physical robot. Additionally, we identified that the fingers in simulation use three coupled revolute joints, do not mimic the actual tendon-driven design of the fingers well, and are generally unable to grasp items that the physical robot is capable of grasping.
      • 02.0520 Sampling Tool Concepts for Enceladus Lander In-situ Analysis Mircea Badescu (Jet Propulsion Laboratory), Paul Backes (Jet Propulsion Laboratory), Scott Moreland (Jet Propulsion Laboratory), Alex Brinkman (Jet Propulsion Laboratory), Dario Riccobono (Politecnico di Torino), Noel Csomay Shanklin (Georgia Institute of Technology), Samuel Ubellacker (Jet Propulsion Laboratory) Presentation: Mircea Badescu - Wednesday, March 6th, 08:55 AM - Jefferson
        A potential future in-situ lander mission to the surface of Enceladus could be the lowest cost mission to determine if life exists beyond Earth since material from the subsurface ocean, where the presence of hydrothermal activity has been strongly suggested by the Cassini mission, is available on its surface after being ejected by plumes and then settling on the surface. In addition the low radiation environment of Enceladus would not significantly alter the chemical makeup of samples recently deposited on the surface. A study was conducted to explore various sampling devices that could be used by an in-situ lander mission to provide 1cc to 5cc volume samples to instruments. In addition to temperature and vacuum environmental conditions, the low surface gravity of Enceladus (1% of Earth gravity) represents a new challenge for surface sampling that is not met by sampling systems developed for microgravity (e.g. comets and asteroids) or higher gravity (e.g. Europa 13%g, Moon 16%g, or Mars 38%g) environments. It is desired to acquire surface plume material that has accumulated in the top 1cm to ensure acquisition of the least processed material. Several sampling devices were developed or adapted and then tested in simulated conditions that resemble the Enceladus surface properties. These devices and test results are presented in the paper.
      • 02.0521 Autonomous ISRU Robotic Excavation and Delivery Hari Nayar (NASA/JPL), Brian Wilcox (Jet Propulsion Laboratory), A Howe (NASA Jet Propulsion Lab) Presentation: Hari Nayar - Wednesday, March 6th, 09:20 AM - Jefferson
        In-Situ Resource Utilization (ISRU) for our purposes is the exploitation of available resources at the site of a landed spacecraft on the surface of another planetary body. This can include harvesting of atmosphere, regolith, or rock for direct use (e.g. as radiation or micrometeorite shielding) or for separation/purification (e.g. for propellant production). The objective of this study is to identify an ISRU architecture, specifically for extracting water from hydrated minerals identified from orbital multispectral imaging on Mars, which can be implemented in an affordable way. By its nature this architecture must incorporate not only all the conventional excavate/scoop/haul/dump/process functions of a terrestrial mining operation on Earth, but also the sorts of maintenance and repair capabilities which any terrestrial mining operation would require in order to stay operational for an extended duration: • keeping the mined material from clogging at choke points in the processing system, • repairing or replacing worn or failed components, • decision-making about where to excavate next, • decision-making about changing the routing of vehicles as the mine site geometry changes or graded roads become unserviceable. While it might be possible to directly replicate the function of each human employee of a terrestrial mine with robotic systems on Mars, this study attempts to identify an architecture that simplifies the robotics and autonomy needs of the system to the point where a long-life and reliable system can be implemented in the near term, and to elaborate a realistic approach to autonomy which can be prototyped within the scope of a realistic task. Autonomous ISRU operations require decisions about where to excavate next as mining proceeds, which rocks or clods need crushing (or further crushing), what path to take between the excavation site and the ore processing site, what path to take from the ore processing site to the dump site, what path to take between the dump site and the excavation site, what preventive maintenance to perform, and fault detection, diagnosis, and repair. Each of these broad areas has many sub-topics, such as (for example) how to modify the transit ramp (or make a new ramp) into an open pit as the geometry of the pit changes, as part of the "what path to take between the excavation site and the ore processing site" broad topic area. Other topics that would generally be straightforward automation but may involve a bit of "autonomy" include how to ensure that a hauling vehicle becomes lined up properly for correct dumping into the intake hopper of the processing plant. Deciding when well-used roads need to be re-graded (or completely re-established along another route) is another topic for autonomy. This report identifies challenges to be overcome and areas for further study.
      • 02.0522 Int-Ball: Crew-supportive Autonomous Mobile Camera Robot on ISS / JEM Shinji Mitani (Japan Aerospace Exploration Agency), Masayuki Goto (JAXA), Ryo Konomura (), Yasushi Shoji (Space Cubics, LLC.), Keiji Hagiwara (MEISEI ELECTRIC CO., LTD.), Shuhei Shigeto (Japan Aerospace Exploration Agency), Nobutaka Tanishima (Japan Aerospace Exploration Agency) Presentation: Shinji Mitani - Wednesday, March 6th, 09:45 AM - Jefferson
        This paper describes the development of an autonomous mobile camera robot that moves inside the JEM pressurized module and captures images and video, and the results of its initial checkout on orbit. The JEM Internal Ball Camera (called Int-Ball) was developed to eventually reduce to zero the crew time spent on routine photo shooting; this is expected to save about 10% of total crew resources. To improve the efficiency of crew photography tasks, a camera that moves autonomously and can hold position in free space is considered useful. The Int-Ball was developed as practical equipment for full-scale crew support, with the goal of building an environment for joint human-robot tasks. If realized, crew time for ISS/JEM use can be applied more effectively. In developing the Int-Ball, civilian technologies such as COTS parts and 3D printing were utilized to shorten the development period and lower cost. The Int-Ball is spherical, with a diameter of 150 mm or less, so as not to disturb crew work (for example by getting into the crew's line of sight) and for safety and portability. Its wireless network cameras, providing real-time video downlink and still image acquisition, are capable of continuous shooting at a resolution of 1280 x 720 pixels or more for 80 minutes or longer. The main battery can be charged with USB bus power. When a command from the ground control center is received, the robot moves autonomously to the target position. By applying an image navigation camera system (called Phenox, adapted from drone technology) that performs onboard self-position estimation using a polyhedral marker, the relative 6-degree-of-freedom state can be estimated in the zero-gravity environment. The control system combines Phenox's self-position estimate with MEMS inertial sensor information to realize 6-degree-of-freedom motion using 12 small axial-flow fans and image-stabilized attitude control using 3-axis reaction wheels. The control system follows the concept of small-satellite flight control software and has a failure detection function and single-failure redundancy. The Int-Ball was launched as payload of a Dragon spacecraft on a SpaceX Falcon 9 on June 3, 2017. The first flight control demonstration was successfully conducted in the JEM pressurized module on June 15. This paper also describes the initial checkout results.
      • 02.0523 Application of Pneumatics in Delivering Samples to Instruments on Planetary Missions Kris Zacny (Honeybee Robotics Spacecraft Mechanisms Corporation), Ralph Lorenz (Johns Hopkins University/Applied Physics Laboratory), Fredrik Rehnmark (Honeybee Robotics), John Costa (Honeybee Robotics Spacecraft Mechanisms Corporation), Joseph Sparta (Honeybee Robotics Spacecraft Mechanisms Corporation), Vishnu Sanigepalli (Honeybee Robotics Spacecraft Mechanisms Corporation), Bernice Yen (Honeybee Robotics Spacecraft Mechanisms Corporation), David Yu (Honeybee Robotics Spacecraft Mechanisms Corporation), Jameil Bailey (Honeybee Robotics), Dean Bergman (Honeybee Robotics), William Hovik (Honeybee Robotics Spacecraft Mechanisms Corporation) Presentation: Kris Zacny - Wednesday, March 6th, 10:10 AM - Jefferson
        Traditional sample acquisition, transfer, and capture approaches rely on mechanical methods (e.g. a drill or a scoop) to acquire a sample, mechanical methods (e.g. a robotic arm) to transfer the sample, and gravity to capture the sample inside an instrument or a sample return container. This approach has some limitations: because of the reliance on gravity, it is only suited to materials with little or no cohesion, and because the sample transfer requires a mechanical system, the instrument or sample return container needs to be easily accessible. Pneumatic-based systems solve these problems because the pneumatic force can exceed the gravitational force, and the sample delivery tubing can be routed around other spacecraft elements, making the placement of the instrument or sample return container irrelevant to the sampling system. This paper presents background on pneumatic systems applied to planetary missions and provides examples of how this could be accomplished on planetary bodies with significant atmospheres (Venus and Titan) and on airless bodies (the Moon, Europa, Ceres).
      • 02.0524 Modeling of Cryobot Melting Rates in Cryogenic Ice Wayne Zimmerman (NASA Jet Propulsion Lab) Presentation: Wayne Zimmerman - Wednesday, March 6th, 10:35 AM - Jefferson
        There currently is significant interest within NASA and the scientific community to explore outer planetary ocean worlds. In this study, both passive and active melt probes (cryobots) have been theoretically analyzed to determine heating power requirements and rates of descent. It is based in part on earlier experimental JPL cryobot studies as well as more recent system engineering studies and provides an analytical estimation adapted to the unique Europan environment.
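        An idealized version of the melt-rate estimate described above balances heater power against the enthalpy needed to warm and melt the column of ice swept by the probe; conduction losses are neglected and the property values are rough averages for cold ice, so the result is illustrative only.

```python
# Idealized melt-probe descent rate: all heater power goes into warming and
# melting the swept ice column; sideways conduction losses are neglected.
RHO_ICE = 920.0   # kg/m^3
C_ICE   = 1600.0  # J/(kg K), rough average over 100 K -> 273 K
L_FUS   = 334e3   # J/kg, latent heat of fusion
PI      = 3.141592653589793

def melt_descent_rate(power_w, diameter_m, ambient_T_k):
    area = PI * (diameter_m / 2.0) ** 2
    dT = 273.15 - ambient_T_k
    return power_w / (RHO_ICE * area * (C_ICE * dT + L_FUS))  # m/s

v = melt_descent_rate(power_w=5000.0, diameter_m=0.25, ambient_T_k=100.0)
print(f"~{v*3600*100:.0f} cm/hr for a 5 kW, 25 cm probe in 100 K ice")
```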
    • 02.06 Future Missions & Enabling Technologies for In Situ Exploration, Sample Returns Patricia Beauchamp (Jet Propulsion Laboratory)
      • 02.0601 In-Situ Science Instruments in a Radioisotope Power System Environment Brian Bairstow (Jet Propulsion Laboratory), William Smythe (Jet Propulsion Laboratory), Alex Austin (Jet Propulsion Laboratory), Young Lee (Jet Propulsion Laboratory) Presentation: Brian Bairstow - Thursday, March 7th, 08:30 AM - Jefferson
        Radioisotope Power Systems (RPS) have enabled or enhanced many historic and current space missions. Concepts for future missions that could be powered by RPS include in-situ missions to the atmospheres, surfaces, and interiors of Europa, Titan, and other destinations. Such mission concepts are often tightly constrained on mass and volume, while still needing to support the instrument packages necessary to carry out ambitious science investigations, such as the search for signs of life. The physical proximity of RPS to payloads and to the in-situ environment has the potential to impact science instruments and science measurements. Radiation, thermal, vibration, electromagnetic interference (EMI), and magnetic fields impacts must all be considered carefully. This paper looks at existing and potential future RPS designs that could support in-situ missions, and discusses possible interactions with in-situ instruments, including those under development to open up new avenues of scientific discovery. In-situ operations have additional complications compared to in-space operations, including unique environments, and packaging and form factor requirements that drive spacecraft designs. Radiation, EMI, and magnetic fields from RPS could be an order of magnitude higher than for orbital spacecraft, if the RPS must be packaged within the element instead of mounted externally. Waste heat from RPS could cause changes to the local environment around the spacecraft, particularly in an atmospheric or subsurface environment. Vibrations produced by potential future dynamic RPS could interfere with seismic measurements. In addition, in-situ investigations can require different instrument technologies and measurement approaches. Many of these instrument types have not yet flown and require additional development to make them flight capable in the small volumes available for payloads. These developments will have implications for the instruments and their interactions with the RPS environment. Furthermore, all this is complicated by the fact that in-situ mission concepts can vary wildly from one another. Balloon elements have very different requirements compared to melt probes and submarines. Mission designers must consider these characteristics along with RPS and instrument accommodations.
      • 02.0602 Flight-Experiment Validation of the Dynamic Capabilities of a Flux-Pinned Docking Interface Frances Zhu (Cornell University), Mason Peck (Cornell University), Mitchell Dominguez (Cornell University), Laura Jones Wilson (Jet Propulsion Laboratory) Presentation: Frances Zhu - Thursday, March 7th, 08:55 AM - Jefferson
        Flux-pinned interfaces for spacecraft leverage the physics of superconductor interactions with electromagnetism to govern the dynamics between two bodies in close-proximity. The dynamic behavior of this interface constitutes a stable, stiff joint in up to six degrees of freedom without mechanical contact or active control. As part of a spacecraft-docking subsystem, a flux-pinned interface offers several unique advantages over traditional mechanical capture systems, including robustness to control failures, contactless reorientation of the capture target, and collision mitigation. Due to the highly nonlinear, coupled dynamics of flux-pinning physics, ground testing cannot accurately assess the full dynamic capabilities of the interface. Furthermore, current analytical models of the dynamic behavior are either too computationally expensive or too inaccurate to verify real-time state prediction and control. Motivated by these limitations, this study describes a series of experiments performed in a microgravity environment during a parabolic-flight campaign to measure the dynamic behavior of the interface in a flight-traceable environment. This paper presents the performance of a flux-pinned interface in the full six degrees of freedom in terms of several metrics: success of capture at various energetic states, interface stiffness, contact force upon collision, settling time, and final relative position and attitude of the docking (or capture) of the two spacecraft bodies. The testbed consists of two free-floating test articles: one ~15 kg free-flying component outfitted with an array of magnets and one ~150 kg spacecraft analogue that includes paired superconductors, seven video cameras for motion capture, and a release mechanism to control the initial kinematics. The results described here represent 35 tests with significant initial kinetic energy and 48 tests with near-zero initial motion. The boundary between successful and unsuccessful capture has been identified for a range of initial translational and angular velocity, in which 20 of the 35 cases showed successful capture. Specifically, the flux-pinned interface provided successful docking at over 20 cm/s in translational velocity and 30 deg/s in angular velocity. This boundary provides a context for comparison with mechanical capture technologies. The paper summarizes the experimental approach, the results, and derives quantifiable metrics that assess the flux-pinned docking interface as a developing technology. Further, the data collected from the parabolic flights contribute to a predictive dynamics model, which is indispensable for any implementation in a flight project.
      • 02.0603 Genetic Algorithms for Autonomous, Learned Robotic Exploration in Extreme, Unknown Environments Frances Zhu (Cornell University), David Elliott (Cornell University), Zhidi Yang (), Haoyuan Zheng (Cornell University) Presentation: Frances Zhu - Thursday, March 7th, 09:20 AM - Jefferson
        Exploring and traversing extreme terrain with surface robots is difficult, but highly desirable. The behavior between the terrain and vehicle, terramechanics, is hard to model accurately, especially if the terrain is not known a priori. The ability of the system to track a trajectory greatly decreases when the terramechanics are not modeled properly. As the terrain becomes more extreme, the tracking performance further decreases because inaccuracies in the terramechanics model more significantly affect the motion of the vehicle. Knowing the terramechanics model allows preemptive, or feedforward, control that improves tracking performance and mitigates unsafe motion or control action. For these reasons, learning a terramechanics model online is necessary. Additionally, a learned model in interpretable form retains useful and easily extractable information about the vehicle's interaction with the environment. This paper develops a new method for learning the vehicle's dynamics and control policy when interacting with an unknown environment, while also providing an interpretable model of the system's closed-loop dynamics. The proposed method uses a model-based genetic algorithm to iteratively refine the dynamics model of the vehicle, including the unknown terramechanics, and an optimal control policy. The genetic algorithm assumes a model form for both the dynamics model and control policy, the parameters of which are initially uncertain but converge to a local, possibly global, solution as the vehicle accumulates dynamic information. A simulation and a physical experiment of a radio-controlled car with Ackerman steering evaluate the trajectory-tracking performance of the developed control algorithm by commanding both car analogues to track a predefined trajectory. The tracking accuracy of a neural network controller, as well as a pure-pursuit controller, offers baseline performance for the developed control method. As the refinement of the dynamics model affects the control policy, the closed-loop tracking accuracy encompasses the ability of the learning algorithm to converge on both an accurate dynamics model and an optimal control policy. For an extreme, unknown environment, the results show that the genetic algorithm tracks a given trajectory more accurately than a model-free supervised neural network approach, which ultimately cannot provide an interpretable model. With the interpretable model from the proposed method, engineers can better design, analyze, and verify vehicle and control algorithm architectures that achieve stability and optimal performance. Scientists can derive scientifically rich conclusions from the parameters of the learned terramechanics model. Further, the culmination of this research enables farther-reaching and riskier surface exploration campaigns, expanding our minimal knowledge of the universe we occupy.
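        The sketch below shows the generic structure of such a parameter-refining genetic algorithm (selection, crossover, mutation); the fitness function is a stand-in for "simulate the closed loop and score tracking error" and is not the paper's terramechanics model or controller.

```python
# Generic parameter-refining genetic algorithm; the fitness function is a
# placeholder for a closed-loop simulation scored by tracking error.
import numpy as np

rng = np.random.default_rng(1)

def fitness(theta):
    # Placeholder: negative distance from an unknown "true" parameter set.
    return -np.sum((theta - np.array([0.7, -0.2, 1.5])) ** 2)

def evolve(pop, n_gen=50, elite=4, sigma=0.1):
    for _ in range(n_gen):
        scores = np.array([fitness(t) for t in pop])
        parents = pop[np.argsort(scores)[-elite:]]               # selection
        children = []
        while len(children) < len(pop) - elite:
            a, b = parents[rng.integers(elite, size=2)]
            child = np.where(rng.random(a.size) < 0.5, a, b)     # crossover
            children.append(child + rng.normal(0, sigma, a.size))  # mutation
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(t) for t in pop])]

best = evolve(rng.normal(0, 1, size=(24, 3)))
print("refined parameters:", np.round(best, 2))
```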
      • 02.0604 Area-of-Effect Softbots (AoES) for Asteroid Proximity Operations Jay Mc Mahon (University of Colorado Boulder), Christoph Keplinger () Presentation: Jay Mc Mahon - Thursday, March 7th, 09:45 AM - Jefferson
        Soft robotics has recently exploded as a new area of research for terrestrial robotics applications, since the capabilities of these systems can very often outperform traditional "hard" robotic designs. However, only limited development of soft robotic systems for space applications has been discussed to date. This paper will provide an introduction and overview of Area-of-Effect Softbots (AoES), which are currently in development under a Phase 2 NASA Innovative Advanced Concepts (NIAC) project. AoES are designed to operate in proximity to, and on the surface of, small asteroids to support mining and planetary defense missions. Their unique design and capabilities depend on the incorporation of soft, compliant, and lightweight materials. AoES have a large area-to-mass ratio, which allows them to take advantage of the peculiarities of the dynamical environment around small asteroids. Specifically, AoES will use solar radiation pressure to sail to the surface of the target asteroid after being deployed at a safe altitude from a mothership around the asteroid. This capability and the associated control laws will be demonstrated, removing the need for propulsion systems. Furthermore, the large, flexible surface area allows for robustness with respect to uncertainty about the asteroid surface structure - it can provide flotation to prevent sinking into very loose, dusty regolith, and also provide anchoring to the surface through natural adhesion and electroadhesion forces. The enabling technology that will allow the AoES design loop to close is a new class of soft actuators known as HASEL actuators. These actuators harness an electrohydraulic mechanism, whereby electrostatic forces generate hydraulic pressure to drive shape change in a soft fluid-filled structure. HASELs provide an extremely power- and mass-efficient mechanism for actuating the large flexible surface areas that are the essential components defining AoES. Current system design, requirements, and key tradeoffs will be discussed - with a particular focus on the actuation, mobility, anchoring, materials, and power systems/components. The nominal mission profile and concept of operations for using AoES in an asteroid mining mission will be outlined. While AoES are specifically designed for asteroid proximity and surface operations, the system capabilities will be applicable to a wide variety of missions in the future, thus opening the door for capable soft robots in space.
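        As a rough illustration of why a large area-to-mass ratio enables solar-radiation-pressure sailing near a small asteroid, the estimate below compares SRP acceleration with local gravitational acceleration. The robot area and mass, the asteroid size and density, and the deployment distance are all hypothetical values, not AoES design numbers.
```python
# Rough comparison of solar radiation pressure (SRP) acceleration on a
# high area-to-mass-ratio robot with the local gravity of a small asteroid.
# The SRP constant at 1 AU and Newton's constant are standard values; every
# robot/asteroid parameter here is a hypothetical illustration.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
P_SRP = 4.56e-6        # N/m^2, solar radiation pressure at 1 AU (absorbing surface)

area, mass = 5.0, 8.0          # m^2 deployed area, kg robot mass (assumed)
a_srp = P_SRP * area / mass    # SRP acceleration on the robot

rho, R = 1500.0, 250.0         # kg/m^3 bulk density, m radius (assumed asteroid)
M = (4.0 / 3.0) * math.pi * R**3 * rho
g_surface = G * M / R**2             # gravity at the surface
g_deploy = G * M / 1000.0**2         # gravity at a 1 km deployment distance (assumed)

print(f"SRP acceleration        : {a_srp:.2e} m/s^2")
print(f"gravity at the surface  : {g_surface:.2e} m/s^2")
print(f"gravity at 1 km standoff: {g_deploy:.2e} m/s^2  "
      f"(SRP is {100 * a_srp / g_deploy:.0f}% of this)")
```
        Even with these modest assumed numbers, SRP is a sizable fraction of local gravity at deployment altitude, which is the regime in which sailing to the surface without propulsion becomes plausible.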
      • 02.0607 A Spring Propelled Extreme Environment Robot for Off-World Cave Exploration Steven Morad (The University of Arizona), Thomas Dailey (University of Arizona), Jekan Thangavelautham (University of Arizona) Presentation: Steven Morad - Thursday, March 7th, 10:10 AM - Jefferson
        Pits on the Moon and Mars are mysterious geological formations that have yet to be explored. These formations can provide protection from harsh diurnal temperature variations, ionizing radiation, and meteorite impacts. Some have proposed that these underground formations are well suited as human outposts. The Martian pits may also harbor remnants of past life. Unfortunately, these formations have been off-limits to conventional wheeled rovers and lander systems due to their collapsed ceiling or "skylight" entrances. In this paper, a new low-cost method to explore these pits is presented using the Spring Propelled Extreme Environment Robot (SPEER). The SPEER consists of a launch system that flings disposable spherical microbots through skylights into the pits. The microbots are composed of disposable aluminum (Al-6061) spheres with an array of adapted COTS sensors and a solid rocket motor for soft landing. By moving most control authority to the launcher, the microbots become very simple, lightweight, and low-cost. We present a preliminary design of the microbots that can be built today using commercial components for under 500 USD. The microbots have a total mass of 1 kg, with more than 750 g available for a science instrument. In this paper, we present the design, dynamics and control, and operation of these microbots. This is followed by initial feasibility studies of the SPEER system by simulating exploration of a known lunar pit in Mare Tranquillitatis.
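        A back-of-the-envelope sizing of the launch problem, assuming flat-ground vacuum ballistics under lunar gravity; the 50 m standoff distance and 45 degree launch angle are illustrative assumptions rather than values from the paper.
```python
# Back-of-the-envelope launch sizing for flinging a microbot toward a skylight:
# flat-ground vacuum ballistics under lunar gravity. The 50 m standoff distance
# and 45 deg launch angle are illustrative assumptions, not values from the paper.
import math

g_moon = 1.62           # m/s^2, lunar surface gravity
range_m = 50.0          # m, horizontal distance from launcher to skylight (assumed)
theta = math.radians(45.0)

v_launch = math.sqrt(range_m * g_moon / math.sin(2 * theta))  # required launch speed
t_flight = 2 * v_launch * math.sin(theta) / g_moon            # time of flight
ke_1kg = 0.5 * 1.0 * v_launch**2                              # KE of a 1 kg microbot

print(f"launch speed : {v_launch:.1f} m/s")     # ~9 m/s for this geometry
print(f"flight time  : {t_flight:.1f} s")
print(f"kinetic energy of a 1 kg microbot: {ke_1kg:.1f} J")
```
        Numbers of this order show why a spring launcher plus a small solid motor for the final descent is a credible combination for reaching a skylight from a safe standoff distance.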
      • 02.0608 A Flux Pinning Concept for On-orbit Capture and Orientation of an MSR Orbiting Sample Container Paulo Younse (Jet Propulsion Laboratory) Presentation: Paulo Younse - Thursday, March 7th, 10:35 AM - Jefferson
        A concept for on-orbit capture and orientation of a Mars orbiting sample container (OS) using flux pinning was developed as a potential technology for the Mars Sample Return campaign. The system consists of a set of type-II superconductors field-cooled below their critical temperature using a cryocooler, and operates on an orbiting sample container with a series of permanent magnets spaced around the exterior, along with an integrated layer of shielding to preserve the magnetic properties of the returned samples. Benefits of the approach include passive, non-contact capture and orientation, as well as potential mass savings relative to various mechanical methods. A system prototype was developed, characterized, and tested in a micro-gravity environment to demonstrate feasibility. A flux pinning model was developed that accounts for magnet geometry, superconductor geometry, superconductor training geometry, superconductor temperature, superconductor material properties, and magnetic field shape, and outputs the forces and torques the superconductors impart on the OS via the magnets. A magnetic model of the OS was developed to evaluate magnetic shield effectiveness and demonstrate successful shielding of the sample. A vision system using AprilTag fiducials was developed and demonstrated on a free-floating OS in a micro-gravity environment to estimate relative OS position, orientation, linear velocity, and angular velocity. An integrated Capture, Containment, and Return System (CCRS) payload concept for an Earth Return Orbiter using the flux pinning approach was proposed and traded against other competing architectures.
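        For readers unfamiliar with how flux pinning produces a restoring force, one commonly used first-order approximation is the frozen-image model: a field-cooled superconductor behaves as if it contained a fixed image dipole at the magnet's cooling position plus a mobile diamagnetic image that mirrors the magnet's current position. The sketch below evaluates the resulting axial force for an illustrative coaxial geometry; it is not the higher-fidelity model described in the abstract, and the parameter values are assumptions.
```python
# Axial force on a permanent magnet above a field-cooled type-II superconductor,
# using the frozen-image approximation for a coaxial dipole geometry. This is an
# illustrative first-order model with assumed parameters, not the flux-pinning
# model developed in the paper.
import numpy as np

MU0 = 4e-7 * np.pi     # vacuum permeability, H/m
m_dip = 1.0            # magnet dipole moment, A*m^2 (assumed)
z_fc = 0.010           # field-cooling height of the magnet, m (assumed)

def coaxial_dipole_force(m1, m2, d):
    """Axial force magnitude between two coaxial dipoles separated by d."""
    return 3 * MU0 * m1 * m2 / (2 * np.pi * d**4)

def axial_force(z):
    """Net vertical force on the magnet at height z (positive = upward)."""
    # Mobile diamagnetic image at -z: repels the magnet (pushes it up).
    f_mobile = +coaxial_dipole_force(m_dip, m_dip, 2 * z)
    # Frozen image at -z_fc: attracts the magnet back toward the cooling height.
    f_frozen = -coaxial_dipole_force(m_dip, m_dip, z + z_fc)
    return f_mobile + f_frozen

for z_mm in (8.0, 10.0, 12.0):
    z = z_mm * 1e-3
    print(f"z = {z_mm:4.1f} mm  ->  F = {axial_force(z):+.3f} N")
# At z = z_fc the two images cancel (F = 0); displacement in either direction
# produces a restoring force, which is the passive stiffness exploited for capture.
```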
      • 02.0609 A Milli-newton Propulsion System for the Asteroid Mobile Imager and Geologic Observer (AMIGO) Jekan Thangavelautham (University of Arizona), Greg Wilburn (University of Arizona) Presentation: Jekan Thangavelautham - Thursday, March 7th, 11:00 AM - Jefferson
        Exploration of small bodies, namely comets and asteroids, remains a challenging endeavor due to their low gravity. The risk is so high that missions such as Hayabusa II and OSIRIS-REx will be performing touch-and-go maneuvers to obtain samples. The next logical step is to perform longer-term mobility on the surface of these asteroids. This can be accomplished by sending small landers of 1 kg or less with miniature propulsion systems that can just offset the force of asteroid gravity. Such a propulsion system would ideally be used to hop on the surface of the asteroid. Hopping has been found to be the most efficient form of mobility in low gravity. Use of wheels for rolling presents substantial challenges, as the wheels cannot gain enough traction to roll. The Asteroid Mobile Imager and Geologic Observer (AMIGO) utilizes 1 kg landers that are stowed in a 1U CubeSat configuration and deployed, releasing an inflatable that is 1 m in diameter. The inflatable is attached to the top of the 1U lander, enabling high-speed communications and a means of easily tracking the lander from an overhead mothership. Milli-newton propulsion is required for the AMIGO landers to perform ballistic hops on the asteroid surface. The propulsion system is used to navigate the lander across the surface of the asteroid under the extremely low gravity while taking care not to exceed escape velocity. Although the concept for AMIGO missions is to use multiple landers, the more surface area evaluated by each lander the better. Without a propulsion system, each AMIGO will have a limited range of observable area. The propulsion system also serves as a rough attitude control system (ACS), as it enables pointing and regulation of where the lander is positioned via an array of MEMS thrusters. Several different techniques have been proposed for hopping nano-landers in low-gravity environments, including the use of reaction wheels, electro-polymers, and rocket thrusters. In this concept, we heat a sublimating solid to provide propulsive thrust, which is simple and effective. Storing the propellant as a solid provides much better storage density to ensure the longest lifetime possible. The starting point for this propulsion system involves selection of an appropriate, high-performance sublimate that meets the mobility needs of AMIGO. The paper will cover selection of the right sublimate and initial design and prototyping of the thruster using standard off-the-shelf arrays of electro-active MEMS valves. The device will be integrated with the fuel source, and thrust profiles will be measured on a milli-newton test stand inside a vacuum chamber. MEMS devices are used because of their weight and volume savings compared to traditional large-sized thrusting mechanisms. Through these efforts we are advancing a milli-newton thruster that uses non-combustible propellant and can be readily integrated onto nano-landers for asteroid surface exploration.
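        A quick estimate shows why milli-newton thrust is the relevant scale: the weight of a 1 kg lander in asteroid gravity, and the escape speed a hop must stay below. The asteroid size and density below are illustrative assumptions, not properties of a specific AMIGO target.
```python
# Why milli-newton thrust: weight of a 1 kg lander on a small asteroid, and the
# escape speed that a hop must stay under. Asteroid size and density are assumed
# illustrative values, not properties of a specific AMIGO target.
import math

G = 6.674e-11
rho = 2000.0      # kg/m^3 bulk density (assumed)
R = 300.0         # m asteroid radius (assumed)
m_lander = 1.0    # kg (from the abstract)

M = (4.0 / 3.0) * math.pi * R**3 * rho
g_ast = G * M / R**2                    # surface gravity
weight = m_lander * g_ast               # thrust needed just to offset gravity
v_esc = math.sqrt(2 * G * M / R)        # speed a hop must not exceed

print(f"surface gravity : {g_ast:.2e} m/s^2")
print(f"lander weight   : {weight * 1e3:.3f} mN")   # sub-milli-newton scale
print(f"escape speed    : {v_esc:.2f} m/s")
```
        With surface gravity in the micro-g range, sub-milli-newton to milli-newton thrust levels are enough both to hop and to avoid accidentally exceeding the tens-of-centimeters-per-second escape speed.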
      • 02.0610 A Flight-traceable Cryogenic Thermal System for Use in a Sample-capture Flux-pinned Interface Ian Mc Kinley (Jet Propulsion Laboratory), Christopher Hummel (Jet Propulsion Laboratory), Laura Jones Wilson (Jet Propulsion Laboratory) Presentation: Ian Mc Kinley - Thursday, March 7th, 11:25 AM - Jefferson
        Flux-pinned interfaces for spacecraft have been studied for almost a decade for their dynamic properties that allow designers to shape the dynamic behavior of spacecraft relative to one another. However, the efficacy of these interfaces hinges on the requirement that the type-II superconductors in the interface first be cooled below their critical temperature in the presence of a magnetic field, then held below their critical temperature for the duration of the dynamic interaction. Ground-based research often relies on consumable liquid nitrogen to cool the superconductors, but little work has been published on a flight-traceable cryocooler-based solution to meet the thermal constraints. This work provides estimates of the mass, power, and performance of a system to facilitate trade studies for potential spacecraft applications. This paper details a thermal system designed to cool three 16 mm thick, 56 mm diameter Yttrium Barium Copper Oxide disks to below their critical temperature of 88 K for a ground-based testbed. Data collected on the device shows that it successfully provides the thermal environment required for the flux-pinned interface while consuming 105 W of power. A thermal model accurately predicts heat flows and temperatures in the device. This model applied to a space environment predicts a power consumption of 67 W in a space-flight device.
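        The kind of first-order bookkeeping such a thermal model performs can be sketched as radiative parasitics through MLI plus conduction through supports, converted to compressor input power with an assumed cryocooler specific power. Only the disk geometry and the sub-88 K requirement come from the abstract; the emissivity, support conductance, sink temperature, and specific power below are assumptions, so the output is not expected to reproduce the reported 105 W / 67 W figures.
```python
# First-order heat-load bookkeeping for holding YBCO disks below 88 K:
# radiative parasitics through MLI plus conduction through supports, converted
# to compressor input power with an assumed cryocooler specific power.
# Disk geometry (three 16 mm x 56 mm dia. disks) is from the abstract; the
# emissivity, conductance, sink temperature, and specific power are assumptions.
import math

T_hot, T_cold = 293.0, 80.0        # K: surrounding structure / cold-stage setpoint
sigma = 5.670e-8                   # W m^-2 K^-4

# Cold-stage radiating area: three disks plus an assumed mounting plate.
r, t = 0.028, 0.016
A_disks = 3 * (2 * math.pi * r**2 + 2 * math.pi * r * t)
A_cold = A_disks + 0.02            # m^2, extra area for cold plate (assumed)

eps_eff = 0.03                     # effective emissivity of MLI blanket (assumed)
Q_rad = eps_eff * sigma * A_cold * (T_hot**4 - T_cold**4)

G_support = 0.005                  # W/K, lumped support/harness conductance (assumed)
Q_cond = G_support * (T_hot - T_cold)

Q_total = Q_rad + Q_cond
spec_power = 25.0                  # W of input per W lifted at 80 K (assumed)
P_input = Q_total * spec_power

print(f"radiative load   : {Q_rad:.2f} W")
print(f"conductive load  : {Q_cond:.2f} W")
print(f"compressor input ~ {P_input:.0f} W")
```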
      • 02.0611 EURO-CARES - a European Sample Curation Facility for Sample Return Missions. Lucy Berthoud (University of Bristol) Presentation: Lucy Berthoud - Thursday, March 7th, 11:50 AM - Jefferson
        EURO-CARES (European Curation of Astromaterials Returned from Exploration of Space) was a three-year multinational project funded by the European Commission's Horizon 2020 research programme. The objective of EURO-CARES was to create a roadmap for the implementation of a European Extra-terrestrial Sample Curation Facility (ESCF). This facility was intended to be suitable for the curation of samples from return missions from the Moon, asteroids, Mars, and other bodies of the Solar System. The EURO-CARES project covered five technical areas, led by scientists and engineers from institutions across Europe. 1. Planetary Protection: Planetary protection requirements and implementation approaches were assessed by experts and guided by international policy. Existing sterilization methods and techniques were reviewed. It was found that measures already employed for high-containment biosafety facilities are suitable for a restricted sample return mission. However, the development of certain technologies, such as a 'double walled' isolator, remote manipulation, and integration of scientific analytical instruments, is also required. 2. Facilities and Infrastructure: Aspects from building design to storage of the samples were examined in the project. Requirements for the facility included a receiving laboratory, a cleaning and opening laboratory, a bio-assessment laboratory, a curation laboratory, and sample storage. Different design solutions were prepared in collaboration with architects. 3. Instruments and Methods: The methodology for characterization of returned samples and the instrument base required at the ESCF were determined. The analyses provide an appropriate level of characterization while ensuring minimal contamination and minimal alteration of the sample. When the samples are returned to Earth, several stages of studies would be conducted. 4. Analogue Samples: Analogue proxy samples were considered critical for testing sample handling, preparation techniques, storage conditions, and planetary protection measures, as well as for validating new analytical methods. A list of useful analogue samples has been assembled. 5. Sample Transport: The Earth re-entry capsule from a sample return mission is targeted at a specific landing ellipse on Earth and must then be transported safely to the ESCF in an appropriate transport container. Lessons learned from past sample return missions show that preparations for recovery included training of the recovery team for every possible scenario, possible temporary facilities near the landing site, and environmental measurements and collection of samples at the landing site; added to this, if necessary, would be planetary protection measures. In conclusion, long-term curation of extra-terrestrial samples requires that the samples are kept clean, to minimize the risk of Earth contaminants, while at the same time contained, in case of possible biological material. This work describes a roadmap for a combined high-containment and ultraclean European sample curation facility and the development of the necessary novel scientific and engineering methods and techniques. Acknowledgements: This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 640190.
      • 02.0612 Energy Modeling of VTOL Aircraft for Titan Aerial Daughtercraft (TAD) Concepts Daiju Uehara (The University of Texas at Austin), Larry Matthies (Jet Propulsion Laboratory) Presentation: Daiju Uehara - Thursday, March 7th, 04:30 PM - Jefferson
        Considerable interest has arisen in exploration of Titan, an icy moon of Saturn, with a variety of aerial vehicles, such as fixed-wing, rotary-wing, or hybrid rotor-wing aircraft. With one-seventh of Earth's gravity and almost 5 times the atmospheric density of Earth at sea level, Titan is much more advantageous for flight than Earth. A small-scale VTOL aircraft deployed as a "daughtercraft" from a lander or balloon "mothercraft" is one concept for aerial exploration of Titan, with the potential to access the surface and return to the mothercraft to deposit samples in scientific instruments and/or recharge batteries for subsequent sorties. Such aircraft, termed Titan Aerial Daughtercraft (TAD), have been considered with total masses on the order of 10 kg or less, to be affordable components of the overall scientific payload of the mothercraft. This study examines and develops mathematical models of flight energy requirements and durations for two different types of VTOL aerial vehicle, a multicopter (quadcopter) and a tail-sitter, deployed from a lander or balloon. The mathematical models compute flight power and energy demands in different flight states, including hover, descent, climb, and forward flight. In this paper, a parametric study is performed for a quadcopter and a tail-sitter to characterize trade-offs between range, size, and payload capacities of the daughtercraft. A multicopter (quadcopter) has the advantages of mechanical and control simplicity as well as the maturity of flight dynamics modeling. Although a tail-sitter is not as common as a multicopter-shaped VTOL aircraft, such a hybrid aircraft has the capability of vertical take-off and landing and of efficient level flight or gliding as a fixed-wing aircraft, while retaining mechanical simplicity. For a balloon-based scenario, the energy required for completing the entire mission is first estimated for a given total vehicle mass and vehicle speed in each flight regime. A battery mass corresponding to the required energy is then determined with a battery specific-energy constant. All the remaining mass, after the battery mass allocation, is treated as available payload mass for scientific instruments, sampling devices, etc. This analysis gives an idea of how much mass could potentially be allocated to the payload mass fraction for a given total mass. The variation of atmospheric conditions, especially air density, as a function of altitude is also considered for estimating the energy demands and performance of the TAD. For a lander-based scenario, the payload mass is fixed so that all the unused mass for a given total mass is allocated to the battery, to examine the possible flight radius around a lander. The quantitative analysis of propulsion system characteristics provides insight into the feasibility of the mission concepts as well as the specific mass/power margin of each mission scenario, which determines the capacity for payload, on-board scientific instruments, thermal management, and avionics components.
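        A minimal version of the hover-power and battery/payload mass bookkeeping described above, using simple actuator-disk (momentum) theory for hover on Titan, is sketched below. Titan's gravity and near-surface density are standard approximate values; every vehicle parameter is an illustrative assumption rather than a number from the paper.
```python
# Hover-power and battery/payload mass bookkeeping for a Titan VTOL daughtercraft
# using simple actuator-disk (momentum) theory. Titan gravity and near-surface
# density are standard approximate values; every vehicle parameter is an
# illustrative assumption, not a number from the paper.
import math

g_titan = 1.35          # m/s^2
rho_titan = 5.4         # kg/m^3 near-surface atmospheric density (approx.)

m_total = 10.0          # kg, total daughtercraft mass (order of 10 kg, assumed)
m_structure = 4.0       # kg, airframe/motors/avionics (assumed)
rotor_radius = 0.25     # m, per rotor (assumed)
n_rotors = 4            # quadcopter configuration
fom = 0.6               # hover figure of merit (assumed)

# Ideal induced power per rotor from momentum theory, divided by figure of merit.
thrust_per_rotor = m_total * g_titan / n_rotors
disk_area = math.pi * rotor_radius**2
p_hover = n_rotors * thrust_per_rotor**1.5 / (fom * math.sqrt(2 * rho_titan * disk_area))

mission_time_h = 1.0            # hours of hover-equivalent flight per sortie (assumed)
e_required_wh = p_hover * mission_time_h

spec_energy_wh_kg = 150.0       # Wh/kg usable battery specific energy (assumed)
m_battery = e_required_wh / spec_energy_wh_kg
m_payload = m_total - m_structure - m_battery

print(f"hover power      : {p_hover:.1f} W")
print(f"battery mass     : {m_battery:.2f} kg")
print(f"payload available: {m_payload:.2f} kg")
```
        The same bookkeeping, with forward-flight and climb power models substituted for the hover term, yields the range/size/payload trades the abstract describes; the low gravity and dense atmosphere are what make the hover power so small compared with an equivalent Earth vehicle.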
    • 02.07 In Situ Instruments for Landed Surface Exploration, Orbiters, and Flybys Ricardo Arevalo (University of Maryland) & Stephanie Getty (NASA - Goddard Space Flight Center) & Xiang Li (University of Maryland, Baltimore County)
      • 02.0701 Opportunities in NASA Planetary Science Instrument Development Rainee Simons (NASA - Headquarters), James Gaier (NASA - Glenn Research Center), Florence Tan (NASA Headquarters) Presentation: Rainee Simons - Friday, March 8th, 08:30 AM - Jefferson
        The strategic objective of NASA's Planetary Science Division (PSD) within the Science Mission Directorate is to ascertain the context, origin, and evolution of the solar system and the potential for life elsewhere. To advance this objective, the PSD and the science community use instruments that are deployed on robotic spacecraft, landers, and rovers. Since these instruments operate in a resource-constrained environment, size, weight, and power consumption are of vital importance. In addition, these instruments must operate in hostile environments including vacuum, extreme temperatures, and intense radiation fields. The science community has a large array of instruments available for advancing planetary science. Although these instruments may have the desired performance, they are often large and bulky and consume significant amounts of power. The challenge in accommodating these instruments on a spacecraft is to scale down their design without sacrificing performance. The growing use of CubeSats and SmallSats presents opportunities to develop a suite of new small-size, low-mass, low-power planetary science instruments. In addition, opportunities also exist to deploy these small instruments on newer platforms such as drones and rotorcraft for science experiments on Titan and Mars. Historical data suggest that it typically takes a decade or more to develop a new instrument for space flight operation. Successful teams often include a mix of individuals with strong backgrounds and experience in science, technology, management, advanced materials, component design, software, manufacturing, and testing. Science instruments onboard robotic spacecraft are vital for acquiring scientific knowledge from future planetary science missions. PSD has instituted two significant planetary science instrument development programs: the Planetary Instrument Concepts for the Advancement of Solar System Observations (PICASSO) and the Maturation of Instruments for Solar System Exploration (MatISSE). The goal of the PICASSO program is to support the development of low technology readiness level (TRL) spacecraft-based instrument components and systems that show promise for use in future planetary missions, to the point where they may be proposed in response to the MatISSE program. The goal of the MatISSE program is to develop and demonstrate planetary science instruments with significantly improved measurement capabilities, to the point where they may be proposed in response to future announcements of space flight opportunity without additional extensive technology development. Proposed investigations may target any Solar System body except the Earth and Sun, in order to advance the objectives outlined in the NASA Science Plan. The talk will focus on future opportunities in NASA's planetary science instrument development programs.
      • 02.0702 Development of a Nucleic Acid-Based Life Detection Instrument Testbed Srinivasa Bhattaru (Massachusetts Institute of Technology), Jacopo Tani (MIT), Kendall Saboda (Massachusetts Institute of Technology), Christopher Carr (Massachusetts Institute of Technology) Presentation: Srinivasa Bhattaru - Friday, March 8th, 08:55 AM - Jefferson
        Future space instruments will explore increasingly complex questions about our universe, including the origin of life on Earth and the presence of life elsewhere. These instruments will likely integrate chemical and biological subsystems that will face unique challenges; existing protocols typically require non-stabilized components and manual handling. The Search for Extra-Terrestrial Genomes (SETG) instrument is being developed for in situ extraction and sequencing of nucleic acids as a biomarker of life on other planetary bodies. Such sequencing is being implemented using a nanopore-based device, the Oxford Nanopore Technologies (ONT) MinION; as such, it needs to integrate many benchtop-based protocols. Here we describe an automated testbed, designed and built to automate and rapidly prototype extraction, library preparation, and sequencing protocols that could be used in our instrument. The system is designed to be modular with respect to components, facilitating hardware and software modifications with minimal system impact, while also precise across multiple test runs, allowing for accurate evaluation of the impact of varying system inputs as well as exploration of system failure modes and potential solutions. We also present testing results from each of the three primary subsystems (extraction, library preparation, and sequencing) as well as a plan for and initial data on subsystem integration into an end-to-end system. The extraction subsystem is able to match or approach nucleic acid yields attained via manual testing for B. subtilis spores in water (~15%) and spores in basalt (~12%). The library preparation subsystem can successfully prepare a library of E. coli DNA that can be identified after sequencing. The loading/sequencing subsystem has successfully automated sequencer loading, resulting in a sequencing run producing 1.4 billion bases after 1 day of sequencing from a pre-prepared sample. These testing results provide valuable data about the challenges of biological protocol automation, while directly informing future design decisions for SETG. In the process, the lessons learned from this milestone are relevant to the technological development of future planetary science instruments that take advantage of molecular biology techniques.
      • 02.0703 Enabling Measurement of Darwinian Evolution in Space Kendall Saboda (Massachusetts Institute of Technology), Christopher Carr (Massachusetts Institute of Technology) Presentation: Kendall Saboda - Friday, March 8th, 09:20 AM - Jefferson
        A common definition of life is a “self-sustaining chemical system capable of Darwinian evolution,” or natural selection of inherited variations that contribute to survival and reproduction. Thus, measuring Darwinian evolution would seem highly relevant to searching for life beyond Earth. While it is now feasible to track evolution in the laboratory, such an experiment has not yet been reported in space. Prior work has demonstrated the ability of microorganisms to adapt to multiple extremes relevant to space and potentially habitable niches beyond Earth. For example, Wassmann and colleagues cultured Bacillus subtilis 168 under the selective pressure of UV light (200-400 nm) over ~700 generations; the resulting population was found to be significantly more resistant than the ancestral line to both UV and other stressors, including increased salinity, vacuum, desiccation, and ionizing radiation. More recently, Tirumalai and colleagues cultured Escherichia coli in ground-based low-shear modeled microgravity for ~1000 generations revealing 5 coding mutations of not-yet-characterized significance. Miniaturization of nucleic acid extraction and sequencing technologies is enabling development of space instruments targeting nucleic acids, including work by our group and recent use of a nanopore sequencer on the International Space Station (ISS). In addition, NASA plans to deploy a deep space cubesat, BioSentinel, with a biological payload, to characterize the ability of yeast to carry out DNA repair in space. Critically, this system demonstrates the capability to initiate, sustain, and characterize biological systems in space-compatible formats. Here we describe how these advances can be integrated to enable in-situ measurement of Darwinian evolution, with applications for understanding adaptation to space and for future life detection missions. Specifically, we focus on applying nanopore sequencing to detect and characterize evolution in the lab and propose a system to autonomously measure evolution in space as an extension of the Search for Extraterrestrial Genomes (SETG), an instrument under development for in-situ nucleic acid-based life detection. The minimal approach would involve sequencing before and after a culturing period during which organisms of interest would be exposed to a simulated or actual stressor. An integrated system for measuring Darwinian evolution in space would not only allow for definitive measurement of nucleic-acid based life; it could also be used to improve understanding of microbial life’s ability to adapt to the harsh conditions of space and, in doing so, support human health beyond Earth and inform future use of synthetic-biology during deep space missions.
      • 02.0704 In-Situ Close-Range Imaging with Plenoptic Cameras Martin Lingenauber (German Aerospace Center - DLR), Ulrike Krutz (), Florian Fröhlich (German Aerospace Center - DLR), Christian Nissler (German Aerospace Center (DLR)), Klaus Strobl (German Aerospace Center (DLR)) Presentation: Martin Lingenauber - Friday, March 8th, 09:45 AM - Jefferson
        This paper discusses the concept of plenoptic hand lens imagers for in-situ close-range imaging during planetary exploration missions. Hand lens imagers, such as the MAHLI camera on board the Mars rover Curiosity, are important tools for in-situ investigations, e.g. of rock layers, minerals or dust. They are also important for the preparation and documentation of other instrument operations or for rover health assessment. Due to the small distance between object and camera, significant physical limitations affect the imaging as well as the operational performance of hand lens imagers. Most evident is the limited depth of field of a few millimeters for working distances of a few centimeters. This requires highly accurate positioning of the camera and it also significantly limits the in-focus content of an image. Hence, in order to have an extended object completely in focus, a sequence of images, each focused to a different distance, is required. Additionally, a single, passive camera is insufficient to compute depth from a single shot; only the combination of multiple images, either taken from different vantage points or at different focal settings, allows this. To overcome these limitations, we propose the use of plenoptic cameras as hand lens imagers. From a single exposure, it is possible to create an extended depth of field image while maintaining a more open aperture, and at the same time a metric depth map. These and other advantages become most evident at close range and might make it possible to omit space-grade focus mechanisms. A plenoptic camera is obtained by adding an additional matrix of lenslets shortly in front of the image sensor of a conventional camera. Hence, available space camera hardware can be used in order to gain a new type of sensor. Each lenslet has a diameter of a few micrometers and views the image, which the main lens projects into the camera, from a slightly different vantage point. Thus, a plenoptic camera maintains the 3-D nature of the main lens image, as the parallax between the lenslets makes it possible to compute the depth for each image point. Additionally, the micro lens array allows recording not only the location but also the direction of incoming light rays. This results in a 4-D data set known as a light field. From a single recorded 4-D light field, the aforementioned depth map but also 2-D images with an extended depth of field and several other data products can be derived. The paper provides an overview of the plenoptic camera technology and the light field processing in the context of in-situ hand lens imaging. We present analysis results gained with a mathematical model of a plenoptic system and with experimental data. The experimental data contain images of different test and rock targets acquired with a plenoptic camera and with a comparable conventional camera. The data were recorded in order to investigate the image quality, the range estimation capabilities and the usability for planetary scientists.
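        To illustrate how a recorded 4-D light field yields images focused at different depths after the fact, the toy sketch below performs classic shift-and-sum synthetic refocusing over the sub-aperture views; random data stands in for real lenslet imagery, and the shift slope is a free parameter.
```python
# Toy shift-and-sum synthetic refocusing of a 4-D light field L[u, v, y, x]:
# each sub-aperture view (u, v) is shifted in proportion to its offset from the
# central view and the results are averaged. Random data stands in for a real
# lenslet capture; the slope parameter selects the synthetic focal plane.
import numpy as np

rng = np.random.default_rng(1)
U = V = 5                      # angular samples (sub-aperture views)
H = W = 64                     # spatial resolution of each view
light_field = rng.random((U, V, H, W))   # placeholder for a recorded light field

def refocus(lf, slope):
    """Average all sub-aperture views, each shifted by slope * angular offset."""
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - U // 2)))
            dx = int(round(slope * (v - V // 2)))
            out += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

near = refocus(light_field, slope=+2)   # synthetic focus at one depth
far = refocus(light_field, slope=-2)    # synthetic focus at another depth
print(near.shape, far.shape)            # (64, 64) (64, 64)
```
        Searching, per pixel, for the slope that maximizes local sharpness in such a refocused stack is one simple route to the metric depth map mentioned above.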
      • 02.0705 A Chip-Scale Plasmonic Spectrometer for in Situ Characterization of Solar System Surfaces Nancy Chanover (New Mexico State University), David Voelz (New Mexico State University) Presentation: Nancy Chanover - Friday, March 8th, 10:10 AM - Jefferson
        We discuss the development of a plasmonic spectrometer for in situ characterization of solar system surface and subsurface environments. The two goals of this effort are to (1) quantitatively demonstrate that a plasmonic spectrometer can be used to rapidly acquire high signal-to-noise spectra between 0.5 - 1.0 microns at a spectral resolution suitable for unambiguous detection of spectral features indicative of volatiles and characteristic surface mineralogies, and (2) demonstrate that this class of spectrometer can be used in conjunction with optical fibers to access subsurface materials and vertically map the geochemistry and mineralogy of subsurface layers, thereby demonstrating that a plasmonic spectrometer is feasible in a low-mass, low-power, compact configuration. Our prototype spectrometer consists of a broadband lamp/source, a fiber optic system to illuminate the sample surface and collect the reflected light, a mosaic filter element based on plasmon resonance, and a focal plane array (FPA) detector. The critical filter element of the spectrometer is based on the internal plasmon resonance of metallic nanostructures. Unlike conventional grating-based spectrometers, the spectral resolution of the spectrometer is mainly determined by two parameters: the spectral width of the resonance peak and the tunability of the center wavelength of resonance. Our initial numerical simulations revealed that periodic nanostructures in a thin gold membrane provide a narrow resonance peak. In addition, the resonant peak is highly tunable within the target wavelength range. We developed a membrane-based plasmonic filter that can directly be implemented on an optical fiber. First, we developed a new fabrication process for nanostructures in thin Au membranes (100 ~ 500 nm) suspended in air. For fabrication, periodic nanoscale circular hole arrays were patterned in a 500 nm thick Au film using a focused-ion-beam milling system. The Au film was separated from the substrate using a highly selective chemical etching process. Our initial findings showed that we can control the central wavelength of the filter by changing the index of refraction of the surrounding medium, thus we explored several media (e.g. water, glycerol, glucose) as a means of introducing a range of indices of refraction, and thus central wavelengths of the filter, using microfluidic channels. In addition to the development of the plasmon filter element, we constructed a testbed to explore the use of optical fibers for source illumination and signal transmission to the focal plane array. We discuss our preliminary design studies of the plasmonic nanostructure prototypes and their application to miniaturized instrumentation for in situ characterization of solar system surface and subsurface environments.
      • 02.0706 Linear Ion Trap Mass Spectrometer (LITMS) for in Situ Astrobiology Xiang Li (University of Maryland, Baltimore County), Andrej Grubisic (NASA Goddard Space Flight Center), Marco Castillo (University of Maryland), Friso Van Amerom (), Ryan Danell (Danell Consulting, Inc.), Desmond Kaplan (KapScience LLC), Ricardo Arevalo (University of Maryland), William Brinckerhoff (NASA - Goddard Space Flight Center) Presentation: Xiang Li - Friday, March 8th, 10:35 AM - Jefferson
        The highly compact Linear Ion Trap Mass Spectrometer (LITMS) combines pyrolysis gas chromatography/mass spectrometry (GCMS) and Mars-ambient laser desorption mass spectrometry (LDMS) through a single, miniaturized yet highly capable linear ion trap mass analyzer. The LITMS instrument is based substantially on the Mars Organic Molecule Analyzer - Mass Spectrometer (MOMA-MS) for the 2020 ExoMars mission, but features further miniaturization and analytical enhancements identified during the MOMA-MS development but not realized due to schedule or mission architecture limitations. In addition to MOMA capabilities (GCMS, LDMS, positive ion detection, tandem mass spectrometry), LITMS enhances the instrument performance by including negative ion detection, a dual-frequency RF power supply to increase mass range, precision subsampling of drill cores at fine (≤ 1 mm) spatial scales, and pyrolysis of powdered samples for evolved gas analysis (EGA) of minerals and organics. LITMS will enable in situ characterization of inorganics and organics in individual rock core layers and features. This level of integrated analytical capability is critical to achieving advanced astrobiology objectives at Mars and other planets. The LITMS instrument is also scheduled to undergo field testing in the Atacama Desert, Chile, in mid-late 2018/early 2019, by deploying onboard the K-REX2 rover as part of the Atacama Rover Astrobiology Drilling Studies (ARADS) project, led by NASA's Ames Research Center. The team will demonstrate autonomous LDMS analysis of inorganic and organic species from both surface and subsurface samples collected on site. This paper will describe the technical details and demonstrate the analytical capabilities of LITMS, as well as the instrument configuration design for the field testing.
      • 02.0707 The Effects of Spacecraft Charge on In-situ Ionospheric Measurements Carlos Maldonado (University of Colorado at Colorado Springs) Presentation: Carlos Maldonado - Friday, March 8th, 11:00 AM - Jefferson
        The natural space environment and its effects on space systems present a host of challenges concerning the design, development, and operation of satellites and spacecraft. Of particular interest to the aerospace community are the interactions between spacecraft operating in low-Earth orbit (LEO) and the ambient ionosphere, which can lead to potentially hazardous levels of spacecraft charging and cause interference for GPS and communication signals. The integrated Miniaturized Electrostatic Energy Analyzer (iMESA) has been developed at the Space Physics and Atmospheric Research Center (SPARC) in the Department of Physics at the United States Air Force Academy (USAFA) to act as a rugged and low-cost instrument capable of providing in-situ measurements of ionospheric plasma density, temperature, and the resulting spacecraft charge. A small constellation of these sensors is being placed in LEO at a variety of orbital inclinations and altitudes to provide in-situ data which can then be ingested into physics-based assimilation models to provide near-real-time space weather predictions. Presently, there is an early iMESA design operating on board Space Test Program Satellite 3 (STPSat-3), with five additional sensors manifested for launch in 2018-19. Prior to launch and on-orbit operation, a test campaign to investigate the effects of spacecraft charge on plasma density and temperature measurements has been conducted. The laboratory-based measurements on a flight-model version of the iMESA instrument are used to characterize these effects, which allows for comparison with on-orbit data from the STPSat-3 instrument and corresponding remotely acquired digisonde measurements. The measurements of merit for the iMESA are the spacecraft "frame" charge and the ionospheric density and temperature. The experimental data are used to evaluate the effects of spacecraft charging on the density and temperature measurements, particularly the artificial increase in these measurements. The experimental correlation factors are then used to correct the on-orbit data from STPSat-3 and compare them to remote digisonde measurements.
      • 02.0708 Raman-LIBS, a Journey from Mars to Earth via the Moon. Andrew Court (TNO) Presentation: Andrew Court - Friday, March 8th, 11:25 AM - Jefferson
        In 2005 ESA initiated a study for a spectrometer combining Raman and Laser Induced Breakdown Spectroscopy (LIBS) as a potential instrument for the Pasteur payload of the ESA ExoMars rover. It is a fundamental, next-generation instrument for organic, mineralogical and elemental characterization of soil, rock samples and organic molecules. The objective was to combine Raman spectroscopy and LIBS (R-L) into a single instrument sharing many hardware commonalities. The resulting 'elegant breadboard' (EBB) was successfully tested under Mars-like conditions. Ultimately a reduced, Raman-only instrument was selected. In a later ESA study, the R-L system was adapted for use in a lunar rover, and again the EBB was used under high-vacuum conditions, demonstrating that in situ laser analysis of regolith-like materials is possible under high vacuum. The developed knowledge and technology have continued to be used here on Earth, with various spin-offs into industrial and medical applications. This paper will describe the origin of the R-L technology and its path through the different space roles into the spin-off systems now transferring to Earth-based applications, and will look forward to the potential role of R-L in future space missions to asteroids and comets.
      • 02.0709 Nucleic Acid Sequencing under Mars-like Conditions Christopher Carr (Massachusetts Institute of Technology) Presentation: Christopher Carr - Friday, March 8th, 11:50 AM - Jefferson
        All known life uses informational polymers based on nucleic acids. Future missions to Mars and Ocean Worlds such as Enceladus and/or Europa may target these or related polymers in the search for extant life beyond Earth. Nanopore-based devices represent a promising approach for sensing and characterizing these, and possibly other, biomarkers. Here we demonstrate low-input (200 pg) DNA sequencing, equivalent to extraction from 10^6 Bacillus subtilis spores at 5 percent extraction yield, using the Oxford Nanopore MinION in a thermal vacuum chamber under Mars-like temperature -60˚C, atmosphere (100% CO2), and pressure (400 to 500 Pa). Current limits of detection correspond to 2 to 5 pg DNA. With additional advances in nucleic acid extraction and library preparation efficiency, a sequencing-based approach to life detection will be viable at cell densities representative of the most extreme Mars analog environments here on Earth.
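        As a consistency check on the quoted low-input figure, the short calculation below combines the 10^6 spores and 5 percent yield from the abstract with the standard ~4.2 Mbp B. subtilis genome size and ~650 Da average mass per base pair (one genome copy per spore assumed).
```python
# Consistency check: mass of DNA recovered from 1e6 B. subtilis spores at 5% yield,
# using the standard ~4.2 Mbp genome size and ~650 Da average mass per base pair
# (one genome copy per spore assumed).
AVOGADRO = 6.022e23
genome_bp = 4.2e6                  # B. subtilis 168 genome, base pairs (approx.)
mass_per_bp_g = 650.0 / AVOGADRO   # grams per base pair

genome_mass_pg = genome_bp * mass_per_bp_g * 1e12   # picograms per genome
spores = 1e6
yield_fraction = 0.05

recovered_pg = spores * genome_mass_pg * yield_fraction
print(f"DNA per genome: {genome_mass_pg * 1000:.1f} fg")
print(f"Recovered DNA : {recovered_pg:.0f} pg")   # ~230 pg, i.e. roughly the quoted 200 pg
```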
    • 02.08 Q/V band connectivity and Alphasat experience Giorgia Parca (Italian Space Agency) & Giuseppe Codispoti (ASI, Italian Space Agency)
      • 02.0801 High Power Transmitters for Q/V-band Communications - beyond Alphasat Naresh Deo () Presentation: Naresh Deo - Wednesday, March 6th, 11:00 AM - Jefferson
        Since the delivery of the Q-band Solid-State Power Amplifiers (SSPA) for Alphasat TDP5 in 2010 and their successful operation in space for over 5 years, very significant advances have been made in the technology and design of SSPAs, thereby greatly enhancing their performance, suitability and reliability for space payloads. The most important of these developments is the improvement in the capabilities and maturity of Gallium Nitride (GaN)-based MMIC power amplifiers in Q/V-bands and higher. Other critical factors include innovative circuit integration techniques, novel manufacturing methods and advanced materials. Given the same mechanical outline, the emerging technology can generate more than four times the power output, two times the efficiency and an order of magnitude higher reliability than the original Gallium Arsenide (GaAs) MMIC-based SSPA (produced by the author's team and company) for the Alphasat Q/V communications payload. Newly developed GaN MMIC power amplifier devices can generate two to three times the RF power output at twice the efficiency of their GaAs counterparts in the same frequency range, thus increasing the power output per unit volume by a large factor while lowering the DC power consumption. Furthermore, GaN devices can operate at much higher channel temperatures (>165 deg. C) with reliability (MTTF) consistent with long-term (>15 years) use in space equipment. To complete the transmitter function, an integrated upconverter that employs the most robust and application-specific frequency scheme may be incorporated in the same physical structure as the SSPA. The MMIC power amplifier devices may include other significant functions or features, such as power detection, modulation and gain control. Provisions to improve the linearity of the transmitter can be implemented where necessary. In this paper, we will present a design concept that is capable of generating 50 to 100 Watts of RF power over the 37.5-40 GHz band with power-added efficiency approaching 25-30% within approximately the same outline and weight as the original Alphasat SSPA. Highly efficient, robust and compact power combining methods developed for this family of SSPAs make use of novel materials and manufacturing processes to further enhance the capabilities of the Q-band SSPA. Another useful feature of the approach adopted for the SSPAs is the modularity of their construction, which allows rapid development and manufacture of customized transmitters for various payload requirements. The paper will present the performance and operating features of SSPAs designed using various GaN MMICs developed by the author's company and employing unique combining methods and implementation techniques. Design concepts to further increase the power output and enhance other desirable attributes for Q/V-bands will be presented, together with designs and results for space-borne SSPAs for various frequency bands of interest within 25 to 120 GHz.
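        A quick illustration of the DC power and waste heat implied by the quoted output power and efficiency, treating power-added efficiency as roughly the overall DC-to-RF conversion efficiency (a simplification that neglects drive power):
```python
# DC input power and dissipated heat implied by the quoted RF output and
# efficiency, treating power-added efficiency as roughly the overall DC-to-RF
# conversion efficiency (a simplification; drive power is neglected).
for p_rf in (50.0, 100.0):             # W RF output (range quoted in the abstract)
    for pae in (0.25, 0.30):           # power-added efficiency (quoted range)
        p_dc = p_rf / pae              # approximate DC input power
        p_diss = p_dc - p_rf           # heat the payload must reject
        print(f"P_RF={p_rf:5.1f} W  PAE={pae:.0%}  ->  "
              f"P_DC~{p_dc:5.0f} W, dissipation~{p_diss:5.0f} W")
```
        The dissipation figures make clear why the higher channel-temperature tolerance of GaN, and not just its raw output power, matters for the payload thermal design.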
      • 02.0802 Optimization of Q/V-band Smart Gateway Switching in the Framework of Q/V-Lift Project Tommaso Rossi (University of Rome Tor Vergata), Carlo Riva (Politecnico di Milano), Lorenzo Luini (Politecnico di Milano), Mauro De Sanctis (Universití di Roma Tor Vergata, Dip. Ing. Elet.), Marina Ruggieri (University of Roma "Tor Vergata"), Giuseppe Codispoti (ASI, Italian Space Agency), Giorgia Parca (Italian Space Agency), Giandomenico Amendola (universita della calabria) Presentation: Tommaso Rossi - Wednesday, March 6th, 11:25 AM - Jefferson
        Future high-throughput satellite (HTS) systems are expected to reach the milestone of terabit/s capacity through the exploitation of the Q- and V-bands (and possibly beyond) in the feeder link. In this respect, the H2020 QV-LIFT project, kicked off in November 2016, aims at filling crucial gaps in the ground segment technology required by future Q/V-band HTS systems. Specifically, at the network level, the QV-LIFT team will design and develop a smart gateway management system (SGMS) operating in the Q/V-band. The SGMS will implement fade mitigation techniques able to counteract the detrimental propagation impairments across the feeder link. This paper reports a performance assessment of fade mitigation based on smart gateway diversity through simulations. In a real system, two elements are necessary in order to carry out gateway switchover: a predictor of the atmospheric channel (i.e., rain attenuation) and a switching decision algorithm. A basic switching decision algorithm is assessed assuming a simple predictor of channel conditions against the case of ideal prediction. It is assumed that link outage is due to rain attenuation, whereas clear-sky components, even though not negligible at Q/V-band, are counteracted by simple techniques such as a static link margin. In the simulations, the channel is fully characterized by synthetic time series of rain attenuation generated by a Multi-site Time-series Synthesizer (MTS). One-year time series are generated by the MTS for a reference scenario including 10 gateways operating at V-band (uplink). The simulated rain attenuation time series across each gateway link have the following properties: a) they reproduce the climatological distribution of rain in each location and b) they are spatially correlated. An N+P diversity scheme is considered with N=9 and P=1, i.e., the system is operational if at least 9 gateways are operational. When a gateway goes into outage due to a deep rain fade, the corresponding traffic is switched to the spare gateway. This is a realistic configuration for one gateway cluster of a larger HTS system made of several clusters providing continental coverage and up to terabit/s capacity.
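        A minimal sketch of the kind of N+P switching simulation described above is given below. Synthetic rain-attenuation series stand in for the MTS output, a gateway is rerouted to the spare when its attenuation exceeds a static margin, and outage is logged when no spare is available; the margin, event statistics, and time resolution are all illustrative.
```python
# Minimal N+P smart-gateway switching simulation over synthetic rain-attenuation
# time series (a stand-in for MTS-generated, spatially correlated series).
# N = 9 active gateways, P = 1 spare; a gateway is rerouted to the spare when its
# attenuation exceeds a static link margin. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(7)
N, P = 9, 1
steps = 24 * 360                  # one day at 10 s resolution
margin_db = 10.0                  # static link margin per feeder link (assumed)

# Synthetic attenuation: sparse lognormal "rain events" smoothed into fades.
atten = np.zeros((N + P, steps))
for g in range(N + P):
    events = rng.random(steps) < 1e-3
    atten[g] = np.convolve(events * rng.lognormal(2.5, 0.6, steps),
                           np.hanning(60), mode="same")

spare_user = -1                   # index of gateway currently routed via the spare
outage_steps, switchovers = 0, 0
for t in range(steps):
    faded = [g for g in range(N) if atten[g, t] > margin_db]
    if spare_user >= 0 and atten[spare_user, t] <= margin_db:
        spare_user = -1           # fade cleared, release the spare
    for g in faded:
        if g == spare_user:
            continue              # already rerouted via the spare
        if spare_user < 0 and atten[N, t] <= margin_db:
            spare_user = g        # reroute this gateway's traffic to the spare
            switchovers += 1
        else:
            outage_steps += 1     # no spare available: capacity lost this step

print(f"switchovers              : {switchovers}")
print(f"gateway-seconds in outage: {outage_steps * 10}")
print(f"availability             : {1 - outage_steps / (N * steps):.6f}")
```
        Replacing the instantaneous threshold test with a short-horizon attenuation predictor is exactly the refinement whose benefit the paper quantifies against ideal prediction.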
      • 02.0803 SDN for Smart Gateway Diversity Optimization in High Throughput Satellite Systems Tommaso Rossi (University of Rome Tor Vergata), Marina Ruggieri (University of Roma "Tor Vergata") Presentation: Tommaso Rossi - Wednesday, March 6th, 11:50 AM - Jefferson
        Future satellite networks could benefit from new paradigms that are currently applied to terrestrial networks, such as Software Defined Networking (SDN) and Network Functions Virtualization (NFV); these can improve key system characteristics such as flexibility, customization, scalability, etc. This paper presents an analysis of the use of SDN for the optimization of High Throughput Satellite (HTS) systems. The latter will use EHF (in particular beyond-Ka-band frequencies, such as Q/V and W bands) in the feeder link, and in this framework it is mandatory to use spatial diversity to counteract tropospheric fading. One of the most interesting diversity schemes is "smart gateway"; this technique has to be carefully optimized to achieve high system performance. This paper shows that SDN is a good candidate for the intelligent management of this system. In particular, the paper presents an analysis of the use of the SDN paradigm for the so-called "N+0" and "N+P" smart gateway diversity schemes. The two schemes allow, in case of heavy fading conditions on one gateway, the redirection of the user traffic towards a redundant gateway (N+P), or the partition of the lost capacity over the other active gateways (N+0), ensuring service continuity.
    • 02.09 Mission Design for Spacecraft Formations Giovanni Palmerini (Sapienza Universita' di Roma)
      • 02.0901 Reconstruction of the Shape of a Tumbling Target from a Chaser in Close Orbit Giovanni Palmerini (Sapienza Universita' di Roma), Renato Volpe (), Marco Sabatini (Universita` Roma La Sapienza) Presentation: Giovanni Palmerini - Monday, March 4th, 09:00 PM - Madison
        Operations involving two or more spacecraft, including approach, rendezvous and servicing, are not always based on cooperation among them. The lack of cooperation means a limited set of information initially available to the approaching spacecraft. Still, the determination by the chaser of the relative kinematic state of the target spacecraft stands as a required step to continue the approach. Complementing this first fundamental information about relative motion, the reconstruction of the target's shape can also be considered an important part of the rendezvous and a pre-requisite for safe docking. In fact, shape reconstruction enables the chaser to understand the target's configuration, to assess its integrity and eventually to compare it with already known spacecraft models. The reconstruction needs to be accurate even while starting from the limited number of images captured only from the points of view attained during relative motion, and is indeed a quite challenging task. Due to the extremely wide range of possible relative poses and light conditions, an extensive test campaign based on numerical simulations is required to validate candidate algorithms for this operational phase. Only after the simulations are successfully passed would it be possible to move towards experiments in a ground-based testbed and finally to in-flight qualification. The proposed paper details the experience gained with the simulation phase at the Guidance and Navigation Lab of Sapienza Università di Roma. A software suite has been developed in order to simulate in-orbit acquisition of the target image, managing its 3D CAD model according to the relative dynamics and lighting conditions. Several spacecraft configurations, different in terms of shape and relative pose, are assumed for the target, and the relevant images are captured by the chaser during its relative motion around it. An effective process for the identification and matching of features, capable of managing their appearance and disappearance during the sequence of images, has been implemented. Following these steps, it is possible to gain, in addition to an understanding of the relative dynamics, an educated guess of the shape of the target. Advanced filtering techniques taking into account relative orbital dynamics are applied, significantly contributing to the final result of the process. The post-facto comparison between the actual CAD model and the estimated target shape shows appealing success rates for the proposed technique in the recognition task.
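        The feature identification and matching step can be illustrated with off-the-shelf tools; the paper does not state which detector or matcher it uses, so the ORB/Hamming combination and the image file names below are generic placeholders.
```python
# Illustrative feature detection and matching between two consecutive chaser
# images using ORB + brute-force Hamming matching (OpenCV). The paper does not
# specify its detector/matcher; this is a generic stand-in, and the image file
# names are placeholders.
import cv2

img_prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img_next = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img_prev, None)   # keypoints + binary descriptors
kp2, des2 = orb.detectAndCompute(img_next, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Matched pixel coordinates; tracked across the image sequence these become the
# inputs to triangulation and to the dynamics-aware filtering described above.
tracks = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:200]]
print(f"{len(matches)} matches, keeping {len(tracks)} strongest")
```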
    • 02.10 Space Radiation and its Interaction with Shielding, Electronics and Humans Maria De Soria Santacruz Pich (Jet Propulsion Laboratory) & Lembit Sihver (Technische Universität Wien)
      • 02.1001 Radiation Risks and Countermeasures for Humans on Deep Space Missions Lembit Sihver (Technische Universität Wien), S. M. Javad Mortazavi (University of Wisconsin Milwaukee) Presentation: Lembit Sihver - Monday, March 4th, 04:30 PM - Madison
        The radiation environment encountered in space differs much in nature from that on Earth, with contributions from protons and highly energetic ions up to iron, resulting in radiation levels far exceeding the ones present on Earth. Accurate knowledge of the physical characteristics of the radiation field, the solar activity and the mission length, which influence the radiation risks for humans on deep space missions, e.g. to Mars, is therefore very important. It has been estimated that the transit times for a human mission to Mars vary from 5 to 6 months each way, with a typical figure of about 6 months each way for a long-duration stay on Mars, and up to 8-10 months each way for a short-duration stay on Mars. That means that the astronauts will be exposed to a harsh radiation environment for at least 2-3 years. This paper describes the radiation environment in deep space, some of the radiation health risks astronauts are exposed to on long-term missions, as well as the requirements and limitations of physical protection for reducing these risks. Since it has been shown that passive shielding alone is not adequate for long-term deep space missions, we also present the need for new effective methods of biological protection, e.g. ground-based in vitro pre-flight screening of the candidates for evaluation of the magnitude of their adaptive responses. Furthermore, methods for boosting the immune system of astronauts and the possibility of using medical countermeasures are discussed. Notably, the use of vitamin C as a promising non-toxic, cost-effective, easily available radiation mitigator is described.
      • 02.1002 Does Gender Matter for Radioadaptation and Radiation Susceptibility in Deep Space Missions? S. M. Javad Mortazavi (University of Wisconsin Milwaukee), Lembit Sihver (Technische Universität Wien) Presentation: S. M. Javad Mortazavi - Monday, March 4th, 04:55 PM - Madison
        It is believed that astronauts who will participate in future long-term space missions beyond the shielding effect of the Earth's magnetosphere (e.g. Mars journeys) face an increased risk of cancer due to the exposure to HZE particles. It has been suggested that while, on the ground, women mount a more potent immune response than men, they are more vulnerable to radiation-induced cancer than their male counterparts. As this belief may originate from data sources such as atomic bomb survivors, two key points should be carefully considered before any conclusions can be made. Firstly, the differences between the cancer incidence and death rates in men and women can to some extent be due to differences in background rates rather than a real difference in sensitivity to radiation effects. Secondly, the data obtained from atomic bomb survivors are for high radiation doses only, as well as for totally different radiation fields than those in deep space. Furthermore, the induction of adaptive response (AR) should be further investigated. A NASA report published in 2016 confirms the importance of adaptive response. Although early adaptive response studies showed no difference between the pattern of adaptive response in lymphocytes of a male and a female pre-exposed to an adapting dose of 2 cGy and then irradiated with the challenging dose of 150 cGy, these studies clearly suffered from very small sample sizes. More recently, a significant increase in the life span of female mice pre-exposed to low-dose radiation compared to female mice with no pre-exposure has been shown. Even if one can expect a very large difference between the results found for mice exposed to low-dose gamma rays and the radiation effects in humans exposed to a mixed radiation field with HZE particles, as in deep space, it might be important to study in more detail the gender dependence of radioadaptation after pre-irradiation with adapting (conditioning or priming) doses. This will therefore be discussed in this paper.
      • 02.1004 Bayesian Radiation Design Margin for Spacecraft Reliability Prediction Anthony Coburger (Johns Hopkins University Applied Physics Laboratory) Presentation: Anthony Coburger - Monday, March 4th, 05:20 PM - Madison
        In this case study, a proposed Bayesian reliability model is applied to Displacement Damage Dose (DDD) and Total Ionizing Dose (TID) field data. Displacement damage, which disrupts the lattice structure in semiconductor materials, gradually degrades solar cells' ability to generate power. TID damage results when high-energy particles interact with onboard semiconductor devices and accumulate charge within the gate oxide. As with DDD, TID shifts the semiconductor device parameters away from their desired values. The result is a degradation in device performance, circuit performance, and device functionality, and an increased risk of non-compliant performance and device functional failure. Additionally, variability in the space weather environment and the non-deterministic nature of the radiation damage make accurate predictions of on-orbit device response a challenge. These and other sources of uncertainty are currently addressed by applying conservative but industry-familiar risk mitigation techniques including Radiation Design Margins (RDM), radiation shielding measures, and worst-case circuit analyses. However, these approaches do not propagate and quantify the uncertainty, which leaves them prone to overestimating or underestimating the amount of radiation protection that is sufficient. By leveraging Bayesian methods, the proposed model methodically accounts for this uncertainty in the reliability estimates and reduces it as new data are incorporated.
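        To make the contrast with a fixed radiation design margin concrete, the sketch below performs a simple Bayesian update on a lognormal part-failure dose and then predicts the probability of surviving an assumed mission dose. The lognormal form, prior, test data, and dose values are illustrative choices, not the model proposed in the paper.
```python
# Minimal Bayesian sketch: failure dose of a part family modeled as lognormal
# with unknown log-mean mu; a grid posterior over mu is updated with observed
# failure doses and then used to predict survival at the mission dose.
# The lognormal form, prior, data, and doses are illustrative, not the paper's model.
import numpy as np
from scipy import stats

mission_dose = 10.0                 # krad(Si) behind shielding (assumed)
rdm = 2.0                           # classical radiation design margin (assumed)
sigma = 0.4                         # known log-std of failure dose (assumed)

mu_grid = np.linspace(np.log(5.0), np.log(200.0), 400)
dmu = mu_grid[1] - mu_grid[0]
prior = stats.norm.pdf(mu_grid, loc=np.log(50.0), scale=0.7)   # vague prior on mu

observed_failure_doses = np.array([28.0, 35.0, 41.0])          # test data (assumed), krad
loglike = np.sum(stats.norm.logpdf(np.log(observed_failure_doses)[:, None],
                                   loc=mu_grid[None, :], scale=sigma), axis=0)
posterior = prior * np.exp(loglike - loglike.max())
posterior /= posterior.sum() * dmu

# Posterior-predictive probability that a part survives the mission dose,
# i.e. P(failure dose > mission_dose), averaged over the posterior on mu.
surv = stats.norm.sf(np.log(mission_dose), loc=mu_grid, scale=sigma)
p_survive = np.sum(surv * posterior) * dmu
post_mean_mu = np.sum(mu_grid * posterior) * dmu

print(f"P(part survives {mission_dose:.0f} krad) = {p_survive:.4f}")
print(f"mean failure dose / mission dose = "
      f"{np.exp(post_mean_mu) / mission_dose:.1f} (vs. required RDM = {rdm})")
```
        The point of the comparison is that the Bayesian output is a probability with quantified uncertainty that tightens as test data accumulate, whereas the RDM check is a single pass/fail ratio.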
      • 02.1005 Radiation and Signal Analysis of the Falcon Solid-state Energetic Electron Detector (FalconSEED) Carlos Maldonado (University of Colorado at Colorado Springs) Presentation: Carlos Maldonado - Monday, March 4th, 09:25 PM - Madison
        The Falcon Solid-state Energetic Electron Detector (FalconSEED) is an energetic charged particle sensor currently being designed and developed at the Space Physics and Atmospheric Research Center (SPARC) in the Physics Department of the United States Air Force Academy (USAFA) in an effort to monitor electron energy deposition to spacecraft systems in geosynchronous orbit (GEO). The GEO radiation environment is of particular concern to spacecraft designers and operators due to the high energy particles that are trapped in the radiation belts, where sufficient energy deposition can cause hazardous levels of spacecraft charging. The space radiation environment poses a constant threat to spacecraft by exposing them to a flux of high energy particles, galactic cosmic rays, solar particle events, or nuclear detonations in space, which can result in spacecraft anomalies or failures such as dielectric charging, single-event latchups (SEL), single-event upsets (SEU) and single-event burnouts (SEB). The initial radiation analysis of the instrument is conducted using the AE9/AP9 environmental software to provide an estimate for the total radiation dose to critical internal components such as power supplies and microprocessors. The radiation hardening through design, in an effort to ensure an operational lifetime of one year in GEO for commercial off-the-shelf (COTS) parts, is described in this paper. The COTS components require a total dose of less than 5 kilorads in order to survive the yearlong mission. Using 6.35 mm aluminum, the total accumulated dose is 1.2 krads, well within the survivability range for COTS electronics. In an effort to further minimize sensor susceptibility to the radiation environment, the electronics design includes a rad-tolerant power supply and the use of cyclical redundancy checks and power cycling. These techniques will alleviate the issues associated with single-event upsets and latchups, which are calculated at 1.77×10^-6 SEUs per bit per 20 minutes. Additionally, this work includes the predicted instrument response to the GEO electron flux in the 10-100 keV energy range. The electron particle spectra in the specified energy range were obtained using the AE9/AP9 trapped particle model and were then used as the input flux for the instrument signal analysis. The GEANT4 Monte Carlo transport code was used to model the passage of energetic electrons through the instrument to the detector face and estimate sensor response. To provide an instrument signal estimate that accommodates the expected electron input during nominal and worst case scenarios, the electron flux is calculated at median and high flux confidence levels to ensure sensor operation without saturation. Initial simulations were run to bound the electron throughput in terms of the desired 10 and 100 keV energy range. To ensure that the solid state detector within the sensor detection chamber is photon tight, a thin 340 nm aluminum layer was added to the orifice model. The thin layer, or window, is designed to allow energetic electrons to pass through and reach the detector while blocking photons.
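        For reference, the shielding margin and upset rate quoted above can be checked with simple unit bookkeeping; all input numbers below come directly from the abstract, and nothing new is assumed.

```python
# Unit bookkeeping for the shielding and upset figures quoted in the abstract.
dose_limit_krad    = 5.0    # COTS survivability requirement (krad)
dose_shielded_krad = 1.2    # AE9/AP9 estimate behind 6.35 mm Al (krad)
print(f"dose margin ~{dose_limit_krad / dose_shielded_krad:.1f}x")

seu_per_bit_per_20min = 1.77e-6
seu_per_bit_per_day = seu_per_bit_per_20min * (24 * 60 / 20)   # 72 intervals/day
print(f"~{seu_per_bit_per_day:.2e} SEUs per bit per day")
```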
      • 02.1007 Liquid Shielding Christopher Heistand (Johns Hopkins Univ Applied Physics Lab (JHU APL)), Michelle Donegan (Johns Hopkins University Applied Physics Laborator), Jeffrey Boye (JHUAPL) Presentation: Christopher Heistand - Monday, March 4th, 09:50 PM - Madison
        Space provides an incredibly challenging environment for commercial electronics in several areas. Radiation can cause single event effects that can range from transients to destructive latchup and, in the extreme case, even cause total loss of the hardware. The thermal environment swings wildly in the vacuum of space, making it hard to keep a stable working temperature. Extreme vacuum causes outgassing or even significant degradation of many materials. Space launch causes shock and vibration forces that can cause serious mechanical damage to the hardware. Each of these problems requires some mitigation for COTS electronics to be flown in space. These factors place both physical and electrical limitations on the parts, such as using insulating substrates vs. semiconductor wafers, larger nanometer processes, and even specialized memory chips like MRAM. These factors provide technical challenges for designers, but rad-hard-by-design (RHBD) electronics also suffer from a niche market with small lot sizes and infrequent production runs. Between the technical cost and the inability to spread the non-recurring engineering costs over a large production line, space grade electronics are incredibly expensive, with long lead times, lower performance and a very small user base when compared with terrestrial products from Intel, NVIDIA, Raspberry Pi, etc. Combined with a risk averse posture and long duration missions, this means our processing capacity lags two orders of magnitude behind terrestrial computational capabilities. Liquid Shielding proposes using a non-conductive fluid surrounding the electronics to create a benign enough environment for non space-grade electronics to be flown. Using a pressure vessel, a bath can be formed around the electronics that shields from radiation, protects materials from vacuum, dampens shock and vibration, increases thermal mass and allows for unique radiation schemes. This concept focuses on shielding the electronics from Total Dose failure, non-clearable latchups and most mechanical failures, targeting almost uninterrupted uptime, but not immunity from upsets. As long as the part does not fail entirely and is only down temporarily, there are ways to solve the downtime with a redundancy scheme or other software/hardware systems that remain a fraction of the price of a single space grade processor. This paper first discusses the environment that the Liquid Shielding concept is trying to protect against. It then walks through two down-selections: shielding liquids and single board computers (SBC) for our prototype system. Given the liquid, we then identify how thick the shield needs to be. Next, we explain our mechanical design for the given liquid, depth requirement and SBC. Lastly, we explore the prototype and discuss the future testing campaign. Liquid Shielding aims to fix enough of the environmental issues that it allows non space-grade processors to be used in space. By doing so, costs come down dramatically (100x), processing power goes up equally (100x), and flight/instrument software can stop overly optimizing code and start leveraging the rest of the terrestrial world's software boom.
    • 02.11 Space Debris and Dust: The Environment, Risks, and Mitigation Concepts and Practices Kaushik Iyer (Johns Hopkins University/Applied Physics Laboratory) & Douglas Mehoke (Johns Hopkins University Applied Physics Laboratory (JHU/APL))
      • 02.1101 Drag-enhancing Deorbiting Devices for Mid-sized Spacecraft Self-disposal Jennifer Rhatigan (Naval Postgraduate School), Josep Virgili Llop (), Marcello Romano (Naval Postgraduate School), Katrina Alsup (), Keith Lobo (Naval Postgraduate School), Jessica Shapiro (Naval Postgraduate School), Bianca Lovdahl (Naval Postgraduate School), Farsai Anantachaisilp (Naval Postgraduate School) Presentation: Jennifer Rhatigan - Monday, March 4th, 08:30 AM - Gallatin
        The current state-of-the-art has established the feasibility of drag-devices for small spacecraft; however, the use of drag-devices for mid-sized spacecraft has not yet received the same level of attention. Here we explore the potential benefits and uses of drag devices on mid-sized spacecraft deployed in low Earth orbit (LEO). Methods to de-orbit LEO spacecraft have received increased study since the widespread adoption of orbital debris mitigation standards. These standards dictate that spacecraft shall be passivated and safely disposed of at their end-of-life (EOL). For spacecraft in LEO, an atmospheric disposal within 25 years of EOL is seen as the most cost-effective method. The residual atmosphere present at orbital altitudes causes orbital decay that, at sufficiently low operating altitudes, will naturally de-orbit a spacecraft within the prescribed 25-year period. At higher altitudes, the natural decay can be insufficient, and for spacecraft with propulsion the current practice is to reserve fuel for a final de-orbit burn. A drag-enhancing de-orbiting device may be used to increase the spacecraft's cross-sectional area, increasing the aerodynamic drag and shortening its natural decay below the 25-year limit. Here, we explore the trade-space resulting from the variation of the necessary cross-section area increase required for different starting altitudes, in combination with realistic models of a small (50 kg), mid-size (400 kg), and large (2000 kg) spacecraft. The preliminary results of this analysis suggest that drag-devices are well suited for small and mid-sized spacecraft, as only moderate increases in cross-section area, on the order of ten square meters, are required to reduce the post-EOL orbital lifetime below 25 years. Indeed, the offset of fuel reserved for de-orbit of mid-size spacecraft can be significant, establishing a primary area of study within our trade-space (mid-size spacecraft with propulsion). The required increases in cross-sectional area for large spacecraft, on the order of hundreds of square meters, suggest that drag-devices for large spacecraft remain impractical given the state-of-the-art. In order to establish feasibility, we present a conceptual design of a drag-device for a mid-sized spacecraft with propulsion and suggest possible scenarios for its use. Based on this concept, it appears that given their low mass and simplicity, these drag-devices are an attractive alternative to the classical propulsive de-orbit. Some of the drag-device advantages are straightforward (e.g., lower mass), yet other benefits are derived from the drag-device concept of operations (e.g., extended operational lifetime), or ease of integration (e.g., storable and non-hazardous). The use of a drag-device to augment a classical propulsive de-orbit burn is also explored here, showing that this hybrid approach can extend the use of drag-enhancing de-orbiting devices to significantly higher altitudes while still retaining some of its attractive benefits. Finally, a comprehensive analysis of advantages and disadvantages of drag-devices is presented.
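        To get a rough feel for why tens of square meters suffice at mid-size, the sketch below integrates the standard circular-orbit decay rate da/dt = -sqrt(mu*a)*rho*(Cd*A/m) with a crude exponential atmosphere. The atmosphere model, 700 km starting altitude, body area, and drag coefficient are illustrative assumptions rather than values from the paper; only the 400 kg mass corresponds to the abstract's mid-size case.

```python
# Rough circular-orbit decay-time estimate to illustrate the area trade the
# authors describe. The exponential atmosphere and spacecraft numbers are
# illustrative assumptions, not values from the paper.
import numpy as np

MU_E = 3.986e14          # m^3/s^2
R_E  = 6378e3            # m

def density(h_m):
    # crude exponential fit near 500-700 km; solar-activity variation ignored
    return 3e-12 * np.exp(-(h_m - 500e3) / 65e3)   # kg/m^3

def decay_years(h0_km, mass_kg, area_m2, cd=2.2, h_end_km=150.0, dt=86400.0):
    a = R_E + h0_km * 1e3
    t = 0.0
    while a - R_E > h_end_km * 1e3:
        rho = density(a - R_E)
        a += -np.sqrt(MU_E * a) * rho * (cd * area_m2 / mass_kg) * dt  # da/dt * dt
        t += dt
    return t / (365.25 * 86400.0)

for area in (1.0, 10.0):   # bare body area vs. body plus drag device (assumed)
    print(f"400 kg sat, {area:>4.0f} m^2: ~{decay_years(700.0, 400.0, area):.1f} yr")
```

        Under these assumptions, the bare square-meter-class spacecraft takes many decades to decay, while the ten-square-meter case comfortably clears the 25-year guideline, which is the qualitative trade described above.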
      • 02.1102 Regolith Particle Erosion of Material in Aerospace Environments Emma Bradford (JPL), Jason Rabinovitch (Jet Propulsion Laboratory, California Institute of Technology), Mohamed Abid (Jet Propulsion Laboratory) Presentation: Emma Bradford - Monday, March 4th, 08:55 AM - Gallatin
        This paper contains results for thermal control S13GP:6N/LO-I white paint, Kapton flex cable, fiber optic cable, HEPA filter, and M55J graphite composite, when exposed to high-velocity regolith. It is understood, based on Mars Science Laboratory data, that when landing on Mars, the Mars 2020 rover is exposed to this extreme environment. This environment was replicated to test the survivability of susceptible materials. The testing was performed at the University of Dayton Research Institute in Dayton, Ohio. Experiment parameters consisted of exposing materials to basalt-like particles ranging in size from approximately 40 μm to 2 cm, at velocities ranging from 19 m/s to 250 m/s, and with varied particle flux (measured in mg/cm^2). Depending on the size, the particles can embed in or erode the material. The post-test analysis suggests that all materials will survive the environment observed during landing; however, some materials were tested to failure in order to better characterize behavior. Materials that failed in some test scenarios include the paint, fiber optic cable, and the graphite composite. Exposure to the regolith increased the paint's α/ε ratio by approximately 37% when particles embedded. Darkening of the paint can negatively affect thermal control of the rover; however, analysis of the observed darkening suggests no significant increase in temperature. At high particle fluxes, the paint eventually degraded enough to expose the aluminum substrate. A 1.5 cm particle traveling at 20 m/s did not sever the fiber optic cable, but the impact did deform the cable enough to crack the glass, which resulted in a significant increase in attenuation, rendering the cable unable to transmit data. The graphite composite failed when exposed to high particle fluxes. All of the observed failures occurred at test parameters above the requirements, and the materials are not expected to fail during landing. Tests performed beyond the requirements will help the Jet Propulsion Laboratory characterize how well these materials will survive in even more extreme environments for future missions.
      • 02.1104 Feasibility Study on a PCL Radar for Space Debris Detection Shota Ochi (), Makoto Tanaka (Tokai University) Presentation: Shota Ochi - Monday, March 4th, 09:20 AM - Gallatin
        Space debris is a growing problem for outer space activities. More than 19,000 pieces of debris are tracked by ground radars. At present, the space surveillance networks of the U.S.A. and Russia track most space objects. In order to ensure the long-term sustainability of spaceflight and space activities, it is necessary to improve systems for space debris detection. Currently, the space surveillance network consists of ground-based active radars and passive telescopes. These systems are used to detect and track space debris. However, both of them are very expensive, so it is difficult to build radar facilities at arbitrary places on the Earth. We consider a Passive Coherent Location (PCL) radar to be one solution to this issue. A PCL radar is a low-cost and compact radar system because it uses 'illuminators of opportunity' as its source of radar transmission instead of a dedicated transmitter. Many researchers have investigated PCL radar systems, and the basic principle is established. In this paper, we propose a PCL radar for space debris detection using radio waves transmitted from Earth-orbiting satellites. In a PCL radar system, various kinds of radio sources such as FM radio broadcasting and television broadcasting are available as illuminators of opportunity. In particular, a radio wave transmitted from an Earth-orbiting satellite is suitable for space debris detection because space debris in Low Earth Orbit (LEO) is reliably irradiated by RF waves from satellites. We calculated the received power for space debris detection using a PCL radar. As an example, we assumed a case study of space debris with a diameter of about 10 m at around 400 km altitude. This target has a Radar Cross Section (RCS) of 20 dBsm. A radio wave transmitted from Inmarsat-4 at 35786 km altitude was assumed as the 'illuminator of opportunity'. The transmitted radio power of Inmarsat-4 was 97 dBm in equivalent isotropic radiated power (EIRP). We estimated that the received power from the space debris on the ground was -168 dBm using a receiving antenna with an effective area of 1 square meter. From the viewpoint of the technology level of receiver equipment and signal processing, weak radio waves (-168 dBm) reflected from space debris are detectable. As a result of this research, the proposed PCL radar system for space debris detection can be realized. In this paper, we will also show preliminary experimental results of detecting an aircraft (RCS: 20 dBsm) located 35 km away from a receiver using FM radio broadcasting (80.0 MHz, 81 dBm in EIRP).
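        The quoted -168 dBm figure can be reproduced with the standard bistatic radar equation; the only geometric assumption added here (not stated explicitly in the abstract) is that the debris lies roughly on the line between Inmarsat-4 and the ground receiver, so the transmitter-to-debris range is about 35786 km minus 400 km.

```python
# Reproducing the quoted link budget with the bistatic radar equation
# P_r = EIRP * sigma * A_eff / ((4*pi)^2 * R_t^2 * R_r^2).
import math

eirp_w = 10 ** ((97.0 - 30.0) / 10.0)   # 97 dBm EIRP -> watts
sigma  = 10 ** (20.0 / 10.0)            # 20 dBsm RCS -> 100 m^2
a_eff  = 1.0                            # receive antenna effective area, m^2
r_tx   = 35786e3 - 400e3                # transmitter-to-debris path, m (assumed geometry)
r_rx   = 400e3                          # debris-to-receiver path, m

p_r = eirp_w * sigma * a_eff / ((4 * math.pi) ** 2 * r_tx ** 2 * r_rx ** 2)
print(f"received power ~{10 * math.log10(p_r / 1e-3):.0f} dBm")   # ~ -168 dBm
```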
      • 02.1107 Design and Analysis of a Passive Tether De-Orbiting Mechanism for a Nano-Satellite Avish Gupta (Manipal University), Varun Thakurta (Manipal University), Dhananjay Sahoo (Manipal Institute of Technology), Anirudh Kailaje (Manipal University) Presentation: Avish Gupta - Monday, March 4th, 09:45 AM - Gallatin
        This paper aims to characterise the forces exerted upon a 2U-class nano-satellite by a passive electrodynamic tether system used as an inexpensive, space- and mass-efficient de-orbiting mechanism in Low Earth Orbit (LEO). As stated in the IADC guidelines, an object in low Earth orbit should not have an orbital lifetime exceeding 25 years, a limit that satellites can meet on their own where atmospheric drag is significant. The de-orbiting system involves a spool with an electrically conducting thread wound around it. It is ejected using a spring-loaded mechanism, with the other end of the thread secured to the satellite body. The shape of the spool and the type of winding have been optimised to ensure maximum packing efficiency and a stable ejection. This has resulted in a tether length of around 300 meters for the considered system. The mass of the spool with the tether wound upon it is about 80 grams (the tether itself being 32 grams), which makes it a suitable solution for small satellites with stringent mass constraints. Study of the forces acting on the satellite body during deployment is essential to avoid destabilisation of the satellite, which would result in the winding of the tether about the satellite body. Excessive tension on the tether during the deployment phase might result in a failure of the thread material. The forces acting on the system during the deployment phase have been analysed, and methods to reduce and dampen these destructive effects are discussed in detail. Once deployed, various forces act on the satellite system due to the interaction of the conductive thread with the time-varying magnetic field and the ambient plasma. Being a passive tether system, it experiences a change in the direction of the magnetic field while crossing over the poles, so that the force is applied in the same retarding sense in different parts of the orbit. This force, always opposing the motion of the satellite, increases the eccentricity of the orbit, causing the satellite to burn up at perigee in under 2 years as opposed to 25. A detailed study of these forces and their effects on the satellite's orbit has been carried out in this paper.
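        For orientation only, the sketch below applies the textbook relations for a passive electrodynamic tether, emf = B*v*L and F = B*I*L, using the 300 m length quoted above. The field strength, orbital velocity, and total circuit resistance are assumed round numbers rather than the paper's parameters, and current-collection physics is ignored entirely.

```python
# Back-of-the-envelope electrodynamic drag for a short passive tether.
# All values except the 300 m length are illustrative assumptions.
B = 3e-5        # T, typical LEO geomagnetic field magnitude (assumed)
v = 7.6e3       # m/s, orbital velocity (assumed)
L = 300.0       # m, tether length (from the abstract)
R_total = 1e4   # ohm, assumed tether plus plasma-contact resistance

emf = B * v * L          # motional EMF across the tether, volts
I = emf / R_total        # current, amps (ignores current-collection limits)
F = B * I * L            # Lorentz drag force, newtons
print(f"EMF ~{emf:.1f} V, I ~{I*1e3:.1f} mA, drag force ~{F*1e6:.1f} uN")
```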
      • 02.1108 Modeling Hypervelocity Impact Temperatures for Europa Clipper Planetary Protection Anthony Mark (Johns Hopkins Applied Physics Laboratory), Kaushik Iyer (Johns Hopkins University/Applied Physics Laboratory), Douglas Mehoke (Johns Hopkins University Applied Physics Laboratory (JHU/APL)), Wayne Dellinger (JHU-APL), Hayden Burgoyne (), Michael Di Nicola (Jet Propulsion Laboratory), Kelli Mc Coy (NASA Jet Propulsion Lab), Ethan Post () Presentation: Anthony Mark - Monday, March 4th, 10:10 AM - Gallatin
        As one of the means of meeting the NASA Planetary Protection Requirement that the probability of inadvertent contamination of an ocean or liquid body be less than 10^-4 per mission, the Europa Clipper Project was required to assess the probability that any spacecraft-borne organism survives a high velocity impact (HVI) with an Icy Body (Europa, Ganymede and Callisto) (DiNicola et al., IEEE 2018; Burgoyne et al., IEEE 2019). A similar assessment was required for the Juno orbiter, but its potential impact speeds were sufficiently high, ~20 km/s, such that shock vaporization, and thus sterility, of all spacecraft components was expected in an HVI scenario. With the Europa Clipper Mission, however, the scenarios warranting consideration have significantly lower impact speeds and shock loadings, making it necessary to assess impact-induced heating mechanisms of materials when phase changes (melting and vaporization) are improbable. In particular, the project evaluated the temperature rise generated from extreme-rate, HVI-induced plastic deformation of large, metallic spacecraft sub-systems, as well as the cooling of deformed structures in an ambient environment consisting of an icy planetary surface with a tenuous atmosphere, as observed on Europa, Ganymede, and Callisto. Using such cooling curves with available models for biological survival at high temperatures, the Project could then calculate probabilities of planetary contamination following an inadvertent impact. Calculating temperature rise in metals undergoing rapid plastic compression is a specialized area within high-rate mechanical sciences, and has mostly been achieved through a combination of idealized theoretical analyses and Split Hopkinson Pressure Bar laboratory compression tests using relatively simple cylindrical samples subjected to compressive strain rates 2-3 orders of magnitude lower than those required for the Clipper Project. Computing thermal cooling is a second, separate analysis requiring an initial temperature (field), geometry, and boundary conditions. This paper discusses, in broad terms, the approach developed by the Clipper team for obtaining the requisite “crash, crush and cool” time-temperature profiles for complex, spacecraft metallic structures and HVI strain rates, which, to the best of our knowledge, has never been done before. The necessary in-depth consideration of the post-HVI impactor shape is an additionally unique complement to most analyses performed by the HVI community, which focus exclusively on substrate (planetary surface) morphology. We review some key physics and mechanics-based modeling concepts, such as the transition from a transient shock pressure to an approximately steady-state mechanical crushing pressure, and 1-dimensional theoretical estimates for plastic deformation heating, in order to formulate the technical problem. We then discuss our approach, including special considerations required for modeling high aspect ratio, and sometimes expansive, spacecraft structures (both metallic and composite); integration and use of available shock physics, Design of Experiments (DOE), and engineering analysis and design tools for obtaining solutions; material models for metals and ice with varying porosity; and treatment of impact orientation, impact obliquity and other analytical assumptions. The need for large-scale parallel computing capability to obtain results in realistic time frames is also discussed. Finally, we present some representative results and suggest areas for future research required to reduce analytical uncertainty.
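        The 1-dimensional plastic-heating estimate mentioned above is commonly written as an adiabatic temperature rise, dT = beta*sigma*eps/(rho*c_p), with a Taylor-Quinney coefficient beta of roughly 0.9. The sketch below evaluates it with generic aluminum-alloy numbers chosen purely for illustration; they are not Clipper analysis inputs.

```python
# One-dimensional adiabatic heating estimate for rapid plastic compression.
# Material values are generic aluminum-alloy numbers used only for illustration.
beta       = 0.9      # fraction of plastic work converted to heat (Taylor-Quinney)
sigma_flow = 400e6    # Pa, assumed high-rate flow stress
eps_p      = 1.0      # assumed equivalent plastic strain
rho        = 2700.0   # kg/m^3
c_p        = 900.0    # J/(kg*K)

dT = beta * sigma_flow * eps_p / (rho * c_p)
print(f"adiabatic temperature rise ~{dT:.0f} K")   # on the order of 10^2 K
```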
      • 02.1110 Autonomous Active Space Debris-removal System Shriya Kaur Chawla (SRM institute of science and technology), Vinayak Malhotra (SRM University) Presentation: Shriya Kaur Chawla - Monday, March 4th, 10:35 AM - Gallatin
        Space exploration has seen an exponential rise in the past two decades. The world has started probing alternatives for efficient and resourceful sustenance, along with the utilization of advanced technology, viz. satellites, on Earth. Space propulsion forms the core of space exploration. Of all the issues encountered, space debris has increasingly threatened space exploration and propulsion. These efforts have resulted in the presence of disastrous space debris fragments orbiting the Earth at speeds of several kilometers per second. Such debris is universally projected as a potential source of damage to future missions, with loss of resources and human life, as huge amounts of money are invested in them. Appreciable work has been done in the past on active space debris-removal technologies such as harpoons, nets, and drag sails, with the primary emphasis laid on confined removal. Recently, the RemoveDEBRIS spacecraft was used to demonstrate debris capture in orbit. Airbus designed and planned the debris-catching net experiment aboard the spacecraft, which represents the largest payload deployed from the space station. However, the magnitude of the issue suggests that active space debris-removal technologies, such as harpoons and nets, still would not be enough, necessitating a better and more operative space debris removal system. Techniques based on diverting the path of debris or of the spacecraft to avert damage have found minimal use owing to limited predictions. The present work focuses on an active hybrid space debris removal system, motivated by the need for safer and more efficient space missions. The specific objective of the work is to thoroughly analyze the existing and conventional debris removal techniques, their working, effectiveness and limitations; a novel active space debris removal system is then proposed. A secondary objective is to understand the role of key controlling parameters in the coupled operation of debris capture and removal. The system represents the utilization of the latest autonomous technology available, with an adaptable structural design for operations under varying conditions. The design covers the advantages of most of the existing technologies while removing their disadvantages. The system is likely to enhance the probability of effective space debris removal.
      • 02.1111 Multiple Debris Orbital Collision Avoidance Ahmed Hamed (NARSS), Ahmed Badawy (October University for Modern Sciences and Arts), Adel Omar (Military Technical College, Cairo Egypt), Mahmoud Ashry (), Wessam Hussein (MTC) Presentation: Ahmed Hamed - Monday, March 4th, 11:00 AM - Gallatin
        Recently, mission safety has become an important concern because of the exponential increase in space objects crossing or accompanying the orbit. In such a situation the risk grows as these controlled and uncontrolled objects increase in number. Therefore, mission control centers depend on organizations such as the Joint Space Operations Center, using the information they supply to schedule a smart plan that minimizes the orbital risk. This paper proposes a new use of a technique from the field of safe satellite trajectories, usually applied in docking missions, known as the artificial potential field. The satellite's surroundings are represented by an artificial field in which counterpart objects are represented by repulsive potentials and the future predicted path by an attractive field. The prospective planned maneuver therefore considers all surrounding objects, each with its own probability, within the same algorithm. Without loss of generality, the proposed method is then applied to a real case between the Chinese “cz_4” and the United States “DMSP 5D-2 F7” satellites, and the results are shown before and after applying the algorithm, with calculations of the velocity required to escape the risk situation and maintain the necessary orbital parameters. Finally, a comparative study is implemented to determine the effectiveness of the proposed method compared to the well-known Hohmann maneuver.
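        The abstract does not give its potential functions, so the snippet below is only a generic artificial-potential-field step of the classic Khatib form: a quadratic attractive well toward the next planned path point, plus repulsion from each nearby object weighted here by an assumed collision probability. The gains, distances, and probability weighting are all illustrative and may differ from the paper's formulation.

```python
# Toy artificial-potential-field step: attractive pull toward the planned path
# point, repulsive push away from each debris object inside an influence radius.
import numpy as np

def apf_velocity(x, x_goal, obstacles, k_att=1.0, k_rep=1e14, d0=10e3):
    """x, x_goal: 3-vectors [m]; obstacles: list of (position, probability)."""
    v = -k_att * (x - x_goal)                      # gradient of attractive field
    for x_obs, p in obstacles:
        d = np.linalg.norm(x - x_obs)
        if d < d0:                                 # repulsion only inside d0
            v += k_rep * p * (1.0 / d - 1.0 / d0) * (x - x_obs) / d**3
    return v

x      = np.array([0.0, 0.0, 0.0])
x_goal = np.array([5e3, 0.0, 0.0])
debris = [(np.array([2e3, 200.0, 0.0]), 0.3)]      # (position, collision prob.)
print(apf_velocity(x, x_goal, debris))             # commanded velocity direction
```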
    • 02.12 Asteroid Detection, Characterization, Sample-Return, and Deflection Paul Chodas (Jet Propulsion Laboratory) & Jeffery Webster (Jet Propulsion Laboratory)
      • 02.1204 Concurrent Redirection and Attitude Control of an Asteroid M. Reza Emami (Luleå University of Technology), Michael C. F. Bazzocchi (University of Toronto) Presentation: M. Reza Emami - Thursday, March 7th, 08:30 AM - Amphitheatre
        As private and governmental organizations begin to gaze towards deep space, the opportunities presented by near-Earth asteroids are increasingly coming to the forefront of scientific investigation. The near-Earth asteroid population provides many targets for expanding knowledge of the solar system, while synergistically acting as stepping stones for exploration. The volatile resources present on asteroids are particularly valuable for their applications to life support and space propulsion systems. Moreover, asteroid materials can be used to create radiation shielding and, if processed, in-situ materials for construction. The essential next step to realizing the benefits of asteroids is to capture and bring an asteroid from its initial orbit to an orbit in the Earth-Moon system. Transferring an asteroid, or a small boulder from an asteroid, into an Earth-bound orbit makes it easily accessible for more in-depth studies of its composition and structure, as well as presents opportunities for private industry to leverage its materials. However, the challenges of the controlled redirection of an uncooperative, tumbling body are significant, and they are therefore the subject of research in this work. The work considers a target near-Earth asteroid within the Arjuna domain with a monolithic composition. The asteroid's geometric, orbital, and inertial properties are estimated from available observational data. A spacecraft with a low-thrust propulsion system is attached to the asteroid surface, and is employed for the task of both detumbling and redirecting the asteroid from its initial orbit to rendezvous with Earth. The first step in the transfer maneuver is to detumble the asteroid using the low-thrust system, and to reduce the angular velocity of the asteroid-spacecraft system to a bounded region around zero. The next step is to perform the redirection task, which requires concurrent attitude control and redirection thrusting. The thrusters reorient the asteroid-spacecraft system, such that the spacecraft's redirection thrusters are aligned with the desired redirection vector. The direction and magnitude of the redirection force are determined using a three-dimensional shape-based transfer trajectory design method. A robust nonlinear control strategy is employed for the task of ensuring the asteroid-spacecraft system is in the desired attitude to exert the redirection thrust and that the system follows the desired trajectory. The performance of the control strategy will be assessed for both the nominal case and with consideration of disturbances such as solar radiation pressure and gravitational perturbations. In addition, uncertainty in the asteroid and spacecraft models will be considered in the dynamics and control of the system.
      • 02.1205 Projecting Asteroid Impact Corridors onto the Earth Clemens Rumpf (MCT/NASA Ames Research Center), Donovan Mathias (MCT/NASA Ames Research Center), Davide Farnocchia (Jet Propulsion Laboratory), Steven Chesley (Jet Propulsion Laboratory) Presentation: Clemens Rumpf - Thursday, March 7th, 08:55 AM - Amphitheatre
        Asteroids that collide with the Earth have the potential to deliver widespread destruction to human civilization. The Probabilistic Asteroid Impact Risk (PAIR) software tool is currently being used to model the consequences associated with this natural hazard. Given that initial orbital solutions for a threatening asteroid are subject to uncertainties, the projected impact locations can cover a large area on the Earth’s surface, called the impact corridor. The impact corridor not only provides information about the extent and general shape of the exposed area, but also represents a spatial projection of the impact probability distribution on the surface of the Earth. Spatial impact probability information is crucial for asteroid risk assessment as risk is the product of impact consequences and impact probability. While PAIR is currently capable of estimating the impact consequences for a given impact scenario, including a calculation of the impact corridor will crucially extend its utility in asteroid risk analysis. Here, we present an approach to derive the impact corridor from an initial orbital solution that provides a position and velocity state vector in cartesian coordinates with a corresponding covariance matrix. Using a Monte Carlo method, a finite set of impact points is generated by sampling the orbital solution state space and propagating the sampled orbits to the ground. The set of impact locations is subsequently used to construct the impact corridor in continuous form on the ground. An analytical or numerical approach may be used to construct the impact corridor from a set of impact points. The analytical approach requires fitting of a probability density function to the impact location set to achieve the goal of determining a spatial impact probability distribution. On the other hand, the numerical approach estimates the spatial impact location density at each position in the affected area and scales this information to represent impact probability. The two schemes are compared for their practical applicability within the PAIR tool. The capability to calculate probabilistic impact corridors from orbital data within the PAIR model will enable it to support additional asteroid risk analysis applications, such as risk-informed design for mitigation missions or evaluation of civil response strategies.
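        As a concrete picture of the Monte Carlo step described above, the sketch below draws state samples from a placeholder covariance, pushes each through a stand-in propagator, and bins the resulting impact points into a 2-D histogram as a simple numerical estimate of the spatial impact probability. The nominal state, covariance, and the mapping to latitude/longitude are entirely fictitious; a real analysis would propagate each sample with a high-fidelity N-body propagator as the authors describe.

```python
# Minimal Monte Carlo corridor sketch: sample the orbital-solution covariance,
# propagate each sample to the ground (stub), and histogram the impact points.
import numpy as np

rng = np.random.default_rng(0)

x_nom = np.zeros(6)                                   # nominal state (placeholder)
cov   = np.diag([1e6, 1e6, 1e6, 1e-2, 1e-2, 1e-2])    # placeholder covariance

def propagate_to_ground(state):
    """Placeholder: return impact (lat, lon) in degrees, or None for a miss."""
    lat = 10.0 + 0.5 * state[1] / 1e3                 # entirely fictitious mapping
    lon = -30.0 + 2.0 * state[0] / 1e3
    return (lat, lon)

samples = rng.multivariate_normal(x_nom, cov, size=10_000)
impacts = np.array([p for p in map(propagate_to_ground, samples) if p is not None])

# 2-D histogram as a simple numerical estimate of the spatial impact probability
H, lat_edges, lon_edges = np.histogram2d(impacts[:, 0], impacts[:, 1], bins=50)
prob_per_cell = H / H.sum()
print(prob_per_cell.max())
```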
      • 02.1206 The Pan-STARRS Data Archive — a Treasure Trove of Moving Object Observations Richard Wainscoat (University of Hawaii), Robert Weryk (University of Hawaii) Presentation: Richard Wainscoat - Thursday, March 7th, 09:20 AM - Amphitheatre
        Pan-STARRS1 has been observing the night sky since 2010, and has amassed an archive of over 1.1 million science images comprising approximately 2.5 Petabytes of raw data. The vast majority of Pan-STARRS observations were carried out in a manner well-suited to the detection of moving objects: most observations were taken as a sequence of four observations spaced over approximately one hour. Most of the remainder were taken as pairs. Due to the nature of the detectors, and the very large area of the focal plane, it has been difficult to find all moving objects, and a large number of moving object detections remain unreported from these 8+ years of surveying the sky north of –50º declination. The Pan-STARRS system and cameras will be explained. Examples of how archival observations have improved orbits of important Near-Earth Objects will be provided. The performance of the current CCDs will be described, and a proposed design for a replacement camera with larger, better performing CCDs will be presented.
      • 02.1207 Solar Sails for Planetary Defense and High-Energy Missions Jan Thimo Grundmann (DLR German Aerospace Center), Waldemar Bauer (German Aerospace Center - DLR), Jens Biele (DLR), Ralf Boden (The University of Tokyo), Kai Borchers (German Aerospace Center - DLR), Matteo Ceriotti (University of Glasgow), Federico Cordero (Telespazio-Vega Deutschland GmbH), Bernd Dachwald (FH Aachen University of Applied Sciences), Etienne Dumont (German Aerospace Center - DLR), Christian Grimm (German Aerospace Center - DLR), David Hercik (TU-Braunschweig), Tra Mi Ho (German Aerospace Center - DLR), Rico Jahnke (), Aaron Koch (DLR e. V.), Caroline Lange (German Aerospace Center - DLR), Roy Lichtenheldt (German Aerospace Center - DLR), Volker Maiwald (German Aerospace Center (DLR)), Colin Mc Innes (University of Glasgow, School of Engineering), Jan Gerd Meß (German Aerospace Center - DLR), Tobias Mikschl (Uni Wuerzburg), Eugen Mikulz (), Sergio Montenegro (University Würzburg), Ivanka Pelivan (), Alessandro Peloni (University of Glasgow), Dominik Quantius (German Aerospace Center - DLR), Siebo Reershemius (German Aerospace Center - DLR), Thomas Renger (German Aerospace Center - DLR), Johannes Riemann (), Michael Ruffer (), Kaname Sasaki (German Aerospace Center - DLR), Nicole Schmitz (German Aerospace Center - DLR), Wolfgang Seboldt (), Patric Seefeldt (German Aerospace Center - DLR), Peter Spietz (DLR German Aerospace), Tom Sproewitz (German Aerospace Center), Maciej Sznajder (German Aerospace Center - DLR), Simon Tardivel (CNES), Norbert Toth (German Aerospace Center - DLR), Elisabet Wejmo (German Aerospace Center - DLR), Friederike Wolff (German Aerospace Center - DLR), Christian Ziach (German Aerospace Center (DLR e.V.)) Presentation: Jan Thimo Grundmann - Thursday, March 7th, 09:45 AM - Amphitheatre
        20 years after the successful ground deployment test of a (20 m)² solar sail at DLR Cologne, and in the light of the upcoming U.S. NEAscout mission, we provide an overview of the progress made since then in our mission and hardware design studies as well as the hardware built in the course of our solar sail technology development. We outline the most likely and most efficient routes to develop solar sails for useful missions in science and applications, based on our developed 'now-term' and near-term hardware as well as the many practical and managerial lessons learned from the DLR-ESTEC Gossamer Roadmap. Mission types directly applicable to planetary defense include single and Multiple NEA Rendezvous ((M)NR) for precursor, monitoring and follow-up scenarios as well as sail-propelled head-on retrograde kinetic impactors (RKI) for mitigation. Other mission types such as the Displaced L1 (DL1) space weather advance warning and monitoring or Solar Polar Orbiter (SPO) types demonstrate the capability of near-term solar sails to achieve asteroid rendezvous in any kind of orbit, from Earth-coorbital to extremely inclined and even retrograde orbits. Some of these mission types such as SPO, (M)NR and RKI include separable payloads. For one-way access to the asteroid surface, nanolanders like MASCOT are an ideal match for solar sails in micro-spacecraft format, i.e. in launch configurations compatible with ESPA and ASAP secondary payload platforms. Larger landers similar to the JAXA-DLR study of a Jupiter Trojan asteroid lander for the OKEANOS mission can shuttle from the sail to the asteroids visited and enable multiple NEA sample-return missions. The high impact velocities and re-try capability achieved by the RKI mission type, on a final orbit identical to the target asteroid's but retrograde to its motion, enable small-spacecraft-sized impactors to carry sufficient kinetic energy for deflection.
      • 02.1208 Development of a Realistic Set of Synthetic Earth Impactor Orbits Steven Chesley (Jet Propulsion Laboratory), Giovanni Valsecchi (INAF), Siegfried Eggl (University of Washington), Mikael Granvik (University of Helsinki), Davide Farnocchia (Jet Propulsion Laboratory), Robert Jedicke (University of Hawaii) Presentation: Steven Chesley - Thursday, March 7th, 10:10 AM - Amphitheatre
        We present a refined method for creating orbits of fictitious Earth impactors that are representative of the actual impactor population. Such orbits are crucial inputs to a variety of investigations, such as those that seek to discern how well and how early a particular asteroid survey can detect impactors, or to understand the progression of impact probability as an object is tracked after discovery. We will describe our method, which relies on Opik's b-plane formalism, and place it in context with previous approaches. While the Opik framework assumes the restricted three body problem with a circular Earth orbit, our final synthetic impactors are differentially corrected to ensure an impact in the N-body problem of the solar system. We also test the validity of the approach through brute force numerical tests, demonstrating that the properties of our synthetic impactor population are consistent with the underlying Near-Earth Object (NEO) population from which it is derived. The impactor population is, however, distinct from the NEO population, not only by virtue of the proximity of the asteroid orbit to that of the Earth, but also because low encounter velocities are strongly favored. Thus the impacting population has an increased prominence of low inclination and low eccentricity orbits, and Earth-like orbits in particular, as compared to the NEO population as a whole.
      • 02.1209 Characterization of Asteroids Using Nanospacecraft Flybys and Simultaneous Localization and Mapping Mihkel Pajusalu (Massachusetts Institute of Technology), Andris Slavinskis (Tartu Observatory/NASA Ames Research Center) Presentation: Mihkel Pajusalu - Thursday, March 7th, 10:35 AM - Amphitheatre
        Nanospacecraft could enable detailed characterization of a large sample set of asteroids in a small timeframe by launching a large number of spacecraft at the same time to a large set of targets. An example of this is the Multi-Asteroid Touring (MAT) mission concept (Slavinskis et al, “Nanospacecraft fleet for multi-asteroid touring with electric solar wind sails”, 2018 IEEE Aerospace Conference). To be able to characterize as large a set of asteroids as possible, however, visits to individual asteroids would be limited to fly-bys, which would have to be autonomously controlled by the spacecraft themselves. Current nanospacecraft propulsion systems, and deep space spacecraft propulsion systems in general, allow rendezvous with, and establishment of an orbit around, only a few targets per mission. As an additional challenge, due to the limitations of nanospacecraft and a massively parallel architecture, Earth-based localization and communication infrastructure, such as the Deep Space Network, cannot be relied on. We have developed an optical instrument prototype and an image simulation system for autonomous asteroid fly-bys and 3D multispectral mapping using nanospacecraft. The final system is targeted to fit into a single unit of CubeSat (a 10 cm cube) and to have a mass of less than 1 kg. The design is optimized for high-resolution shape model reconstruction and surface mapping in multiple spectral bands from the visual range to the mid-infrared and uses a compact reflective telescope design. The image simulation system is based on the Blender open-source 3D graphics software package to allow state-of-the-art physics-based image rendering and is used to simulate the images generated by fly-bys, based on the mission profile and instrument specifications. The images are used for simultaneous localization and mapping (SLAM) algorithms to determine the quality of the multispectral 3D reconstruction to be expected from this mission. In the case of the MAT mission, the fly-bys would be performed at a large distance (100-1000 km closest approach), but in the case of closer and slower fly-bys, the SLAM algorithm can also be used to produce a trajectory estimate detailed enough for probing the mass of the asteroid and the shape of the asteroid gravity field, possibly allowing a low-resolution internal structure model to be determined, in addition to a 3D spectral map of the side of the asteroid illuminated by the Sun during the fly-by and a rough shape model of the non-illuminated part (based on occultation). We will present the instrument design, the simulation system, and the simultaneous localization and mapping system developed for nanospacecraft asteroid fly-bys.
      • 02.1210 Double Asteroid Redirection Test (DART): The Earth Strikes Back. Elena Adams (Johns Hopkins University/Applied Physics Laboratory), Daniel O’shaughnessy (Johns Hopkins University/Applied Physics Laboratory), Matthew Reinhart (Johns Hopkins University/Applied Physics Laboratory), Jeremy John (Johns Hopkins University/Applied Physics Laboratory), Elizabeth Congdon (The Johns Hopkins University Applied Physics Laboratory), Daniel Gallagher (JHU Applied Physics Laboratory), Elisabeth Abel (Johns Hopkins University/Applied Physics Laboratory), Justin Atchison (Johns Hopkins University Applied Physics Laboratory), Zachary Fletcher (), Michelle Chen (Johns Hopkins University/Applied Physics Laboratory), Christopher Heistand (Johns Hopkins Univ Applied Physics Lab (JHU APL)), Evan Smith (Johns Hopkins University/Applied Physics Laboratory), Philip Huang (johns hopkins univieristy applied phyics laboratory), Deane Sibol (Johns Hopkins University/Applied Physics Laboratory), Dmitriy Bekker (Johns Hopkins Applied Physics Laboratory), David Carrelli (Johns Hopkins University/Applied Physics Laboratory) Presentation: Elena Adams - Thursday, March 7th, 11:00 AM - Amphitheatre
        The NASA Double Asteroid Redirection Test (DART) is a technology demonstration mission that will test the kinetic impactor technique on a binary Near Earth Asteroid system, Didymos. Didymos is an ideal target, since the 780 m primary, Didymos A, is well characterized, and the 163 m secondary, Didymos B, is sufficiently small to allow measurement of the kinetic deflection. Didymos also represents the population of Near Earth Objects that are the asteroids most likely to pose a near-term threat to Earth. Scheduled to launch in June 2021, the DART spacecraft will autonomously intercept Didymos B in October 2022, altering the orbit period of Didymos B with respect to Didymos A. The impact will occur when the Earth-Didymos range is minimal, allowing observation by Earth-based optical and radio telescopes. The spacecraft will be guided to the impact by its on-board autonomous real-time system SMART Nav. In addition, DART is potentially carrying a 6U CubeSat provided by Agenzia Spaziale Italiana (ASI). The CubeSat will provide photo documentation of the impact, as well as in situ observation of the impact site and resultant ejecta plume. The mission is currently in Phase C, with mission CDR planned for summer 2019. It is a substantial challenge to navigate the DART spacecraft to a hypervelocity impact with the Didymos secondary. The DART spacecraft will carry an ion propulsion system with the NEXT-C engine, which provides substantial flexibility in trajectory design to achieve the desired Didymos arrival conditions. The trajectory was designed such that the arrival at Didymos maximizes the asteroid deflection with a relative velocity of 6 km/s, while maintaining a proximity to Earth that allows both observation of the impact and high-gain communication for sufficient imagery of the target on approach. Additionally, NEXT-C allows the mission an opportunity to fly by another asteroid seven months prior to Didymos impact, affording an in-flight opportunity to characterize SMART Nav performance before it is needed. To operate the NEXT-C engine, the DART spacecraft carries 20 m^2 solar arrays to generate the necessary ~3.5 kW of power, but the long arrays introduce substantial flexible body motion to the spacecraft. This motion must be managed carefully to maintain the DART narrow angle camera on the target asteroid, even while performing the necessary autonomous ΔV maneuvers required to intercept the target. Guidance to Didymos B is further complicated by having to switch targets late in the approach, as SMART Nav must target the primary asteroid initially, since the secondary is too small to be resolved by the narrow angle camera until ~1 hour prior to impact. Shadowing of both the primary and the secondary, set by the arrival lighting, makes it challenging to impact at the center of Didymos B. The spacecraft streams images back to Earth in real time, with the last image acquired no earlier than 17 s prior to impact to achieve the surface resolution required.
    • 02.13 Orbital Robotics: On-Orbit Servicing and Active Debris Removal David Sternberg (NASA Jet Propulsion Laboratory) & Roberto Lampariello (German Aerospace Center - DLR)
      • 02.1305 An Approach to Contact Detection and Isolation for Free-floating Robots Based on Momentum Monitoring Francesco Cavenago (Politecnico di Milano), Alessandro Massimo Giordano (), Mauro Massari (Politecnico Di Milano) Presentation: Francesco Cavenago - Wednesday, March 6th, 04:30 PM - Jefferson
        On-orbit robotics is still a novel field of research. Supporting astronauts during maintenance work on space stations, capturing uncontrollably tumbling objects, repairing spacecraft and assembling structures are some examples of applications that would gain great advantages from advances in space robotic technology. Considering these missions, the robot is required to work safely in an environment with other agents and to carry out operations in which contact is desired (e.g., grasping objects) or may occur accidentally. A space robot, with a certain level of autonomy, should be able to become aware of the situation and react properly, exploiting its typical sensing equipment (attitude sensors and joint sensors). This is particularly true in the case of unexpected impacts that can occur at any point along the manipulator, but also for an intentional contact, which is typically at the end-effector. Indeed, even though the robot could be equipped with a dedicated sensing device, like a force/torque sensor at the wrist, that sensor is usually not redundant and thus a failure could jeopardize the success of the entire mission. Hence, space arms should be endowed with algorithms to master these situations in order to avoid critical consequences. The collision handling task can be divided into four phases: detection, isolation, identification and reaction. This paper presents some strategies to address detection and isolation for a free-floating space robot. In free-floating mode, no external forces/torques act on the system and therefore the linear and angular momenta are preserved. When a contact occurs, the momenta lose their stationarity, and thus these quantities are good candidates to be used as monitoring signals to detect a collision. When dealing with a change detection problem it is extremely important to understand the effects of measurement noise, acquisition frequency and model uncertainties on the designated signals, so as to select a proper strategy. Indeed, a simple threshold may not be the best solution, since it is not easy to define in advance and, in the case of noisy signals, it is particularly affected by false positive alarms. Therefore, the sensitivity of the momenta to noise, uncertainties and sensor choice is analyzed, and different formulations of the momenta are considered to improve the quality of the monitoring signals. Then, hierarchical sequential analysis techniques are applied to the problem. They exploit a first subsequence of the data stream to learn statistical features of the stationary phenomenon, then check their evolution in time and, if a change in the system occurs, raise an alarm. Moreover, they are endowed with a second layer which verifies the reliability of the detection. Afterwards, the momenta can be exploited to identify the orientation and point of application of the external disturbance through the rotational dynamic equation. Finally, a different approach exploiting a model-based observer is briefly introduced. The observer is used to estimate directly the external force and moment, which become the new monitoring signals. A comparison between the two approaches is performed, highlighting advantages, drawbacks and future developments of this second approach.
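        To make the momentum-monitoring idea concrete, the snippet below runs a plain one-sided CUSUM detector on a simulated momentum magnitude that steps when a contact occurs. CUSUM is used here only as a simple stand-in for the hierarchical sequential analysis the authors describe, and every numerical value (noise level, drift, threshold, step size) is made up for illustration.

```python
# Momentum monitoring with a simple CUSUM change detector: in free-floating
# mode the momentum is constant, so a persistent deviation flags a contact.
import numpy as np

rng = np.random.default_rng(1)

def cusum_detect(signal, baseline, drift=0.1, threshold=1.0):
    """Return the first index where the cumulative deviation exceeds threshold."""
    s = 0.0
    for k, z in enumerate(signal):
        s = max(0.0, s + abs(z - baseline) - drift)   # accumulate excess deviation
        if s > threshold:
            return k
    return None

# Simulated momentum magnitude: stationary noise, then a step at sample 300
# caused by an impulsive contact (all numbers made up).
h = 0.05 * rng.standard_normal(600)
h[300:] += 0.4

baseline = h[:100].mean()          # learned from the initial stationary window
print("contact detected at sample", cusum_detect(h[100:], baseline))
```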
      • 02.1307 Assembled, Modular Hardware Architectures - What Price Reconfigurability? Christine Gregg (NASA Ames Research Center), Benjamin Jenett (), Kenneth Cheung (NASA - Ames Research Center) Presentation: Christine Gregg - Wednesday, March 6th, 04:55 PM - Jefferson
        Recent robotic technology advances have motivated increased interest in the potential of on-orbit assembly to significantly enhance mission capability and scope. Robotic assembly systems of various types have been proposed, from 3D printing and welding to mechanical assembly. Re-usability of space exploration hardware in general is often discussed for its potential to vastly reduce the cost of space operations, and is cited as a potential benefit of on-orbit robotics. But does reconfigurability come at the cost of assembly or performance efficiency? This article specifically evaluates the potential use of highly modular materials systems - such as reversibly assembled cellular composite materials - as the basis for re-configurable exploration hardware and compares them to assembly and manufacturing techniques that are not efficiently reversible or reconfigurable. First, a dimensional scaling argument is presented to suggest that systems of all sizes can benefit from the mass savings associated with on-orbit assembly. Next, a comparison of the manufacturing energy cost of fabricated materials suggests that both conventional primary processing methods and melting- or fusion-based 3D printing cost amounts of energy similar to, or greater than, launching the same mass to Low Earth Orbit (LEO). The energy required to assemble or reconfigure building-block based materials with mechanically reversible connections is many orders of magnitude less, and benefits strongly from the generality of a tunable material system to increase system re-use. It is shown that the parasitic mass penalty associated with reversible mechanical connection hardware for discretely assembled cellular material systems can be characterized and is well bounded. Example systems from the experimental literature show that in practice, the effect of this parasitic mass on overall structural performance compared to the state of the art is modest. When combined, these analyses provide a framework for evaluating the efficiency and suitability of different manufacturing techniques for space applications. Assembly and re-configurability of discrete cellular materials offers new properties and performance not available with current alternatives. The reconfigurability of the resulting systems can extend past work on modular in-space and for-space materials and manufacturing technology, as a high performance structural system with low lifecycle cost, and should motivate future study and consideration of assembled on-orbit systems.
      • 02.1309 Decentralized Cooperative Localization for a Spacecraft Swarm Using Relative Pose Estimation William Bezouska () Presentation: William Bezouska - Wednesday, March 6th, 05:20 PM - Jefferson
        This paper presents a decentralized cooperative localization approach to relative state estimation for a team of spacecraft to estimate the inertial position, orientation, velocity, and angular velocity of each spacecraft. The solution uses an Extended Kalman Filter running on each spacecraft to estimate the full state of every other spacecraft in the swarm. A Multiplicative EKF (MEKF) is used to address limitations in the quaternion representation of attitude; non-quaternion states are estimated by a standard EKF. This use of an MEKF and relative pose estimates for a large cooperative satellite swarm has not been seen in the literature. Each spacecraft is equipped with a gyroscope, a star tracker, and a relative pose measurement sensor. Only pose measurements between spacecraft are shared among the team. We assume that the number of spacecraft is known a priori and that some mechanism can be used to confidently map a relative pose measurement to a specific spacecraft. The ability for a swarm of spacecraft to estimate relative position and orientation enables future space missions such as fractionated spacecraft, satellite servicing, on-orbit construction, cellular architectures, and exploration of planetary rings. Extensive work during the last three decades has produced a variety of techniques for relative state estimation between two spacecraft. However, full pose estimation for a large group of collaborating spacecraft has received less attention. Separately, ground robotics includes an active area of research known as cooperative localization, where a team of robots shares relative observations to improve individual estimates of relative state. It has been shown that a team which incorporates less accurate measurements shared among team members can produce better state estimates than individual robots relying only on their own more accurate measurements. Simulation results are provided which demonstrate an improvement in state estimation for the swarm when shared relative pose measurements are incorporated into the filters running on each spacecraft. For example, for a swarm of ten spacecraft, orientation estimates are three to four times more accurate when incorporating shared relative pose measurements. This performance, however, comes at a proportional increase in computation time. Estimation performance in non-fully connected sensing networks, including minimum spanning trees, is addressed through simulation as well. The methodology and results presented in this paper provide a foundation for future research by introducing the concept of cooperative localization into spacecraft swarm state estimation.
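        The core mechanism, a shared relative measurement tightening the joint estimate, can be shown in a heavily reduced form: a linear Kalman update of a stacked position-only state for two spacecraft using the relative measurement z = p_j - p_i. The attitude (MEKF) part and all dynamics are omitted, and the covariances and measurement values below are illustrative, not taken from the paper.

```python
# Reduced illustration of a shared relative measurement improving the estimate:
# linear Kalman update of a stacked two-spacecraft position state.
import numpy as np

x = np.array([0.0, 0.0, 0.0, 100.0, 0.0, 0.0])        # prior: p_i, p_j (m)
P = np.diag([25.0] * 3 + [25.0] * 3)                   # 5 m 1-sigma each (assumed)

H = np.hstack([-np.eye(3), np.eye(3)])                 # measurement z = p_j - p_i
R = np.diag([1.0] * 3)                                 # 1 m relative-sensor noise
z = np.array([101.2, -0.4, 0.3])                       # hypothetical measurement

S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - H @ x)
P = (np.eye(6) - K @ H) @ P

print("updated relative-position 1-sigma:",
      np.sqrt(np.diag(H @ P @ H.T)))                   # shrinks from ~7 m to ~1 m
```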
      • 02.1310 Algorithmic Approaches to Reconfigurable Space Assembly Systems Allan Costa (MIT), Amira Abdel Rahman (MIT), Kenneth Cheung (NASA - Ames Research Center), Benjamin Jenett (), Neil Gershenfeld (MIT) Presentation: Allan Costa - Wednesday, March 6th, 09:00 PM - Jefferson
        Assembly of large scale structural systems in space is understood as critical to serving applications that cannot be deployed from a single launch. Recent literature proposes the use of discrete modular structures for in-space assembly and relatively small scale robotics that are able to modify and traverse the structure. This paper addresses the algorithmic problems in scaling reconfigurable space structures built through robotic construction, where reconfiguration is defined as the problem of transforming an initial structure into a different goal configuration. We analyze different algorithmic paradigms and present corresponding abstractions and graph formulations, examining specialized algorithms that consider discretized space and time steps. We then discuss fundamental design trades for different computational architectures, such as centralized versus distributed, and present two representative algorithms as concrete examples for comparison. We analyze how those algorithms achieve different objective functions and goals, such as minimization of total distance traveled, maximization of fault-tolerance, or minimization of total time spent in assembly. This is meant to offer an impression of algorithmic constraints on scalability of corresponding structural and robotic design. From this study, a set of recommendations is developed on where and when to use each paradigm, as well as implications for physical robotic and structural system design.
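        For readers unfamiliar with the graph view of reconfiguration, the toy search below treats each structure as a set of occupied lattice cells and each move as sliding one module to an adjacent empty cell; breadth-first search then returns a minimum-move plan. This is only a didactic abstraction: the connectivity, robot-motion, and collision constraints handled in the paper are deliberately omitted.

```python
# Toy reconfiguration as graph search: states are frozensets of occupied cells,
# a move relocates one module to an adjacent empty cell, BFS finds a shortest plan.
from collections import deque

def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def moves(state):
    for cell in state:
        for dest in neighbors(cell):
            if dest not in state:
                yield frozenset(state - {cell} | {dest})

def reconfigure(start, goal):
    start, goal = frozenset(start), frozenset(goal)
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, depth = frontier.popleft()
        if state == goal:
            return depth                         # minimum number of module moves
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return None

# Turn an L-shaped 3-module structure into a straight line.
print(reconfigure({(0, 0), (1, 0), (1, 1)}, {(0, 0), (1, 0), (2, 0)}))
```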
      • 02.1313 Robust Estimation of Motion States for Moving Target Capture Abril Poó Gallardo (German Aerospace Center - DLR), Hrishik Mishra (German Aerospace Center - DLR), Roberto Lampariello (German Aerospace Center - DLR) Presentation: Abril Poó Gallardo - Wednesday, March 6th, 09:25 PM - Jefferson
        On-Orbit Servicing (OOS) applications necessitate the automated capture of a partially cooperative tumbling satellite (target) by a manipulator-equipped satellite (servicer). This in turn requires high-fidelity state estimation and tracking of a moving target relative to the chasing servicer satellite. The relative velocity measurements of the target and servicer spacecraft are key to control methods aimed at fulfilling this goal. Firstly, due to the non-cooperative nature of the target, the parameters of motion and inertia of the target are only known with limited certainty. For the same reason, only slow-sampled pose measurements from an exteroceptive sensor, like a camera, may be available. Secondly, although the manipulator states are available from fast-sampled joint encoders, for the servicer platform, only a pose estimate from other co-located exteroceptive sensors (LiDAR and star tracker) might be available. From the free-body vector triangle between the servicer-base CoM, the target CoM, and the end-effector, only the forward kinematics of the manipulator (servicer base to end-effector) are fast-sampled. Therefore, even computation of the forward kinematics of the overall servicer-manipulator system is slow-sampled. This, together with the noisy nature of the aforementioned sensors, limits the achievable performance and the rate at which actuation forces may be computed. In this paper we develop a novel robot-navigation Extended Kalman Filter (RN-EKF) that uses a multitude of sensors, together with the kinematics of the manipulator, to robustly estimate the states of the multibody dynamic system at hand. The states include the inertial motion states of both the servicer base and the target. Furthermore, we derive novel closed-form expressions of the output (measurements) which allow us to perform prediction and estimation while incorporating the full set of measurements. Together with a camera mounted on the end-effector of the manipulator, our setup considers sensor information provided by a Light Detection and Ranging (LiDAR) sensor at the base of the servicer, and an Inertial Measurement Unit (IMU) and star tracker co-located at the base, which we show to be paramount for reliable estimation, tracking and successive capture of the target. A Monte Carlo analysis was performed for a large number of initial conditions and system parameters, addressing the generally poor knowledge about the relevant system dynamics. The simulation results demonstrate the robustness and convergence properties of the estimator, which shows accurate results even for the special cases of occlusion of the LiDAR and end-effector camera. Additionally, we validate the design using simulations and verify it in open loop on the OOS-Simulator (OOS-SIM) at the German Aerospace Center (DLR). In general, we demonstrate the achievement of a combined control regulation task for both the end-effector and the servicer base through simulations while running in closed loop with the proposed RN-EKF.
      • 02.1314 Perception-Constrained Robot Manipulator Planning for Satellite Servicing Tariq Zahroof (Stanford University), Andrew Bylard (Stanford University), Hesham Shageer (Stanford University), Marco Pavone (Stanford University) Presentation: Tariq Zahroof - Wednesday, March 6th, 09:50 PM - Jefferson
        The NASA Restore-L Robotic Servicing Mission seeks to be one of the first demonstrations of fully robotic satellite servicing, using an uncrewed spacecraft to capture, refuel, and reposition a large observation satellite in LEO. To this end, the Restore-L spacecraft is equipped with two 7-joint robotic manipulators, which it will use to capture the satellite and perform a complex series of refueling tasks, including swapping between various end-effector tools stored onboard. The manipulator trajectories must meet a number of constraints including collision-avoidance and perception-related goals, such as keeping parts of the scene in view of the end-effector camera, as desired by the operators. These constraints make manipulator trajectories very time-consuming to design by hand, and thus a more automated trajectory design tool is needed. Accordingly, we present a planning algorithm and software tool for real-time computation of near-optimal, collision-free trajectories which abide by the constraints defined above. This tool will allow operators to define high-level objectives and rapidly generate trajectories to autonomously maneuver the robotic manipulators to their goals while maintaining constant view of the target in order to improve robot-operator information relay and task performance. The underlying algorithm leverages BFMT* - a bidirectional variant of the sampling-based planner FMT* - to compute, in real time, high-quality paths in a seven-dimensional space to be tracked by the robotic arm. Afterwards, a partial shortcut post-processing technique is used to locally optimize each joint for significant path performance improvement. The paths are then time-parameterized to satisfy the dynamics of the manipulator. To meet perception constraints, the field of view of the camera is considered within the collision checker, which is applied during both the initial sampling-based planning and the post-processing phase in order to ensure final constraint satisfaction. The tool is integrated with the ROS MoveIt! framework to simulate, plan, and visualize the resulting trajectories. In addition, we have included features in the tool which leverage our planning algorithm to make smooth local adjustments to generated paths as requested by the operators. Though the algorithm and software tool are applied to the Restore-L servicing mission in this paper, they can be applied to any fixed-base manipulator planning scenario having a similar class of constraints.
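        Editor's note: a minimal sketch of how a camera field-of-view constraint can be folded into a sampling-based planner's state-validity check, assuming a conical field of view about the end-effector boresight. The function names and planner interface are illustrative, not the Restore-L tool's API.
        ```python
        # Sketch of a perception constraint for a sampling-based planner: a sampled
        # arm configuration is valid only if the target stays inside the end-effector
        # camera's conical field of view (in addition to normal collision checks).
        import numpy as np

        def target_in_fov(cam_pos, cam_axis, target_pos, half_angle_rad):
            """True if target_pos lies inside the cone defined by the camera boresight."""
            to_target = target_pos - cam_pos
            to_target = to_target / np.linalg.norm(to_target)
            boresight = cam_axis / np.linalg.norm(cam_axis)
            return np.dot(boresight, to_target) >= np.cos(half_angle_rad)

        def state_is_valid(q, forward_kinematics, collision_free, target_pos, half_angle_rad):
            """Combined validity check used by the sampler: collisions + FOV constraint."""
            cam_pos, cam_axis = forward_kinematics(q)          # end-effector camera pose
            return collision_free(q) and target_in_fov(cam_pos, cam_axis,
                                                       target_pos, half_angle_rad)

        # Example with a dummy kinematics model (camera at z=1 m, looking along +z)
        # and no obstacles; the sampled 7-joint configuration q is a placeholder.
        fk = lambda q: (np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
        print(state_is_valid(np.zeros(7), fk, lambda q: True,
                             np.array([0.0, 0.5, 4.0]), np.radians(30)))
        ```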
  • 3 Antennas, RF/Microwave Systems, and Propagation Farzin Manshadi (Jet Propulsion Laboratory) & James Hoffman (Jet Propulsion Laboratory)
    • 03.01 Phased Array Antenna Systems and Beamforming Technologies Abbas Omar (University of Magdeburg) & Janice Booth (AMRDEC Weapons Development and Integration Directorate) & Glenn Hopkins (Georgia Tech Research Institute)
      • 03.01 Keynote Presentation - Cheyenne
      • 03.0102 GPU Acceleration for Synthesis of Coherent Sparse Arrays Zachary Baker (Los Alamos National Lab) Presentation: Zachary Baker - Wednesday, March 6th, 10:35 AM - Cheyenne
        Radio frequency aperture synthesis from multiple free-flying collectors is traditionally dependent on highly accurate intra-constellation metrology and shared clocks. We demonstrate that coherent alignment of independent collectors with poor knowledge of relative positioning and clocking can be achieved through computational means in post processing. This allows the synthesis of a coherent sparse array of RF collectors with the time and position knowledge available from a cheap and commercial GPS receiver. This paper extends our previous publication in the 2018 IEEE Aerospace Conference with several key advances, including on-GPU execution of the CAF algorithm, on-GPU multi-emitter tracking, and interpolation-based correction adaptation. The alignment algorithm has significantly increased in performance and capability since 2018. The Complex Ambiguity Function (CAF) is used for simultaneous estimation of time difference of arrival (TDOA) and frequency difference of arrival (FDOA) for multiple pairs of received signals. These time, frequency and phase corrections are applied to the data, bringing them into alignment with each other. Maintaining this alignment over a longer time span is the challenge, addressed through overlapped sequential CAF estimates to estimate the evolving TDOA/FDOA of multiple emitters. Achieving good alignment over millions of samples requires an evolving model of the collection geometry. By breaking a long signal into successive, overlapping frames, we can estimate the signal parameters; however, the computation time is untenable on traditional CPUs. In this paper, we explore the use of GPU-based parallel computations to speed up the CAF algorithm and to make it suitable for the alignment of larger signal lengths. To achieve accelerated computations of the CAF while maintaining the high-level features offered by Python, we use the ArrayFire library to offload computations to a graphics processing unit (GPU). ArrayFire is a general-purpose, open-source library that provides accelerated and scalable computing solutions, targeting parallel and massively parallel architectures. The CAF computations are equivalent to performing Fourier transforms and complex multiplications. In addition, the Fourier transform could also be seen as the projection of a signal onto a basis comprising complex sinusoids. These operations can be vectorized and independently computed for different values of time and frequency offsets to determine the peak value of the CAF; the computation is therefore ideally suited to parallel execution on a GPU. The optimized GPU implementation reduced the processing time drastically, showing more than a 100x improvement in computational performance on the NVIDIA TITAN V GPU compared to sequential execution using NumPy on an Intel Haswell CPU. Detailed comparisons of different NVIDIA GPU architectures were also made; the latest Volta architecture provided the best performance.
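        Editor's note: a minimal CPU-side NumPy sketch of the CAF surface described above (FFT-based cross-correlation evaluated over a grid of trial frequency offsets). The paper's ArrayFire/GPU implementation is not reproduced; the test signal, sample rate, and offsets below are synthetic and illustrative.
        ```python
        # Minimal CPU sketch of the Complex Ambiguity Function: for each trial
        # frequency offset, frequency-shift one signal and cross-correlate it with
        # the other via FFTs; the peak over the (delay, offset) surface gives the
        # TDOA/FDOA estimate. A GPU library (e.g. ArrayFire) parallelizes these loops.
        import numpy as np

        def caf_surface(x, y, freq_offsets, fs):
            n = len(x)
            t = np.arange(n) / fs
            X = np.fft.fft(x)
            surface = np.empty((len(freq_offsets), n))
            for i, df in enumerate(freq_offsets):
                y_shift = y * np.exp(-2j * np.pi * df * t)          # remove trial FDOA
                corr = np.fft.ifft(np.conj(X) * np.fft.fft(y_shift))  # correlation vs. delay
                surface[i] = np.abs(corr)
            return surface

        fs = 1e4
        t = np.arange(4096) / fs
        x = np.exp(2j * np.pi * 500 * t)
        y = np.roll(x, 37) * np.exp(2j * np.pi * 12 * t)            # delayed, offset copy
        offsets = np.arange(-20, 21, 1.0)
        surf = caf_surface(x, y, offsets, fs)
        fd_idx, td_idx = np.unravel_index(np.argmax(surf), surf.shape)
        print("FDOA ~", offsets[fd_idx], "Hz, TDOA ~", td_idx, "samples")
        ```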
      • 03.0103 Retrodirective Phased Array Antennas for Small Satellites Justin Long (University of Alaska Fairbanks), Denise Thorsen (University of Alaska Fairbanks), Obadiah Kegege (NASA Goddard Space Flight Center) Presentation: Justin Long - Wednesday, March 6th, 11:00 AM - Cheyenne
        This paper proposes the design of a retrodirective phased array antenna for CubeSat applications. A phased array antenna can offer high gain and beamforming capabilities to small satellites. Retrodirective capabilities allow the antenna system to autonomously determine the direction of an incoming signal without prior knowledge, and form the beam appropriately to achieve maximum gain. The end result is a compact high gain antenna without the strict pointing requirements that are standard in most high gain antennas. This can improve data throughput, remove the need for complex gimbal systems, and open CubeSats to a larger assortment of potential applications including relay satellites, multiple access technology, and improved constellation communication. This continues the work previously published in [1], which showed that a 2 by 2 active phased array offers a 5 dB gain improvement in the direction to which the antenna is electronically steered, and highlighted the advantages and challenges of phased arrays. This paper proposes a new phased array architecture that adds retrodirective capabilities and addresses several of the challenges identified in [1]. Simulation results are presented that highlight the benefits of phased arrays and retrodirective steering in satellite applications. The simulations show the gain improvement of a phased array over a single element antenna, the formation of side lobes, the effects of retrodirective steering, and the effects of scaling the array size. Analytical results show the expected efficiency, gain, and maximum EIRP using existing commercial-off-the-shelf (COTS) technology. Various potential front-end architectures are discussed in terms of their relative advantages and performance. It is shown that for the proposed architecture with existing COTS parts, a power added efficiency of 37% can be achieved. The per element power overhead was reduced relative to [1] by using a single microcontroller and simplifying the architecture. A smaller per element overhead makes larger arrays with greater gain practical. The array size per 1U (10 cm x 10 cm) area is shown, with an analysis of practical limitations in array size due to per element overhead. Test results are presented that show the insertion loss and phase accuracy of the selected COTS components. Scanning tests were performed that show the operational concept of power scanning. The analytical performance of the proposed system is compared to other common CubeSat antennas, such as the reflectarray. The advantages and disadvantages of each antenna are discussed. References: [1] J. Klein, et al., “Improving cubesat downlink capacity with active phased array antennas,” in Aerospace Conference, 2014 IEEE, Big Sky, MT. This work was supported by a NASA Space Technology Research Fellowship.
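        Editor's note: a small NumPy sketch of the kind of array-factor calculation behind the simulated gain-improvement figures discussed above, shown for a uniform two-element line of the array. The element count, half-wavelength spacing, and 30-degree steering angle are illustrative assumptions, not the paper's design values.
        ```python
        # Illustrative array-factor calculation: compare the broadside response of a
        # 2-element, half-wavelength-spaced line of elements to a retrodirectively
        # steered response (phases conjugated toward the incoming-signal direction).
        import numpy as np

        def array_factor(theta, n, d_over_lambda, steer_theta):
            """Normalized |AF| of an n-element uniform linear array steered to steer_theta (rad)."""
            k_d = 2 * np.pi * d_over_lambda
            psi = k_d * (np.sin(theta) - np.sin(steer_theta))
            af = np.abs(np.sum(np.exp(1j * np.outer(np.arange(n), psi)), axis=0))
            return af / n

        theta = np.radians(np.linspace(-90, 90, 721))
        af_broadside = array_factor(theta, 2, 0.5, 0.0)
        af_steered   = array_factor(theta, 2, 0.5, np.radians(30))
        idx_30 = np.argmin(np.abs(theta - np.radians(30)))
        gain_recovered_db = 20 * np.log10(af_steered[idx_30] / af_broadside[idx_30])
        print(f"gain recovered at 30 deg by steering: {gain_recovered_db:.1f} dB")
        ```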
      • 03.0105 General Analysis of Coupled-Element Antenna Arrays Abbas Omar (University of Magdeburg) Presentation: Abbas Omar - Wednesday, March 6th, 11:25 AM - Cheyenne
        A general analysis of antenna arrays with inter-element coupling is presented/revisited. It is shown that the array in this case should be treated as a whole entity, which is characterized by a number of eigenmodes equal to that of the array elements. Such array modes can be independently excited and used for beamforming and steering exactly like the individual elements in the uncoupled case. The analysis adopts a coupled-resonator structure as a model for the array. The related equivalent circuit is derived field-theoretically for a fairly general antenna element. Only one-dimensional arrays are considered in detail. The generalization to the two-dimensional case is, however, straightforward. The paper is principally of a fundamental nature with revisited physical insight into the operation of antenna arrays.
    • 03.02 Ground and Space Antenna Technologies and Systems Farzin Manshadi (Jet Propulsion Laboratory) & Vahraz Jamnejad (Jet Propulsion Laboratory)
      • 03.0201 The Multibeam Radar Sensor BIRALES: Performance Assessment for Space Surveillance and Tracking Matteo Losacco (Politecnico di Milano), Pierluigi Di Lizia (Politecnico di Milano), Mauro Massari (Politecnico Di Milano), Germano Bianchi (INAF), Giuseppe Pupillo (INAF - IRA), Andrea Mattana (), Giovanni Naldi (National Institute for Astrophysics), Claudio Bortolotti (), Mauro Roma (), Marco Schiaffino (INAF), Federico Perini (INAF), Luca Lama (), Alessio Magro (University of Malta), Denis Cutajar (University of Malta), Josef Borg (University of Malta), Marco Reali (Italian Ministry of Defense - Italian Air Force), Walter Villadei () Presentation: Matteo Losacco - Wednesday, March 6th, 08:30 AM - Cheyenne
        Near-Earth space has become progressively more crowded with active and inactive spacecraft and debris. Consequently, an international effort is being devoted to improving the performance of optical and radar sensors for space object monitoring. The aim of this work is to assess the performance of the novel multibeam BIstatic RAdar for LEo Survey (BIRALES) sensor within the European Space Surveillance and Tracking Framework. The BIRALES sensor uses a radio frequency transmitter located at the Italian Joint Test Range of Salto di Quirra in Sardinia and part of the “Northern Cross” (NC) radiotelescope located in Medicina (Bologna, Italy) as a multibeam receiver. The transmitter consists of a powerful amplifier able to supply a maximum power of 10 kW in the bandwidth 410-415 MHz. The Northern Cross represents one of the largest UHF-capable antennas in the world. It is made of two perpendicular arms: the East-West arm is 564m long and consists of a single cylindrical antenna, while the North-South arm is made of 64 parallel antennas. Eight antennas of this arm have been refurbished to mount 4 receivers each, which allow its field of view (FoV) to be divided into 24 independent beams. When an object transits inside the antenna FoV, the beams are illuminated by the reflected radio wave. Consequently, besides the classical range, Doppler shift and Signal-to-Noise Ratio (SNR) measurements, the beam illumination sequence can be exploited to obtain an estimate of the angular deviation profiles of the scattering object with respect to the nominal receiver pointing direction, with a higher level of detail with respect to a single-beam system. The data received from BIRALES are provided to a tailored orbit determination (OD) algorithm. The OD process is divided into two phases. First, the angular profiles are estimated starting from the SNR data available from each beam. By identifying the different peak SNR values measured by each illuminated beam, a curve fit aimed at minimizing the angular displacement from the center of each beam peak provides a first guess for the profiles. This first guess is then refined by matching the generated SNR profiles with the measurements. This provides, for each observation instant, the estimated angular deviation profiles of the object with respect to the nominal pointing direction of the receiver. During the second phase, the object state corresponding to the first observation epoch is then estimated from the observables by matching the generated orbital trajectory with the available measurements, i.e. the slant range, Doppler shift and the estimated angular deviations. The first part of this work illustrates the results achieved with numerical simulations. The sensor performance is assessed considering the cases of known and unknown objects, single and repeated passages and different sensor configurations. For all cases, the effect of measurement noise on each available measurement is investigated. The second part of the work illustrates the results achieved with real measurements, showing the impact of object distance, noise level and beam illumination sequence on the accuracy of the results.
      • 03.0203 A Thin-ply Composite with Embedded Metallic Mesh for a Cryogenically-rated Antenna Jonathan Mihaly (), Maria Sakovsky (California Institute of Technology) Presentation: Jonathan Mihaly - Wednesday, March 6th, 08:55 AM - Cheyenne
        Composite materials are well established in the space industry and offer significant strength-to-weight advantages for large-scale structures. However, composite materials with embedded conductive elements have not been implemented in spaceborne antenna systems as radiating elements. Furthermore, the use of composite materials for deep space missions requires the materials to be robust to extreme temperatures. Microcracking and delamination between fiber and matrix components, as well as the conductive element, present an inherent limitation in the application of composite materials to cryogenic temperatures. Thin-ply laminates (ply thickness of less than 70 μm) have been identified as a technique to mitigate microcracking; however, little data exist in the open literature to aid the design and fabrication of such structures. This study presents a technique developed for embedding a conductive metallic mesh into the layup of thin-ply carbon fiber reinforced polymer (CFRP) composite material. Thermal cycling and subsequent mechanical testing on fabricated samples has been used to evaluate thin-ply composites as a crack mitigation technique. Furthermore, the construction of thin-ply CFRP composite tubes has been demonstrated for both straight segments and curved segments utilizing a novel sacrificial ceramic molding method. These prototypes represent the building blocks for a 1 m scale radiating element and are the first test articles for the evaluation of thin-ply CFRP composites with an embedded metallic mesh. The successful implementation of a CFRP composite with an embedded metallic mesh would enable ultra-lightweight conductive structures for future applications, such as antennas, requiring survival in cryogenic temperatures.
      • 03.0204 Distributed Swarm Antenna Arrays for Deep Space Applications Marco Quadrelli (Jet Propulsion Laboratory), Saptarshi Bandyopadhyay (Jet Propulsion Laboratory), Richard Hodges (Jet Propulsion Laboratory), Victor Vilnrotter (Jet Propulsion Laboratory) Presentation: Marco Quadrelli - Wednesday, March 6th, 09:20 AM - Cheyenne
        It is desirable to develop a high Equivalent Isotropically Radiated Power (EIRP), autonomous, distributed, reconfigurable, on-demand Ka/X-band transmit-antenna array using small satellites, for deep-space communication (Mars and beyond). We have been investigating the feasibility of this concept. Our objective is to show that a distributed, free-flying swarm array can be phased to provide a coherent beam in Ka/X-band, with performance (mass, power, data rate) comparable to the state-of-the-art Mars Reconnaissance Orbiter (MRO). NASA has a need for high data-rate deep-space communication. Multifunctional subsystem integration will reduce the mass, volume, and power of autonomous assets being sent to targets of planetary exploration. This capability would improve entire classes of future NASA missions, with benefits to key challenges in multiple areas. A large swarm could be assembled in multiple launches, could be part of a Mars human exploration campaign, and combined transmit/receive architectures could also benefit science. While monolithic apertures are in use today, larger apertures (membrane, gossamer, or inflatable type) will suffer from lower fault-tolerance, structural vibrations, structural misalignments, tight planarity requirements, thermo-structural stability, ageing and creep, continuous calibrations, deployment complexities, sub-mm-level surface accuracy in the primary, unavoidable systematic manufacturing errors, material outgassing and surface contamination. The advantages of distributed apertures over single apertures are: electronic beam steering, spatial power combining, lower power density in the transmit system components, and graceful degradation capability. The disadvantages of distributed apertures over a single aperture are: high fuel cost to control and reposition spacecraft, high-precision metrology limitations, the need for an accurate ACS, the accurate clock needed for precise modulation alignment phasing, and possible side lobes due to geometric distortion in the element antenna pattern. A comparison of a monolithic spacecraft (based on MRO) vs. a swarm of small satellites (based on MarCO) shows that high EIRP is feasible with a low mass swarm. We would need ~34 MarCO-size CubeSats to match MRO, assuming Ka-Band telecom, an IRIS radio, and a 30 x 60 Ka-Band reflectarray on each spacecraft. A chief/director spacecraft would still be needed, which would handle UHF telecom, relaying, and coordination with all “deputy” spacecraft. We find that 100 CubeSats would increase the data rate ~10X that of MRO, and that the data rate scales with EIRP. There are several technology developments that would be needed to achieve this goal. First, increasing data rates at Ka-band would require: a large antenna area for MRO, vs. larger power in miniaturized electronics for an array; appropriate timing/sequencing algorithms to cohere the array; and laser metrology to determine relative s/c positions with sufficient accuracy to carry out the phasing of the array.
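        Editor's note: a back-of-envelope sketch of why coherently phased identical transmitters scale EIRP as N squared, which is the ideal-phasing assumption underlying the MRO comparison above. The element power and gain values are placeholders, not MarCO or MRO figures.
        ```python
        # Back-of-envelope scaling for a coherently phased transmit swarm: with N
        # identical elements, radiated power grows as N and array gain as N, so
        # EIRP grows as N^2 under ideal phasing. Numbers below are placeholders only.
        import numpy as np

        def swarm_eirp_dbw(n_elements, element_power_w, element_gain_dbi):
            p_total_dbw = 10 * np.log10(n_elements * element_power_w)     # N x power
            array_gain_db = element_gain_dbi + 10 * np.log10(n_elements)  # coherent combining gain
            return p_total_dbw + array_gain_db

        for n in (1, 34, 100):
            print(n, "spacecraft ->", round(swarm_eirp_dbw(n, 4.0, 29.0), 1), "dBW EIRP")
        ```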
      • 03.0207 Automated Ground Station Design for an Amateur LEO Satellite System Lipika Garg (Manipal University), Atharva Kand (Manipal University), Malhar Pradhan (Manipal Institute Of Technology), Abhishek Agarwal (Manipal Institute of Technology) Presentation: Lipika Garg - Wednesday, March 6th, 09:45 AM - Cheyenne
        This paper describes the RF architecture and automated functioning of PAGOS, the Ground Station of the Parikshit Student Satellite team. The Parikshit satellite is a 2U class nano-satellite, with a Thermal Camera as its primary payload. The station has been set up to have reception capability for the VHF and UHF amateur radio frequency bands. The intent behind the automation of the ground station is to enable data collection and satellite tracking during off hours. The ground station hardware architecture has been described along with the specification of the components used. The paper also includes the link budget calculations and the subsequent link margin determination. At the ground station, Doppler shift correction and the control of the Yagi-Uda antennas via rotor control during a satellite pass are automated for continuous data reception. The Parikshit satellite will transmit its payload data on the UHF band at a frequency of 437.8 MHz using 2-FSK modulation. The reception of the raw data bits from the satellite using a CC1101 transceiver chip, and their subsequent decoding on the computer, has been described. The modulation formats and the coding of the communication protocol have been taken into account. The beacon data is sent on the VHF band at a frequency of 145.89 MHz. It is transmitted as Morse Code using the ASK/OOK modulation format. The Morse data is received using a high-end radio transceiver and recorded onto the computer. Both the received payload and beacon data undergo independent post-processing to obtain readable information. The radio, rotor control setup and the chip transceiver are all interfaced to a dedicated PC via a UART line. The PC also hosts mission critical third party software required during reception and decoding. This includes the satellite tracking software, audio recorder and decoder, and the chip transceiver GUI. The specifications of the aforementioned software and their automation capabilities have been discussed. A communication test bench was set up to verify both the VHF and UHF lines. The ground station functioning was verified by receiving and decoding beacon data from other nano-satellites operating in the same amateur radio frequency bands, at altitudes comparable to the proposed LEO altitude for the Parikshit satellite. The received data has been tabulated and verified. Ground Station antenna tests for VSWR and gain have also been documented with a procedural description. The paper includes all necessary calculations and diagrams.
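        Editor's note: a minimal sketch of the Doppler correction an automated ground station performs during a pass. The 437.8 MHz downlink frequency is taken from the abstract; the range rate is assumed to come from the tracking software's orbit propagation, and the simple first-order correction is illustrative, not the team's implementation.
        ```python
        # Minimal Doppler-correction sketch for an LEO pass: the tuned receive
        # frequency is the nominal downlink frequency scaled by the range rate
        # supplied by the tracking software (negative range rate = approaching).
        C = 299_792_458.0          # speed of light, m/s
        F_DOWNLINK_HZ = 437.8e6    # Parikshit UHF payload downlink (from the abstract)

        def doppler_corrected_frequency(range_rate_m_s):
            """Receive frequency to tune, given the spacecraft range rate."""
            return F_DOWNLINK_HZ * (1.0 - range_rate_m_s / C)

        # Example: spacecraft approaching at 7 km/s -> tune roughly 10.2 kHz above nominal.
        print(doppler_corrected_frequency(-7000.0) - F_DOWNLINK_HZ, "Hz offset")
        ```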
    • 03.03 RF/Microwave Systems James Hoffman (Jet Propulsion Laboratory)
      • 03.0301 Signal Recovery and Detection of Certain Wideband Signals Using Multiple Low-Rate ADCs Michael Johnson (Naval Postgraduate School) Presentation: Michael Johnson - Thursday, March 7th, 09:25 PM - Elbow 1
        There are certain wideband signals that occupy quite large bandwidths but may have resonant amplitudes in certain frequency bands. While the sampling theorem only requires slightly more than the Nyquist rate for detection, estimation, or reconstruction of signals, in practice higher sampling rates are actually used due to various RF receiver effects such as signal distortion and noise. Signals that have a very large bandwidth thus require a very high rate analog-to-digital converter (ADC) which may be costly or technologically impractical. Many studies have addressed signals that are large in bandwidth (in frequency) but sparse (in time). Previously demonstrated methods use compressed sensing techniques, which rely on a signal that is sparse (in some domain) such that a very low sampling rate is feasible. For reconstruction, however, the computational burden and latency may be significant due to the optimization methods needed for signal reconstruction. In this work, we show that these signals can be effectively sampled at a lower sampling rate compared to what is required by the Nyquist-Shannon sampling theorem. Although some wideband signals may be sparse in the time domain due to the well-known frequency-time duality, in this work, we are interested in large-bandwidth signals that are not necessarily sparse in the time domain. In other words, we investigate signals where compressive sensing techniques may fail. Yet we still desire a receiver architecture which lowers the sampling rate needed for these types of signals. We propose a receiver capable of doing so in this work. Indeed, our proposed method also lowers the computational burden. In our design, these dominant frequency components are split into separate receiver paths (subchains) and sampled at a much lower sampling rate than would be required of the entire wideband signal. The signal processing architecture we propose is simple with an effective lower sampling rate and delivers a probability of detection equivalent to a traditional wideband matched filter. This approach is ideal for a programmable application specific receiver detecting a known signal of interest that is large in bandwidth and contains dominant frequency bands. We call our method the “multiple low-rate samplers” (MLRS) technique. The technology described above is investigated using two performance metrics. First, the sum-squared error is determined and compared against a traditional wideband matched filter receiver. Next, we perform a probability-of-detection experiment and compare our receiver design to a traditional matched filter receiver. These results are generated via MATLAB simulation, SIMULINK hardware simulation, and finally a field-programmable gate array (FPGA) implementation. A hypothetical signal and a real-world signal are modeled through this entire simulation process. The simulations and hardware performance results of both signals agree with the expected traditional wideband matched filter results, confirming our hypothesis that portions of the signal can be discarded with little or no loss in probability-of-detection performance.
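        Editor's note: a simplified NumPy sketch of the general idea of detecting a known wideband signal from a dominant sub-band: band-limit, decimate, and match-filter at the lower rate. The filtering, decimation factor, and test signal are illustrative assumptions; this is not the paper's MLRS receiver architecture or its FPGA implementation.
        ```python
        # Simplified sketch of detection from a dominant sub-band: isolate a known
        # signal's resonant band with an FFT mask, decimate, and apply a matched
        # filter at the lower rate; the detection statistic would then be compared
        # to a threshold. Concept illustration only.
        import numpy as np

        def subband_matched_filter(rx, template, band, fs, decim):
            """Band-limit rx to `band` (Hz tuple), decimate, and correlate with template."""
            n = len(rx)
            freqs = np.fft.fftfreq(n, 1 / fs)
            mask = (np.abs(freqs) >= band[0]) & (np.abs(freqs) <= band[1])
            rx_band = np.fft.ifft(np.fft.fft(rx) * mask)[::decim]        # crude band-pass + decimate
            tmpl_band = np.fft.ifft(np.fft.fft(template) * mask)[::decim]
            return np.abs(np.correlate(rx_band, tmpl_band, mode="valid")).max()

        fs = 1e6
        t = np.arange(8192) / fs
        template = np.cos(2 * np.pi * 50e3 * t) + 0.3 * np.cos(2 * np.pi * 400e3 * t)
        rx = template + 0.5 * np.random.randn(len(t))                    # signal + noise
        stat = subband_matched_filter(rx, template, (40e3, 60e3), fs, decim=8)
        print("sub-band detection statistic:", stat)
        ```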
      • 03.0305 Linearisation of SATCOM Power Amplifiers Suat Ayoz (Honeywell International, Inc.) Presentation: Suat Ayoz - Thursday, March 7th, 09:50 PM - Elbow 1
        The ultimate objective of this work was to develop a demonstrator of a highly efficient, GaN-based, linearised, passively cooled solid-state power amplifier (SSPA) for avionic SATCOM terminals operating in L-band. The SSPA should provide sufficient output power to comply with all SATCOM classes in all installation options, meet all in-band and out-of-band distortion limits (EVM and spectral regrowth) and guarantee reliable operation for 1M hours. The contract, undertaken with European Space Agency (ESA) funding, mandates the use of GaN technology and power amplifier linearisation techniques to maintain high efficiency while operating in a linear fashion. The SSPA needs to meet all these requirements without the presence of any forced-air cooling. The project has been completed successfully, achieving the 47% PAE target while amplifying a single-carrier test waveform (Inmarsat R20T4.5X – 16-QAM, 151.2kS/s) to ~15W nominal / ~65W peak power level. DPD linearisation was used to achieve <2% EVM and >10dB margin in spectral regrowth measured against the ETSI and Inmarsat channel masks at all offset frequencies. Passive cooling was demonstrated with GaN junction temperature not exceeding 150°C (absolute maximum rating 275°C) and MTTF reliability of more than 10M hours.
    • 03.04 Radio Astronomy and Radio Science Mark Bentum (Eindhoven University of Technology) & Melissa Soriano (Jet Propulsion Laboratory)
      • 03.0403 The First Two Years of Juno Spacecraft Astrometry with the Very Long Baseline Array Dayton Jones (Space Science Institute), William Folkner (), Ryan Park (), Christopher Jacobs (JPL), Jonathan Romney (National Radio Astronomy Observatory), Vivek Dhawan (National Radio Astronomy Observatory) Presentation: Dayton Jones - Thursday, March 7th, 08:30 AM - Cheyenne
        The Very Long Baseline Array (VLBA) is a ten-antenna radio interferometer with baseline lengths up to 8000 km. It can provide astrometric measurements of spacecraft orbiting planets and other objects in our solar system with sub-nrad precision (5 nrad = 1 milli-arcsec). These measurements can be used to create a time series of positions for solar system objects in the inertial International Celestial Reference Frame, which in turn can be combined with other data to refine the planetary ephemeris. An accurate solar system ephemeris is critical for interplanetary spacecraft navigation, dynamical studies and tests of gravitational theories, the analysis of pulsar timing observations, predictions of transits, eclipses, and occultations, and other applications. We are using VLBA observations of the Juno spacecraft in orbit about Jupiter to provide accurate positions for the Jupiter system barycenter for the ephemeris development group at the Jet Propulsion Laboratory, using observing and data reduction techniques developed for similar observations of the Cassini spacecraft while it orbited Saturn from 2004 until 2017. The VLBA observations of Cassini helped to improve the accuracy of Saturn's orbit by nearly an order of magnitude, and we expect that our observations of Juno will produce a similar improvement in our knowledge of Jupiter's orbit. Astrometric positions are particularly useful in constraining the orientation (inclination and longitude of ascending node) of an orbit, while range measurements are most useful in constraining the semi-major axis and ellipticity. Juno's orbit around Jupiter has a longer period than initially planned due to a concern about the spacecraft main engine. The resulting extended mission duration will improve our constraints on Jupiter's orbit inclination beyond that originally expected. Our VLBA observations of Juno are scheduled during approximately every third or fourth perijove pass. During these times the Juno spacecraft is continuously tracked by the Deep Space Network and the most precise solutions for the orbit of Juno about Jupiter are available. A good spacecraft orbit solution is needed to transfer our spacecraft sky positions to planet system barycenter positions. VLBA astrometry of planetary spacecraft has previously been applied to Mars-orbiting spacecraft, and will also be used during the OSIRIS-REx mission to improve the accuracy of the orbit of the potentially hazardous asteroid Bennu.
      • 03.0404 Modeling of Venus Atmospheric RF Attenuation for Communication Link Purposes David Everett (NASA - Goddard Space Flight Center), Cornelis Du Toit (AS and D, Inc.), Ralph Lorenz (Johns Hopkins University/Applied Physics Laboratory) Presentation: David Everett - Thursday, March 7th, 08:55 AM - Cheyenne
        This paper describes a method for modeling microwave propagation and signal attenuation along an electromagnetic ray linking a spacecraft above the atmosphere with a descent probe within the atmosphere of Venus. The two aspects that determine the total attenuation along the ray are energy absorption and defocusing effects along the path of propagation. The underlying factors determining absorption are the attenuation parameters associated with the dominant gases in the Venusian atmosphere and the physical properties of that atmosphere, such as pressure, temperature, density and gas concentrations. Past laboratory experiments form the knowledge base for the attenuation due to individual gases, and observations made by past missions to Venus were used to create empirical models of the relevant atmospheric properties as functions of elevation. From these, atmospheric attenuation due to absorption can be determined as a function of altitude. The ray path through the atmosphere is governed by the atmospheric refraction index as derived from the atmospheric properties and the polarization and permittivity of carbon dioxide, which makes up 96.4% of the Venusian atmosphere. Once the ray path between the spacecraft and a descent probe is established, the total absorption attenuation can be determined by integration along the ray path. Small deviations in the path can be used to determine defocusing/focusing effects, i.e. ray divergence that is more or less than expected from normal straight path divergence. An iterative procedure is required to establish the approximate ray path connecting the spacecraft and the descent probe, since the ray’s exact launching direction from either end is unknown. Because this process requires multiple ray path integrations, a fast path integration algorithm was developed. A fast converging iterative procedure (typically requiring only 3 steps) was developed to find the correct path, using the concept of a virtual focal point for ray divergence. To facilitate link margin calculations, a set of lookup tables can be generated to avoid path integrations, using simple interpolation between data points. Cases of extreme path curvature are problematic since the calculation of the defocusing effects, by way of small path deviations, becomes numerically unstable, typically when the total path curvature is more than 140°. Clear trends can be inferred, however, from the curvature range between 110° and 140°, which can then be used for extrapolation beyond 140°. The use of the lookup tables containing data extending beyond 140° of path curvature therefore also eliminates the additional numerical complications when handling extreme path curvatures.
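        Editor's note: a minimal sketch of the absorption integral along an already-established ray path, as described above. The specific-attenuation profile, altitudes, and segment lengths are placeholder assumptions; the paper's gas absorption model, refractive ray tracing, and defocusing calculation are not reproduced.
        ```python
        # Minimal sketch of integrating absorption along a known ray path:
        # total attenuation (dB) = integral of the specific attenuation alpha(h)
        # over path length. alpha(h) below is a placeholder exponential profile,
        # not the paper's CO2-dominated absorption model.
        import numpy as np

        def alpha_db_per_km(altitude_km):
            """Placeholder specific-attenuation profile, strongest near the surface."""
            return 20.0 * np.exp(-altitude_km / 8.0)

        def path_attenuation_db(path_altitudes_km, path_lengths_km):
            """Trapezoidal integration of alpha over consecutive path segments."""
            alphas = alpha_db_per_km(path_altitudes_km)
            seg_mean = 0.5 * (alphas[:-1] + alphas[1:])
            return np.sum(seg_mean * path_lengths_km)

        # Example: a slant path sampled every 1 km of altitude from 0 to 70 km,
        # with 1.5 km of path length per 1 km of altitude gained.
        alts = np.linspace(0.0, 70.0, 71)
        lengths = np.full(70, 1.5)
        print(round(path_attenuation_db(alts, lengths), 1), "dB total absorption")
        ```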
      • 03.0405 Radio Science at Jupiter: Past Investigations, Current Results, and Future Prospects Dustin Buccino (Jet Propulsion Laboratory), Marzia Parisi (Jet Propulsion Laboratory), Yu Ming Yang (NASA Jet Propulsion Lab), Daniel Kahan (), Kamal Oudrhiri (Jet Propulsion Laboratory) Presentation: Dustin Buccino - Thursday, March 7th, 09:20 AM - Cheyenne
        Over the last 40 years, missions of the National Aeronautics and Space Administration (NASA) and European Space Agency (ESA) have explored the largest planet in the solar system, Jupiter. Radio Science has been a key component of each mission to the planet, where radio signals between the spacecraft and the Earth-based observing antennas have been utilized to determine the physical properties of Jupiter and its moons, their atmospheres, ionospheres, and gravity fields. As a spacecraft passes through or around the Jovian system, small changes in phase, frequency, and amplitude of the radio signal are induced by the surrounding environment. These changes are detectable and form the basis for computing gravitational fields, planetary motion, and reconstruction of atmospheric and ionospheric profiles. The era of outer-planet exploration began in 1973 and 1974 with the Pioneer 10 and 11 flybys of Jupiter. Several missions have used Jupiter’s massive gravitational well to perform a gravity assist to reach farther in the solar system, starting with Pioneer 11 in 1974 and Voyager 1 and Voyager 2 in 1979, with subsequent flybys by Ulysses in 1992, Cassini-Huygens in 2000, and New Horizons in 2007. Jupiter has hosted two dedicated orbiting spacecraft thus far: Galileo from 1995 to 2003 and Juno from 2016 to the present. Throughout this time, instrumentation advances in both spacecraft technology and technology developed for ground antennas have improved the precision and accuracy of the radiometric measurements, leading to improved results from radio science investigations. The latest mission to Jupiter, Juno, includes the most advanced radio science instrumentation to date. With its unique polar orbit and dual-frequency radio links, Juno is able to probe the planet’s deep interior structure and zonal wind profile with measurements of the gravitational field and probe the electron densities in the Io plasma torus, a doughnut-shaped ring around Jupiter charged with particles emitted by the volcanic activity on Io. Upcoming missions, such as NASA’s planned Europa Clipper multiple-flyby mission in 2022, a potential follow-on Europa Lander, and ESA’s Jupiter Icy Moons Explorer mission in 2022, may make further strides in the study of the planet and its moons utilizing radio science.
      • 03.0406 The Radio Environment for a Space-based Low-frequency Radio Astronomy Instrument Mark Bentum (Eindhoven University of Technology), Pieter Van Vugt (University of Twente) Presentation: Mark Bentum - Thursday, March 7th, 09:45 AM - Cheyenne
        Opening the last frequency window for radio astronomy, in the sub-30 MHz region, poses a few challenges. First of all, at frequencies below 30 MHz the Earth’s ionosphere severely distorts radio waves originating from celestial sources, and it completely blocks radio waves below 10 MHz. This means that radio astronomy and astrophysics below ~30 MHz is best conducted from space. Secondly, the radio spectrum below 30 MHz is filled with very strong transmitter signals, making it difficult to do Earth-based radio observations. Most space-based radio telescope studies and initiatives aim to place a swarm of satellites far away from the Earth’s radio interference. Deployment location options include lunar orbit and the Earth-Moon Lagrangian point behind the Moon (L2). When the swarm is behind the Moon, the latter can act as a shield against the RFI produced by the Earth. However, due to diffraction, the RFI might bend around the Moon, and still have significant presence at locations where there is no direct line of sight to Earth, especially at the lowest frequencies. The amount of RFI presence behind the Moon due to diffraction is (except for RAE-B lunar orbit altitudes) not known, but can be predicted. Interference produced at the surface of the Earth, artificial radio transmissions and lightning, are effectively blocked by the Moon. Below 3 MHz, these signals are already strongly attenuated by the Earth’s ionosphere, and above 3 MHz the attenuation during ground wave propagation around the Moon is so high that the surface-produced RFI will not be observed behind the Moon. The Auroral Kilometric Radiation (AKR, frequencies 30 kHz - 1 MHz) is more troublesome because it is not produced at the surface, but at an altitude of 1-3 times the radius of the Earth. This means that it does not have to propagate through the ionosphere to reach the Moon. These low frequencies are less attenuated by ground wave propagation. However, due to the very low conductivity of the Moon’s surface and the very rough terrain, the RFI produced by AKR will still be much weaker than the Galactic Background Radio Noise when there is no direct Line of Sight (LOS). On the other hand, due to the high altitude of the AKR, there will be a smaller area behind the Moon where there is no direct LOS to the source, which reduces the amount of time during which observations can be executed during a Moon orbit. In this paper we will map the radio environment for space-based low frequency radio astronomy. We will address all measurements done to date, characterize the environment, identify the lessons learned and present a model for the radio environment at low frequencies in space.
      • 03.0408 Correlators for Synthetic Apertures in Space Alexander Hegedus (University of Michigan), Melissa Soriano (Jet Propulsion Laboratory), Andy Kurum (), Justin Kasper (University of Michigan) Presentation: Alexander Hegedus - Thursday, March 7th, 10:10 AM - Cheyenne
        The Earth’s Ionosphere limits radio measurements on its surface, blocking out any radiation below 10 MHz. Valuable insight into many astrophysical processes could be gained by having synthetic aperture interferometers in space, from atmospheric sounding to tracking particle acceleration by imaging the radio emission associated with coronal mass ejections in the inner heliosphere, to imaging distant radio galaxies enabling the determination of magnetic fields and astrometric measurements, and studying magnetospheric emission from extrasolar planets. A key aspect of any interferometer is the correlator, which is responsible for forming the synthetic aperture by appropriately combining the signals from the individual antennas from every spacecraft. Each spacecraft pair in the constellation yields a Fourier component of the radio brightness in the sky (also called a visibility). Correlation consists of a Fourier Transform step and a cross correlation step. These steps can be done in either order, but as digital signal processing has progressed, FX correlation (where the Fourier Transform step takes place first) has grown more common. This is opposed to XF correlation, where the cross correlation step is done first and which favors a more analog approach. In this work, we explore the trade space for 3 options of FX based correlation for space based arrays: 1) The F portion is done individually across each spacecraft, then a subselection of the data from each spacecraft is sent down to the ground for the X portion and post processing. 2) The F portion is done individually across each spacecraft, then the data is sent to a single spacecraft (a mothership) that does the X portion and downlinking to Earth. 3) Raw data from each spacecraft is sent to a mothership which does the F portion, the X portion, and the downlinking to Earth. We calculate the number of operations, the inter-spacecraft data transmission volumes, and the DSN downlink costs for each correlation strategy and observe how those numbers change with tunable parameters like number of spacecraft and number of channels. We develop software to combine Global Navigation Satellite System (GNSS) precise orbit determination (POD) solutions to determine the relative propagation delays between each spacecraft pair and correctly combine their radio signals coherently to form a synthetic aperture. We examine the trade space in the spacecraft positioning, including required orbit accuracy, orbital parameters, and array size. We also study the error budget from GNSS POD solutions to understand the impact of phase errors on the radio images for various array configurations and signal to noise ratios. Finally, we analyze various Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs) to determine what hardware will meet our requirements across different correlation strategies. We find breakpoints in the trade space and match processors to correlation styles to find the optimal setup for each mission type. This work will help inform future space based radio array missions.
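        Editor's note: a minimal NumPy sketch of the FX operation itself (channelize each spacecraft's samples with an FFT, then cross-multiply every pair to form visibilities), independent of where the F and X stages are hosted. The frame length, channel count, and random test data are illustrative; delay and fringe corrections from the POD solution are omitted.
        ```python
        # Minimal FX-correlation sketch: channelize each spacecraft's time series
        # with an FFT (the "F" stage), then form visibilities by cross-multiplying
        # and time-averaging every pair of spacecraft per channel (the "X" stage).
        import numpy as np
        from itertools import combinations

        def fx_correlate(voltages, n_chan):
            """voltages: (n_spacecraft, n_samples) complex array -> dict of baseline visibilities."""
            n_sc, n_samp = voltages.shape
            n_frames = n_samp // n_chan
            # F stage: split into frames and FFT -> (n_sc, n_frames, n_chan) spectra
            spectra = np.fft.fft(
                voltages[:, :n_frames * n_chan].reshape(n_sc, n_frames, n_chan), axis=2)
            # X stage: time-average cross power for every baseline (pair of spacecraft)
            return {(i, j): np.mean(spectra[i] * np.conj(spectra[j]), axis=0)
                    for i, j in combinations(range(n_sc), 2)}

        rng = np.random.default_rng(0)
        v = rng.standard_normal((6, 4096)) + 1j * rng.standard_normal((6, 4096))
        vis = fx_correlate(v, n_chan=64)
        print(len(vis), "baselines,", vis[(0, 1)].shape[0], "channels each")
        ```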
      • 03.0409 The Sun Radio Interferometer Space Experiment (SunRISE) Mission Concept Justin Kasper (University of Michigan), Joseph Lazio (Jet Propulsion Laboratory) Presentation: Justin Kasper - Thursday, March 7th, 10:35 AM - Cheyenne
        The Sun Radio Interferometer Space Experiment (SunRISE) would provide an entirely new view on particle acceleration and transport in the inner heliosphere by creating the first low radio frequency interferometer in space to localize heliospheric radio emissions. By imaging and determining the location of decametric-hectometric (DH) radio bursts from 0.1 MHz–25 MHz, SunRISE provides key information on particle acceleration mechanisms associated with coronal mass ejections (CMEs) and the magnetic field topology from active regions into interplanetary space. Six small spacecraft, of a 6U form factor, would fly in a supersynchronous geosynchronous Earth orbit (GEO) within about 10 km of each other, in a passive formation, and image the Sun in a portion of the spectrum that is blocked by the ionosphere and cannot be observed from Earth. Key aspects that enable this mission concept are that only position knowledge of the spacecraft is required, not active control, and that the architecture involves a modest amount of on-board processing coupled with significant ground-based processing for navigation, position determination, and science operations. Mission-enabling advances in software-defined radios, GPS navigation and timing, and small spacecraft technologies, developed and flown over the past few years on DARPA High Frequency Research (DHFR), the Community Initiative for Continuing Earth Radio Occultation (CICERO), and the Mars Cube One (MarCO) missions, have made this concept finally affordable and low-risk. The SunRISE concept involves utilizing commercial access to space, in which the SunRISE spacecraft would be carried to their target orbit as a secondary payload in conjunction with a larger host spacecraft intended for GEO. The Phase A study on the SunRISE mission concept was completed in July 2018. This paper presents a summary of the concept study. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
      • 03.0410 The Data Processing Pipeline and Science Analysis of the Sun Radio Interferometer Space Experiment Alexander Hegedus (University of Michigan), Justin Kasper (University of Michigan), Ward Manchester (University of Michigan), Andrew Romero Wolf (Jet Propulsion Laboratory), Joseph Lazio (Jet Propulsion Laboratory) Presentation: Alexander Hegedus - Thursday, March 7th, 11:00 AM - Cheyenne
        The Earth’s Ionosphere limits radio measurements on its surface, blocking out any radiation below 10 MHz. Valuable insight into many astrophysical processes could be gained by having a radio interferometer in space to image the low frequency window, which has never been achieved. One application for such a system is observing type II bursts that track solar energetic particle acceleration occurring at Coronal Mass Ejection (CME)-driven shocks. In this work we create a data processing pipeline for the pathfinder mission SunRISE, a 6 CubeSat interferometer to circle the Earth in a GEO graveyard orbit, and evaluate its performance in localizing type II bursts with a simulated CME. Traditional radio astronomy software is hard coded to assume an Earth based array. To circumvent this, we manually calculate the antenna separations and insert them along with the simulated visibilities into a CASA MS file for analysis. We model the response of different array configurations over the 25 hour orbit, combining an ephemeris for the Sun with simulated recovered positions of spacecraft using GPS localization to get the array geometry into the correct frame for processing. We include realistic thermal noise dominated by the galactic background at these low frequencies, as well as new sources of phase noise from positional uncertainty of each spacecraft. To create realistic virtual type II input data, we employ a 2-temperature MHD simulation of the May 13th 2005 CME event, and superimpose realistic radio emission models on the CME-driven shock front, and propagate the signal through the simulated array. Data cuts based on different plasma parameter thresholds (e.g. de Hoffman-Teller velocity and angle between shock normal and the upstream magnetic field) are tested to get the best match to the true recorded emission. We take into account sources of angular scattering of the emission such as coronal turbulence. We test simulated trajectories of SunRISE and image what the array recovers, comparing it to the virtual input, finding that SunRISE can resolve the source of type II emission to within its prescribed goal of 1/3 the CME width.
      • 03.0411 Development of an Ultra-Wideband Receiver Package for the Next Generation Very Large Array Jose Velazco (Jet Propulsion Laboratory) Presentation: Jose Velazco - Thursday, March 7th, 11:25 AM - Cheyenne
        The next-generation Very Large Array (ngVLA) is a concept for a radio astronomical interferometric array that will provide large improvements in sensitivity and angular resolution over existing telescopes such as the Jansky Very Large Array (JVLA) and the Atacama Large Millimeter/submillimeter Array (ALMA). The ngVLA will operate in the 1.2 to 116 GHz frequency range and its design is aimed at reducing operational and maintenance costs. The concept includes about 214 18-m reflector antennas and baselines up to 1000 km with a dense core on few-km scales for high surface brightness imaging, centered at the current JVLA site in New Mexico. It is envisioned that the ngVLA will employ several receivers to cover the 1.2 to 116 GHz frequency range. At the Jet Propulsion Laboratory, we have implemented a single wideband receiver package that could cover the 8 to 48 GHz frequency range of the ngVLA. The current JVLA covers this frequency range employing five distinct receiver packages. We estimate that reducing the number of receiving systems required to cover the full frequency range should reduce operating costs. The receiver package we developed consists of a quad-ridge feed horn, low-noise amplifiers (LNA), and a down-converter to analog intermediate frequencies. Both the feedhorn and the LNA are cryogenically cooled. In order to simplify assembly and operating costs, we pursued a simple and compact receiver package design. Key features of this design are a 6:1 frequency ratio, 10-20 dB gain, a quad-ridge feedhorn with dielectric loading and a compact cryogenic receiver with a noise temperature of no more than 30 K at the low end of the band. We pursued two wideband 8–48 GHz LNA MMIC designs, the first using 70-nm gallium arsenide metamorphic high-electron-mobility transistors (HEMT), and the second using 35-nm indium phosphide HEMTs. The down-converter stage translates the 8-48 GHz input range to an intermediate frequency range of 0-8 GHz with ~20 dB sideband rejection between the upper and lower sideband IF outputs. In this paper, we will report the results obtained with the entire 8-48 GHz receiver package, including the feed, LNA and down-converter, in terms of measured gain and noise temperature.
      • 03.0412 A Common Platform for DSN Receiver Development Andre Jongeling (Jet Propulsion Laboratory) Presentation: Andre Jongeling - Thursday, March 7th, 11:50 AM - Cheyenne
        NASA's Deep Space Network is currently updating a number of subsystems within the Signal Processing Centers at its Deep Space Communication Complexes in order to modernize aging equipment in the downlink receivers for telemetry, tracking, radio science, and radio astronomy. To reduce development costs and increase commonality among these traditionally custom-built receivers, the implementation has developed a flexible architecture built primarily around commercial off-the-shelf hardware compliant with the Micro Telecommunications Computing Architecture (MicroTCA) specification and commercial high speed 10Gbit ethernet switches. Custom firmware and software are being developed to perform the required signal processing functions needed to replace the legacy systems in a phased implementation approach that establishes a new digital intermediate frequency signal distribution system first, followed by implementations of various receiver functions as dictated by need. The first of these new receivers, the Open Loop Receiver (OLR), will come online in the Fall of 2018. A description of the new architecture, referred to as the “Common Platform”, will be provided, followed by an overview of the phased implementation approach and initial OLR performance results.
    • 03.05 Miniaturized RF/Microwave Technologies Enabling Small Satellite and UAV Systems Dimitris Anagnostou (Heriot Watt University) & James Lyke (Space Vehicles Directorate)
      • 03.0502 Propagation Analysis in Support of Wireless Spacecraft Capability Yu Ming Yang (NASA Jet Propulsion Lab), Norman Lay (Jet Propulsion Laboratory), Daniel Cho (NASA Jet Propulsion Lab), Ryan Rogalin (NASA Jet Propulsion Lab), Clayton Okino (Jet Propulsion Laboratory), Arby Argueta () Presentation: Yu Ming Yang - Thursday, March 7th, 04:30 PM - Elbow 1
        Wireless technologies have been widely applied in many communication systems on Earth to enhance the transmission data rate, improve the receiving signal-to-noise ratio (SNR), and increase the channel capacity. For flight mission and spacecraft design, reliable wireless systems will expand the scientific applications and research opportunities of planetary exploration. To retire key risks in the use of wireless technologies in spacecraft design and space applications, in this research we analyze their capability and propagation in test and operational environments, including a thermal chamber and on board a rover at the JPL Mars Yard. The software-based wireless system consists of a transmitting antenna, two receiving antennas, and a software-defined radio, which enables transmission and reception of multiple frequency-band wireless signals with different modulations and a flexible data rate. Here, we summarize the signal propagation analysis of different test and operational scenarios. The propagation analysis demonstrates that the design of two receiving antennas provides about 20 dB improvement compared to the single-antenna result. Additionally, the chamber test result shows significant multipath effects observed in the data collected in two scenarios: 1) the transmitting and receiving antennas are inside the chamber, and 2) the inside-chamber receiving antennas receive signals from an outside-chamber transmitter. The statistical analysis of the Mars Yard test indicates that the signal power has a relatively small standard deviation for the scenario in which the transmitting and receiving antennas are aligned along the line-of-sight direction at a distance. The standard deviations increase for other scenarios, in which the transmitter is set on the rover's wheel and the receiving antennas are on the top of the rover, probably due to multipath effects. This propagation analysis will be beneficial to the design of reliable wireless systems toward the development of spacecraft wireless networks in support of NASA's future flight missions (such as planetary aerobots, rovers, and spacecraft).
      • 03.0503 Additive Manufacturing of Metal-Insulator-Metal (MIM) Capacitors on Flexible Substrate Abu Md Numan Al Mobin (South Dakota School of Mines and Technology), Jacob Petersen (South Dakota School of Mines and Technology), Mingrui Liu (South Dakota School of Mines and Technology), William Cross (South Dakota School of Mines and Technology), Jon Kellar (SD School of Mines and Technology), Grant Crawford (South Dakota School of Mines and Technology), Jennifer Jordan (NASA - Glenn Research Center), George Ponchak (NASA Glenn Research Center) Presentation: Abu Md Numan Al Mobin - Thursday, March 7th, 04:55 PM - Elbow 1
        This study aims to demonstrate the fabrication of radio-frequency metal-insulator-metal (MIM) polyimide capacitors on a flexible Kapton™ (polyimide film) substrate using a purely additive manufacturing technology. Aerosol jet printing can effectively replace the use of bulky lumped components in RF circuits such as filters, matching networks, oscillators, resonators, and mixers. Two capacitor geometries with areas of 0.25 and 0.5625 mm² have been printed. Optical profilometry of the printed metal capacitor plates and the polyimide (insulation) film shows average thicknesses of 2.34 ± 0.22 μm and 4.05 ± 0.44 μm, respectively. The measured capacitances for the two sets of capacitors are 1.746 ± 0.231 pF and 2.513 ± 0.133 pF, equivalent to capacitance tolerances of 10% and 5%, respectively. The S-parameters were measured with a vector network analyzer up to a frequency of 50 GHz, and a lumped circuit model is extracted and presented on the Smith chart using Advanced Design System™. Moreover, the results of normalized measured capacitance and conductance as a function of temperature, adhesion, temperature (100 °C/ice bath) cycling, and the maximum operating voltage of the capacitor are presented.
      • 03.0505 Solar and Lunar Calibration for Miniaturized Microwave Radiometers Angela Crews (Massachusetts Institute of Technology), William Blackwell (MIT Lincoln Laboratory), Kerri Cahoy (Massachusetts Institute of Technology), Robert Leslie (), Michael Grant (NASA - Langley Research Center) Presentation: Angela Crews - Thursday, March 7th, 05:20 PM - Elbow 1
        Miniaturized microwave radiometers deployed on nanosatellites in Low Earth Orbit (LEO) are now demonstrating the ability to provide science-quality weather measurements, such as the 3U Micro-sized Microwave Atmospheric Satellite-2A (MicroMAS-2A). The goal of having cost-effective miniature instruments distributed in LEO is to field constellations and improve temporal and geospatial coverage. The Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of Smallsats (TROPICS) mission is a constellation of six 3U CubeSats, based on MicroMAS-2A, scheduled to launch in 2020. Each CubeSat hosts a scanning 12-channel passive microwave radiometer. TROPICS will improve temporal resolution to less than 60 minutes compared to larger satellites in polar orbit, such as NOAA-20, which hosts the Advanced Technology Microwave Sounder (ATMS) and has a revisit rate of 7.6 hours. [1] The improved refresh rate will provide high-value observations of inner-core conditions for tropical cyclones. [2] In order to use miniaturized microwave radiometers on small satellites such as MicroMAS-2A and TROPICS operationally, new approaches to calibration are required to achieve state-of-the-art performance. For example, the ATMS instrument has a measured noise equivalent delta temperature (NeDT) at 300 K of less than 1.95 K. [3] Calibration on nanosatellite platforms presents new challenges, as standard blackbody targets are too bulky to fit on CubeSats. Instead, internal noise diodes are used for calibration on CubeSats, with long-term drifts measured on the order of 0.2% to 3.0% (approximately 0.4 K to 6.0 K). [4] Blackbody calibration targets such as those used on ATMS have 0.14 K error or better for warm calibration. [3] In order to provide state-of-the-art calibration for CubeSats, methods must be developed to track and correct noise diode drift. We develop a new way to calibrate CubeSat constellations, such as TROPICS, by incorporating frequent and periodic solar and lunar intrusions as an additional source of information to counter noise diode drift. These solar and lunar intrusions also occur for existing satellites hosting microwave radiometers in polar orbits, but occur much less frequently than for the scanning payload on the TROPICS constellation, and are typically treated as an observational and calibration limiting constraint. The higher occurrence rate of intrusions motivates the novel idea of using the intrusions to support calibration. An algorithm is developed to compare expected effective brightness temperatures from solar and lunar measurements to actual measured brightness temperatures, and a detailed error budget is developed to characterize the effectiveness of the solar and lunar calibration procedure. We test the algorithm using actual sun and moon measurements taken by MicroMAS-2A and ATMS, and present initial results.
      • 03.0506 A Novel Reconfigurable GaN Based Fully Solid-State Microwave Power Module Rainee Simons (NASA - Headquarters) Presentation: Rainee Simons - Thursday, March 7th, 09:00 PM - Elbow 1
        Historically, the term microwave power module (MPM) is associated with a small, fully integrated, self-contained RF amplifier that combines both solid-state and microwave vacuum electronics technologies. In this paper, we present the research and development of a novel fully solid-state microwave power module (SSMPM), which is distinctly different from the above MPMs. The SSMPM advances the state of the art in spacecraft transmitters. A typical payload on an Earth and planetary exploration spacecraft includes an S-band system for telemetry, tracking, and command (TT&C) and an X-band or higher frequency system for science instruments and telecommunications. The role of the TT&C system is receiving commands and downlinking spacecraft housekeeping data. Typical science instruments are scatterometers, radiometers, and synthetic aperture radar imagers. The role of the telecommunication system is to downlink science data acquired by these instruments. The current state of practice uses two separate S-band and X-band amplifiers in each of the above systems. However, due to the push for developing small satellites with enhanced system capabilities/performance at lower cost, it is advantageous to develop a single wideband, reconfigurable, high-power, high-efficiency SSMPM that can operate at multiple frequency bands depending on the need at any given time. Innovations in compound semiconductor materials, devices, and circuits to increase the functionality and reconfigurability of RF systems are reported in several papers in the literature. In this paper, we present as a proof-of-concept (POC) the design, integration, and performance of a novel reconfigurable GaN-based fully solid-state MPM. The module synergistically integrates diplexers, pre-amplifiers, multistage medium power amplifiers (MPAs), SPDT switches, and CW/pulsed high power amplifiers (HPAs) with a voltage sequencer, a DC blanking controller, and a low voltage electronic power conditioner. The POC SSMPM operates at both S-band and X-band to serve multiple roles. The SSMPM can be reconfigured to deliver a Psat of 39 dBm (8 W CW) at S-band, a Psat of 46 dBm (40 W CW) at X-band, and a Psat of >50 dBm (>100 W pulsed) at X-band. Lastly, our link budget calculations indicate that the SSMPM with Psat = 40 W, when coupled to a 10 cm X-band transmit antenna on a low Earth orbiting (900 km) satellite, can close a 1 Gbps (QPSK) data downlink to a 1 m receive antenna on the ground with 3 dB margin.
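        As an illustration of the link-budget claim in the closing sentence above, the following minimal Python sketch assembles an X-band downlink budget from standard terms (EIRP, free-space path loss, receive system noise). All specific values here (8.4 GHz carrier, 60% antenna efficiencies, 150 K system noise temperature, a 900 km zenith slant range) are illustrative assumptions and not the paper's numbers.

        import math

        def dish_gain_dBi(diameter_m, freq_hz, efficiency=0.6):
            # Parabolic-dish gain with an assumed aperture efficiency.
            lam = 3e8 / freq_hz
            return 10 * math.log10(efficiency * (math.pi * diameter_m / lam) ** 2)

        def fspl_dB(range_m, freq_hz):
            # Free-space path loss.
            lam = 3e8 / freq_hz
            return 20 * math.log10(4 * math.pi * range_m / lam)

        f = 8.4e9                        # assumed X-band carrier
        Pt_dBW = 10 * math.log10(40.0)   # 40 W saturated output power
        Gt = dish_gain_dBi(0.10, f)      # 10 cm transmit antenna
        Gr = dish_gain_dBi(1.00, f)      # 1 m ground receive antenna
        Ls = fspl_dB(900e3, f)           # 900 km slant range (zenith pass assumed)
        Ts_dBK = 10 * math.log10(150.0)  # assumed receive system noise temperature
        k_dB = -228.6                    # Boltzmann constant, dBW/(K*Hz)
        Rb_dB = 10 * math.log10(1e9)     # 1 Gbps data rate

        EbN0 = Pt_dBW + Gt + Gr - Ls - Ts_dBK - k_dB - Rb_dB
        print(f"Eb/N0 = {EbN0:.1f} dB")

        Comparing the resulting Eb/N0 against the threshold of the chosen QPSK coding scheme, together with pointing and implementation losses, yields the link margin.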
  • 4 Communication & Navigation Systems & Technologies Phil Dafesh (Aerospace Corporation) & Shirley Tseng (Tseng LLC)
    • 04.01 Evolving Space Communication Architectures Shervin Shambayati (SSL)
      • 04.0101 Navigation Tracking with Multiple Baselines Part I: High-Level Theory and System Concepts Kar Ming Cheung (Jet Propulsion Laboratory), Charles Lee (Jet Propulsion Laboratory) Presentation: Kar Ming Cheung - Thursday, March 7th, 04:30 PM - Amphitheatre
        Delta Differential One-Way Ranging (DDOR) and Same Beam Interferometry (SBI) are deep space tracking techniques that use two widely separated ground antennas, known as a baseline, to simultaneously track a transmitting spacecraft and measure the time difference between the signals arriving at the two stations. Errors are introduced into the delay measurements when the radio waves pass through the solar plasma and the Earth’s atmosphere, and also due to clock bias and clock instability at the ground stations. These errors can be eliminated or calibrated by tracking a quasar in the angular vicinity of the spacecraft (DDOR), or by tracking another close-by spacecraft whose trajectory/orbit is accurately known (SBI). Both DDOR and SBI use this double-differencing of signal arrival times to eliminate the aforementioned error sources and to generate highly accurate angular measurements with respect to the baseline. The three Deep Space Network (DSN) sites cover three approximately equally spaced longitudes to provide near-continuous coverage of deep space. A spacecraft occasionally sees two DSN sites simultaneously, but never three. The DSN’s current DDOR and SBI techniques are based on one baseline of two sites, but recent additions of non-DSN deep space antennas and increased cross-support collaboration between space agencies allow a spacecraft to be seen by more than one baseline simultaneously. In this paper, we consider simultaneous SBI over two baselines that share one common ground station. We show that, under certain conditions, a precise pointing vector between the common ground station and the spacecraft can be computed using simultaneous DDOR measurements from the two baselines. When there is another spacecraft in the vicinity of the first spacecraft, a precise pointing vector to the second spacecraft can be derived from the simultaneous SBI measurements, and the precise angular distance between the two spacecraft can also be computed. We expect these new data types could enhance ground antenna pointing, as well as deep space spacecraft trajectory estimation and orbit determination. This technique can also have near-Earth applications. We describe a system concept to detect and locate dead and non-cooperative spacecraft in Geostationary Orbit (GEO). This can be done by placing a “reference” spacecraft into an eccentric geosynchronous orbit over a region of interest (e.g., above North America). By adjusting the orbit, the “reference” spacecraft can sweep back and forth through the sky in the vicinity of the GEO belt over the region. In this way, the reference spacecraft can be close to any “static” GEO targets along its path. Using a multi-static radar approach, the ground transmitting radar illuminates both the reference and target spacecraft, and the ground receiving radars measure the different time delays of signal arrival. These time delays (double-differenced) can then be used to compute the precise relative position of the target spacecraft with respect to the reference spacecraft, whose position can be accurately estimated using weak Global Positioning System (GPS) signals.
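        As an idealized schematic of the double-differencing described above (an illustration, not the authors' formulation), the geometric delay observed on a baseline \vec{B} to a source in direction \hat{s}, and the resulting Delta-DOR observable, can be written as

        \tau_{\mathrm{sc}} = \frac{\vec{B}\cdot\hat{s}_{\mathrm{sc}}}{c} + \epsilon_{\mathrm{media}} + \epsilon_{\mathrm{clock}}, \qquad
        \tau_{\mathrm{q}} = \frac{\vec{B}\cdot\hat{s}_{\mathrm{q}}}{c} + \epsilon_{\mathrm{media}} + \epsilon_{\mathrm{clock}}, \qquad
        \Delta\tau = \tau_{\mathrm{sc}} - \tau_{\mathrm{q}} \approx \frac{\vec{B}}{c}\cdot\left(\hat{s}_{\mathrm{sc}} - \hat{s}_{\mathrm{q}}\right),

        so the media and clock terms common to both sources cancel, leaving an observable that constrains the spacecraft direction relative to the well-known quasar direction along the baseline.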
      • 04.0104 Telecommunication System Architecture for Low Earth Orbit Nano Satellites Mission Support Kavya Doddamane (PES University) Presentation: Kavya Doddamane - Thursday, March 7th, 04:55 PM - Amphitheatre
        PES University, a premier technical institution in the state, encouraged students to develop a low-cost nanosatellite, PISAT, to provide an opportunity to gain hands-on experience in ‘Satellite Technology’ and to establish a full-fledged ground station facility that creates ample scope for students to experiment with satellites and operations. Nanosatellites are small satellites weighing between 1 kg and 10 kg. They are low cost and provide unique benefits compared to traditional satellites. One of the major concerns in developing a nanosatellite is the design of a remote communication framework to suit the particular mission requirements. Designing a secure, high-speed data transfer communication framework over a harsh remote environment for a nanosatellite in Low Earth Orbit poses many difficult and challenging problems. The critical design constraints in a student satellite are satellite dynamics, communication resource utilization, look angle geometry, attenuation, interference, small size, and reliable communication. These design constraints must be addressed during the design of an optimum remote communication framework. As part of this effort, PISAT, a student nanosatellite, was designed and developed at PES University. The VHF/UHF frequency bands conventionally used for student nanosatellites are limited in data rate, thereby restricting the payload, and are severely affected by high noise and interference. Hence, the PISAT satellite is configured in the standard S-band for TT&C support. PISAT is a three-axis-stabilized imaging nanosatellite weighing 5 kg, with a cuboid structure and an S-band RF communication system. An earth station is a vital element in a satellite communication network: a satellite has to be monitored and controlled from an earth station, in addition to receiving payload data for further processing. A fully fledged ground station has been commissioned for TT&C support on the University campus. PISAT was successfully launched into the intended polar sun-synchronous orbit at 670 km altitude on 26 September 2016 by ISRO using the PSLV-C35 rocket. PISAT was successfully tracked from PSCF during visibility periods, and communication was established with PISAT.
      • 04.0105 RESINATE – an RF and Optical Testbed Craig Kief (COSMIAC at UNM), James Lyke (Space Vehicles Directorate), Don Fronterhouse (PnP Innovations, Inc), Matthew Hannon (COSIMAC University of New Mexico), Robert Richard (Acme), Mayer Landau (), Christian Peters (Air Force Research Laboratory), Derek Buckley (USAF), Zachary Bergstedt (USAF) Presentation: Craig Kief - Thursday, March 7th, 05:20 PM - Amphitheatre
        The Resilient Network Advanced Testbed (RESINATE) is a robust RF and optical testbed for performing research and characterization testing on RF and optical communication systems. RESINATE is located in Albuquerque, NM and consists of a series of software-defined transceivers operating in the 4.5 GHz range, free-space laser transceivers running over a 24 km slant path, and a satellite ground station with a three-meter S-band dish. RESINATE is being used to study both RF and FSO links along with innovative disruption-tolerant networking approaches integrating space and ground systems.
    • 04.02 Communication Protocols and Services for Space Networks Shervin Shambayati (SSL)
      • 04.0201 A Delay Tolerant Networking-based Approach to a High Data Rate Architecture for Spacecraft Alan Hylton (NASA), Daniel Raible (NASA Glenn Research Center), Gilbert Clark (Ohio University) Presentation: Alan Hylton - Sunday, March 3rd, 09:25 PM - Amphitheatre
        Historically, an apparent asymmetry in spacecraft has been an internal bus whose capabilities greatly outweigh the communications system’s. With the advent of optical communications platforms and improved RF systems, this asymmetry has reversed. There are two primary issues at hand. At the satellite level, there are examples where the communications link goes almost completely unutilized due to the mismatch between the radio and the bus. At the overall systems level, the infrastructure is not yet ready to handle data rates increased by orders of magnitude. A facet of the strategy to address these limitations is the High Data Rate Architecture (HiDRA) project led out of NASA’s Space Communications and Navigation (SCaN) program. With an overall goal of getting more data to the ground, HiDRA’s approach includes a hardware-accelerated Delay Tolerant Networking (DTN) implementation (RFC 4838, RFC 5050), with the requirement to support emerging technologies that could operate at 200+ Gbps. HiDRA describes a general packet-based networking capability for spacecraft in a wide variety of scenarios. This capability provides assets with greatly improved data handling ability while simultaneously reducing the capital and operational costs of developing and flying future space systems. This paper characterizes modern satellite design and discusses the need for better overall throughput. Then the approach to DTN is explained in detail. This section begins with a discussion of HiDRA’s evolution, which now includes Software Defined Networking (SDN). The design calls for DTN to separate the data plane from the control plane, and several hardware approaches are explained, including various FPGA platforms and Open Compute Project switches. This section also discusses how HiDRA is used to schedule data flow, and experimental work with neighbor discovery protocols. The DTN section concludes with a note on interoperability testing plans. The next section describes how HiDRA might operate in near-Earth and deep-space settings, and how environmental considerations (for example, radiation and latency) influence HiDRA’s configuration and operation. Finally, there is a description of an International Space Station (ISS) implementation. The ISS instantiation should provide a data path for the ISS while also providing the project with a significant stepping stone towards future optimizations, refinements, and other work, and enabling advancement through the technology readiness levels (TRL).
    • 04.04 Relay Communications for Space Exploration David Israel (NASA - Goddard Space Flight Center) & Charles Edwards (Jet Propulsion Laboratory)
      • 04.0401 NASA’s Operational Optical Communications Relay Betsy Park (NASA - Goddard Space Flight Center) Presentation: Betsy Park - Monday, March 4th, 09:00 PM - Amphitheatre
        The NASA Space Communications and Navigation (SCaN) program intends to create an optical communications network to enhance the existing radio frequency network. NASA is currently planning for a geosynchronous-orbiting optical communications relay node to be commissioned in 2025. Evolving from the Laser Communications Relay Demonstration (LCRD), this optical communications relay node will serve as an initial element of an optical relay constellation. It will support high-rate communications with users from Earth’s surface, LEO, MEO, GEO, cislunar and Earth-Sun L1/L2 distances, as well as optical crosslinks between nodes to minimize ground station constraints. Goddard Space Flight Center has established a flight project to develop and fly this operational optical communications relay node. The optical communications network concept consists of a multi-relay satellite system to provide global coverage, with Goddard charged to build an operational optical relay including the payload and the spacecraft with a Ka-band system. The optical relay will provide up to 10 Gbps relay services to users and 100 Gbps relay crosslinks and space-to-ground services. In addition to the relay node, Goddard will develop optical user terminals, which allow user platforms to communicate with the relay system. This paper describes NASA’s plan for the optical communications relay.
      • 04.0402 MSL Relay Coordination and Tactical Planning in the Era of InSight, MAVEN, and TGO Rachael Collins (NASA Jet Propulsion Lab), Pegah Pashai (Jet Propulsion Laboratory) Presentation: Rachael Collins - Monday, March 4th, 09:25 PM - Amphitheatre
        This study examines an approach for optimizing the scheduling of regular relay communications between the Mars Science Laboratory (MSL) rover and the non-sun-synchronous Mars orbiters MAVEN and TGO, as well as the impacts of the approaching InSight landing on MSL relay and tactical planning. Rover operations require knowledge of recently executed activities, termed decisional data, in order to inform tactical activity planning. As a result, timely and routine data return is critical for nominal rover operations. Mars orbiters are used as relay assets to achieve such timeliness. They also provide greater overall rover data throughput given their larger data transfer capacity between Mars and Earth. Relay opportunities and their performance are thus tightly coupled to MSL’s operations efficiency and science return. With InSight landing only 600 kilometers away and at the same longitude as MSL, orbiter view periods will be shared between the missions, resulting in fewer relay opportunities for MSL. The introduction of MAVEN and TGO as relay assets helps to alleviate this, but the orbit geometries of these orbiters introduce their own challenges. Unlike the sun-synchronous orbiters MRO and ODY, the timing of MAVEN and TGO overflights walks sol to sol, resulting in seasonal variations that can preclude their usability. The overflights may occur too early in the sol to enable science activities or too late in the sol to be decisional for the subsequent planning cycle. Moreover, the highly elliptical orbit of MAVEN results in much longer view periods as well as intervals of lower or higher data volume return. With the introduction of InSight, MAVEN, and TGO, the MSL mission undertook a design effort to define new overflight selection criteria and identify the impact on operational efficiency. Instead of selecting all usable relay opportunities, as was the case with just MRO and ODY, this new paradigm requires deconflicting and down-selecting from available overflights. The overflight selection algorithm presented in this study selects based on key overflight metrics such as timing, the predicted data volume return, and the latency between the relay and data arrival to Earth. The relative priority of each of these metrics is scenario-specific; thus, the algorithm is flexible and configurable as mission priorities evolve. Additionally, operational constraints and considerations such as human factors are applied. The resulting tactical planning timeline post-InSight landing suggests comparable operational efficiency to the pre-InSight era but yields more variation in the timing of the planning shifts, adding strain on the MSL planning team.
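        The down-selection described above can be sketched as weighted scoring over candidate overflights; the following Python fragment is a hypothetical illustration only, with made-up field names, weights, and decisional-timing window rather than the MSL team's actual selection criteria.

        from dataclasses import dataclass

        @dataclass
        class Overflight:
            orbiter: str
            start_lmst_hr: float       # local mean solar time at pass start, hours
            data_volume_mbits: float   # predicted data volume returned
            latency_hr: float          # relay-to-Earth data latency
            conflicts_with_insight: bool

        def score(o, w_vol=1.0, w_lat=2.0, w_time=1.5):
            # Penalize passes assumed to be too early/late in the sol to be decisional.
            timing_penalty = 0.0 if 15.0 <= o.start_lmst_hr <= 20.0 else 1.0
            return w_vol * o.data_volume_mbits / 1000.0 - w_lat * o.latency_hr - w_time * timing_penalty

        def select(overflights, max_passes=2):
            # Deconflict with InSight first, then keep the highest-scoring passes.
            usable = [o for o in overflights if not o.conflicts_with_insight]
            return sorted(usable, key=score, reverse=True)[:max_passes]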
      • 04.0403 Proximity Link Telecommunication and Tracking Scenarios for a Potential Mars Sample Return Campaign Charles Edwards (Jet Propulsion Laboratory), Allen Farrington (Jet Propulsion Laboratory), Roy Gladden (Jet Propulsion Laboratory), Charles Lee (Jet Propulsion Laboratory), Robert Lock (Jet Propulsion Laboratory), Austin Nicholas (NASA Jet Propulsion Lab), Ryan Woolley (Jet Propulsion Laboratory), Brian Muirhead (Jet Propulsion Laboratory), Orson Sutherland (European Space Agency) Presentation: Charles Edwards - Monday, March 4th, 09:50 PM - Amphitheatre
        NASA is currently developing the Mars 2020 Rover mission, slated for launch in Jul-Aug 2020 and for arrival at Mars on Feb 18, 2021. In addition to its planned science investigations, the Mars 2020 Rover will also acquire and cache a set of scientifically selected samples, for potential retrieval and return to Earth, enabling in-depth sample analysis in terrestrial laboratories. In April 2018, NASA and ESA signed a Statement of Intent, establishing a plan for the two agencies to explore the follow-on missions that could return samples to Earth. In this plan, NASA would lead a Sample Retrieval Lander (SRL) mission, carrying a Sample Fetch Rover (SFR) to retrieve the Mars 2020 cached samples, return them to the lander, and transfer them to a Mars Ascent Vehicle (MAV) for launch into low Mars orbit; ESA would lead an Earth Return Orbiter (ERO) mission that would rendezvous with and capture the on-orbit samples, contain them, and transfer them to an Earth Entry Vehicle (EEV) which would deliver the samples to a US landing site. The operations concept for this Mars Sample Return campaign introduces a number of new and challenging mission scenarios in terms of proximity link telecommunication and tracking. At the time of arrival of the SRL, the existing relay network elements consisting of NASA’s Odyssey, Mars Reconnaissance Orbiter, and MAVEN orbiter, and ESA’s Mars Express Orbiter and ExoMars/Trace Gas Orbiter, would all be well beyond their original design lifetimes. We will summarize strategies being pursued to maximize the likelihood that one or more of these orbiters will still be operational in the time frame of SRL arrival. Surface operations would be challenged by the fact that the SRL, SFR, and Mars 2020 may all be operational in this time frame, within close proximity of each other. The on-orbit UHF relay transceivers in the existing relay network would only allow support to one surface user at a time. We will evaluate provisional communication requirements for SRL, SFR, and Mars 2020 during the surface retrieval mission, including data volume and contact frequency requirements. Launch of the MAV represents a particular new proximity link challenge. The ERO would be in position to view the launch, acquiring critical event tracking and telemetry data from the MAV for this first launch of an ascent vehicle from the surface of another planet. In the event of a MAV anomaly, these critical event data will be crucial in reconstructing the failure mode. For a successful launch, these data would serve to provide an initial orbit estimate for the Orbiting Sample canister released into low Mars orbit by the MAV. We will describe the detailed sequence of MAV launch operations, including ERO-SRL link establishment on launch day to confirm launch readiness, and ERO capture of MAV tracking and telemetry data throughout the full ascent of the MAV. We will identify any potential new capabilities for the ERO UHF relay transceiver needed to meet these needs.
    • 04.05 Space Communication Systems Roundtable : Networking the Solar System Charles Edwards (Jet Propulsion Laboratory)
      • 04.05 Space Communication Systems Roundtable : Networking the Solar System Presentation: - - Dunraven
        The roundtable will provide a forward-looking view of the development of a Solar System Internetwork - a layered architecture aimed at offering ubiquitous, high-bandwidth communication throughout the solar system in support of robotic and, ultimately, human exploration in deep space. Panelists will assess trends in physical layer capabilities, including migration to higher RF frequencies (Ka-band) and/or to optical wavelengths, as well as higher layers in the protocol stack, including networking protocols such as DTN, suited for use in long light-time applications. Based on assessment of forecasted commercial satcom trends, and building on the multi-hop relay capabilities operating today at Earth and at Mars, the roundtable will describe the evolution towards a true Solar System Internetwork in the coming decades.
    • 04.06 Innovative Space Communications and Tracking Techniques Kar Ming Cheung (Jet Propulsion Laboratory) & Alessandra Babuscia (NASA Jet Propulsion Laboratory)
      • 04.0602 Tele-Command Based Ranging for Deep-space Applications Victor Vilnrotter (Jet Propulsion Laboratory) Presentation: Victor Vilnrotter - Tuesday, March 5th, 08:30 AM - Amphitheatre
        There is current interest in developing a ranging concept for deep-space applications that does not rely on regenerated sequential sinusoids or pseudonoise (PN) codes on both the uplink and downlink legs of the communications link. These conventional ranging signals require a separate ranging channel, which expands the required bandwidth and consumes some of the available spacecraft power on the downlink. Previous papers on Telemetry Based Ranging have examined a new concept that transmits a PN code on the uplink within the signal band, and hence does not expand the bandwidth. This approach makes use of the known structure of the PN code to measure the uplink phase, which is relayed to the ground station, enabling measurement of the two-way delay and hence the total range. Here we expand further on this concept by replacing the uplink PN code with operational command sequences consisting of encoded and randomized information bits not known a priori to the spacecraft, thus requiring new techniques to determine the uplink delay to the desired accuracy. We examine several extensions of the previously studied PN code concept, including the use of demod-remod techniques on the received codewords at the spacecraft, to aid in the determination of the uplink delay that must be relayed to the ground to complete the range measurement. Performance in terms of range resolution will be determined and compared with previous techniques to evaluate the degree of improvement afforded by this new approach.
      • 04.0603 X-BAND PN DOR Signal Design and Implementation on the JPL Iris TRANSPONDER Mazen Shihabi (Jet Propulsion Laboratory) Presentation: Mazen Shihabi - Tuesday, March 5th, 08:55 AM - Amphitheatre
        This paper describes the design and implementation of the Pseudo Noise Delta Differential One-way Ranging (PN DOR) signal format on the JPL Iris software-defined radio. The spread-spectrum DOR format enables more accurate differential ranging measurements than the classical DOR tone format, and it is applicable to deep space missions that require accurate navigation or accurate angular position measurements for another purpose, such as determining the ephemeris of a planet or small body. The classical Delta-DOR technique makes time delay measurements of spacecraft and quasar signals to determine spacecraft angular position in the radio reference frame defined by the quasar coordinates. The measurement system is configured to provide common-mode error cancellation as nearly as possible. In the PN DOR mode, instead of the spacecraft modulating its downlink with a sinusoidal signal, referred to as a DOR tone as done in the classical DOR mode, the sinusoidal tone is replaced with a spread-spectrum signal. Using a spread-spectrum DOR signal instead of a DOR tone provides cancellation of effects due to phase dispersion across the channels used to record quasar signals. This error source, referred to as phase dispersion or phase ripple, is often the dominant measurement error for the classical DOR format. PN spreading improves on classical DOR accuracy because, by choosing the PN spreading code and shaping filter carefully, the spacecraft signal can be made to closely resemble the quasar signal, reducing the Delta-DOR error due to phase dispersion by 80 to 90% relative to the classical DOR approach. The article will describe the choice of a Gold code sequence, possessing good autocorrelation properties as well as excellent cross-correlation properties, that was used to spread the DOR tone, and the choice of root-raised-cosine (RRC) chip shaping to reduce the excess bandwidth of the output waveform. In addition, a digital pre-distortion filter was designed to compensate for the effects of the Iris radio analog transmit filter and to ensure that the output waveform has a flat spectrum. The paper will also describe the path taken for the FPGA implementation on the Iris radio, starting with a MATLAB floating-point simulation, Simulink HDL code generation, and an HDL simulation platform used to optimize the power distribution between the carrier and the PN-spread DDOR tone.
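        The following minimal Python/NumPy sketch shows how a short Gold code can be built from a preferred pair of m-sequences and its correlation behavior checked. The degree-5 polynomials (octal 45 and 75), code length, and relative shift are illustrative assumptions, not the sequence actually selected for the Iris implementation, and the RRC chip-shaping stage is omitted.

        import numpy as np

        def mseq(lags, degree, length, seed=None):
            # Binary m-sequence from the linear recurrence s[k] = XOR of s[k-l] for l in lags.
            s = list(seed) if seed is not None else [1] * degree
            out = []
            for _ in range(length):
                new = 0
                for l in lags:
                    new ^= s[-l]
                out.append(new)
                s = s[1:] + [new]
            return np.array(out)

        N = 31                           # code length 2^5 - 1
        a = mseq([3, 5], 5, N)           # x^5 + x^2 + 1 (octal 45), assumed
        b = mseq([1, 2, 3, 5], 5, N)     # x^5 + x^4 + x^3 + x^2 + 1 (octal 75), assumed
        gold = (a ^ np.roll(b, 7)) * 2 - 1   # one Gold-family member, mapped to +/-1 chips

        # Periodic autocorrelation: a peak of N at zero lag and bounded sidelobes elsewhere.
        acf = np.array([np.dot(gold, np.roll(gold, k)) for k in range(N)])
        print("peak:", acf[0], " max off-peak magnitude:", np.abs(acf[1:]).max())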
      • 04.0604 Omnidirectional Optical Communicator Jose Velazco (Jet Propulsion Laboratory) Presentation: Jose Velazco - Tuesday, March 5th, 09:20 AM - Amphitheatre
        We are developing an inter-satellite omnidirectional optical communicator (ISOC) that will enable cross-link communications between spacecraft at Gbps data rates over distances of up to thousands of kilometers in free space. The ISOC under development features a truncated dodecahedron geometry that can hold an array of fast photodiode detectors and gimbal-less MEMS scanning mirrors. The main goals of the ISOC development include: 1) full sky coverage, 2) Gbps data rates, and 3) the ability to maintain multiple simultaneous links. We have developed two omnidirectional communicator prototypes capable of full-duplex operation. We are using advanced, efficient, lightweight single-mode laser diodes operating at 850 nm capable of producing hundreds of milliwatts of laser radiation. We are also employing MEMS-based beam steering mirrors and fast PIN photodiodes to achieve long-range communications. The ultimate goal of the project is to achieve full-duplex operation at 1 Gbps data rates over 200 km, and slightly lower data rates at longer distances. This paper will describe the overall ISOC architecture and will present the design tradeoffs for gigabit data-rate operation. We will also present preliminary NRZ on-off keying communications results obtained using our ISOC prototypes. The ISOC is ideally suited for crosslink communications among small spacecraft, especially those forming a swarm and/or a constellation. Small spacecraft furnished with ISOC communications systems should be able to communicate at gigabit-per-second rates over long distances. This data rate enhancement can allow real-time, global science measurements and/or ultra-high-fidelity observations from tens or hundreds of Earth-orbiting satellites, or permit high-bandwidth, direct-to-Earth communications for planetary missions.
      • 04.0606 A Kriging Based Framework for Rapid Satellite-to-site Visibility Determination Xinwei Wang (Beihang University) Presentation: Xinwei Wang - Tuesday, March 5th, 09:45 AM - Amphitheatre
        Rapid satellite-to-site visibility determination is of great significance to coverage analysis of satellite constellations as well as to onboard mission planning of autonomous spacecraft. A framework for rapid visibility determination based on the Kriging interpolation technique is proposed in this paper. Kriging is an advanced geostatistical procedure that generates an estimated surface from a scattered set of points. The idea is used here to determine the rise and set times of visibility windows, with estimation errors controlled by weighting the surrounding measurement values. To further increase computational speed, an interval-shrinking preprocessing strategy is adopted by investigating the geometric relationship between the ground viewing cone and the orbit trajectory. The proposed method has a broad range of applications across all orbit types and orbit propagators. We conduct experiments using different Kriging surrogate functions. Numerical simulations indicate that the proposed framework outperforms brute-force search and other interpolation methods.
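        A minimal ordinary-kriging sketch (Python/NumPy) in the spirit of the framework described above: a handful of sparsely sampled elevation-angle values are interpolated and the rise time is bracketed from the interpolated sign change. The Gaussian covariance model, length scale, and sample values are made-up assumptions; the paper's surrogate functions and interval-shrinking step are not reproduced.

        import numpy as np

        def ordinary_kriging(x_obs, y_obs, x_new, length_scale=60.0):
            # Minimal ordinary-kriging interpolator with a Gaussian covariance model.
            def cov(a, b):
                return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)
            n = len(x_obs)
            # Kriging system: covariance matrix augmented with the unbiasedness constraint.
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = cov(x_obs, x_obs)
            A[n, n] = 0.0
            preds = []
            for xq in np.atleast_1d(x_new):
                rhs = np.ones(n + 1)
                rhs[:n] = cov(x_obs, np.array([xq]))[:, 0]
                w = np.linalg.solve(A, rhs)[:n]
                preds.append(w @ y_obs)
            return np.array(preds)

        # Interpolate a sparsely sampled elevation-angle profile (made-up values) and
        # estimate the rise time as the first sign change of the interpolated elevation.
        t = np.array([0.0, 120.0, 240.0, 360.0, 480.0, 600.0])   # seconds
        elev = np.array([-12.0, -6.0, -1.0, 4.0, 8.0, 10.0])     # degrees above horizon
        tq = np.linspace(0.0, 600.0, 601)
        eq = ordinary_kriging(t, elev, tq)
        rise = tq[np.argmax(eq >= 0.0)]
        print(f"estimated rise time ~ {rise:.0f} s")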
      • 04.0607 POINTR: Polar Orbiting INfrared Tracking Receiver Michael Taylor (Stanford University), Anjali Roychowdhury (Stanford University), Sasha Maldonado (Stanford University), Orien Zeng (Stanford University), Shi Tuck (Stanford SSI), Michal Adamkiewicz (Stanford University), Sandip Roy (Stanford University), Jake Hillard (Stanford University), Simone D'amico (Stanford University) Presentation: Michael Taylor - Tuesday, March 5th, 10:10 AM - Amphitheatre
        The Satellites Team of the Stanford Student Space Initiative (SSI) has designed and built the Polar-Orbiting Infrared Tracking Receiver (POINTR), a 1U CubeSat payload to demonstrate optical communications technology. Optical communications provides significant advantages for both inter-satellite and intra-satellite communications. Using lasers instead of radio frequencies allows for higher bandwidth communications free from increasingly severe spectrum crowding. Furthermore, because of the tighter beam divergence of optical transmitters, which concentrates more power in a smaller spot, the technology requires significantly less Size, Weight, and Power (SWaP) to achieve the same or better data rate performance than its radio counterparts. Optical communications allows higher data-rate communications over larger distances, as well as improved-accuracy and improved-precision Position, Navigation, and Timing (PNT) measurements, which are crucial to spacecraft formation flight. Lastly, the nature of optical communications provides communication paths that are almost impossible to intercept and are thus significantly more secure than radio communications. However, because of the aforementioned narrow beam divergence, it is crucial to have reliable and accurate Pointing, Acquisition, and Tracking (PAT) abilities on board the spacecraft. POINTR aims to demonstrate and validate the use of a silicon MEMS fast steering mirror (FSM) for fine pointing in CubeSat optical receivers by tracking a ground laser beacon from a 3U CubeSat in a 550 km polar Sun-synchronous orbit.
      • 04.0609 Optimizing Multiple Frequency-Shift Keying during Spacecraft Critical Events for Future Missions Shweta Dutta (JPL (Caltech)), Melissa Soriano (Jet Propulsion Laboratory) Presentation: Shweta Dutta - Tuesday, March 5th, 10:35 AM - Amphitheatre
        When attempting to land on a planetary body, perform a complex direction change, or even while simply cruising, challenges arise in spacecraft to Earth communications. If a spacecraft must perform a complicated maneuver such as orbit insertion, the spacecraft often cannot use its High Gain Antenna (HGA) for communication to another craft, or back to Earth. Additionally, if the spacecraft enters a fault state, it is possible that the antennas cannot be oriented as precisely. In these scenarios, a Low Gain Antenna (LGA) may be used to communicate limited information to know that the spacecraft is alive in the former situation, and to recover the spacecraft in the latter situation, so it is imperative to use communication methods that have a high probability of correctly obtaining a weak signal. This study will first summarize the mechanism to predict the probability of successfully detecting the carrier frequency and data tones used in Multiple Frequency-shift Keying (MFSK, or tones) by the spacecraft, and then explore the applicability of MFSK in future missions for events including but not limited to orbit insertion; deorbit, decent, and landing (DDL); and cruising. Using the search space, non-coherent integration time, fast Fourier Transform (FFT) bandwidth, and modulation index, one may compute the predicted probability of correctly detecting and tracking a frequency versus the total power to noise ratio. Tones allowed for communication with planetary landers and other spacecraft in the past, namely the Mars Exploration Rover (MER), Mars Science Laboratory (MSL), both primarily for EDL, and Juno for orbit insertion. Following the models outlined in Satorius, E., et. al., we demonstrate the probability of properly detecting a tone during JOI to be greater than 95% under worst-case expected operating conditions for the Europa Clipper mission. We also examine the optimal use of tones during other spacecraft critical events, such Europa Lander’s DDL via examination of parameter modifications to MFSK as used in Juno, MER, and MSL, and comparing how these modifications perform compared to the original technique.
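        A minimal Python/SciPy sketch of the kind of detection-probability calculation summarized above, for a single tone detected in one FFT bin of a frequency search space. The AWGN single-coherent-integration model and all numbers (Pt/N0, integration time, search-space size, false-alarm rate) are illustrative assumptions, not the models or values used in the paper (which follows Satorius et al.).

        import numpy as np
        from scipy.stats import chi2, ncx2

        def tone_detection_probability(pt_over_n0_dbhz, t_coh_s, n_bins, pfa_total=1e-3):
            # Idealized sketch: one tone confined to one FFT bin, complex AWGN, a single
            # coherent integration (no noncoherent accumulation), and a threshold set for a
            # total false-alarm rate over the whole search space.
            snr_bin = 10.0 ** (pt_over_n0_dbhz / 10.0) * t_coh_s   # post-FFT bin SNR (linear)
            pfa_bin = 1.0 - (1.0 - pfa_total) ** (1.0 / n_bins)
            gamma = chi2.ppf(1.0 - pfa_bin, df=2)                  # threshold on |X|^2 (unit noise)
            return ncx2.sf(gamma, df=2, nc=2.0 * snr_bin)          # probability of detection

        # Example with assumed numbers: Pt/N0 = 15 dB-Hz, 2 s coherent FFT, 1000-bin search.
        print(f"Pd = {tone_detection_probability(15.0, 2.0, 1000):.3f}")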
      • 04.0610 Single-Satellite Doppler Localization with Law of Cosines (LOC) Kar Ming Cheung (Jet Propulsion Laboratory), William Jun (Georgia Institute of Technology), Edgar Lightsey (Georgia Institute of Technology), Charles Lee (Jet Propulsion Laboratory) Presentation: Kar Ming Cheung - Tuesday, March 5th, 11:00 AM - Amphitheatre
        Modern localization requires multiple satellites in orbit and relies on ranging capability that may not be available in most proximity flight radios used to explore other planetary bodies such as Mars or the Moon. The key results of this paper are: 1. A novel relative positioning scheme that uses Doppler measurements and the principle of the Law of Cosines (LOC) to localize a user with as few as one orbiter. 2. The concept of “pseudo-pseudorange”, which embeds the satellite’s velocity vector error into the pseudorange expressions of the user and the reference station, thereby allowing the LOC scheme to cancel out or greatly attenuate the velocity error in the localization calculations. In this analysis, the Lunar Relay Satellite (LRS) was used as the orbiter, with the reference station and the user located near the lunar South Pole. Multiple Doppler measurements by the stationary user and the reference station at different time points from one satellite can be made over the satellite’s pass, with the measurements at each time point processed and treated as coming from a separate, faux satellite. A surface constraint was also implemented in this scheme, using knowledge of the user’s altitude as a constraint. The satellite’s ephemeris and velocity errors and the user’s and reference station’s Doppler measurement errors were modeled as Gaussian variables and embedded in Monte Carlo simulations of the scheme to investigate its sensitivity with respect to different kinds of errors. With only two Doppler measurements, LOC exhibited root-mean-square (RMS) 3D positional errors of about 22 meters in Monte Carlo simulations. With an optimal measurement window size and a larger number of measurements, the RMS error improved to under 10 meters. The algorithm was also found to be fairly resilient to satellite velocity error due to the error-mitigating effects of the LOC processing of the pseudo-pseudorange data type. A sensitivity analysis was performed to understand the effects of errors in the surface constraint, showing that overall position error increased linearly with surface constraint error. An analysis was also performed on the effect of the distance between the user and the reference station; a separation of up to 100 km led to an increase of only 10 meters in RMS 3D position error. Ultimately, the LOC scheme provides localization with a minimal navigation infrastructure that relaxes hardware requirements and uses a small number of navigation nodes (as few as one).
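        For reference, the basic single-satellite Doppler observable that such schemes difference can be sketched as follows (an idealized illustration, not the paper's pseudo-pseudorange formulation): with satellite velocity \vec{v}_s, carrier wavelength \lambda, and unit vectors \hat{u}_u and \hat{u}_r pointing from the satellite to the stationary user and to the reference station,

        f_{d,u} - f_{d,r} \approx \frac{1}{\lambda}\,\vec{v}_s\cdot\left(\hat{u}_u - \hat{u}_r\right),

        so differencing the two Doppler measurements at each time point removes error terms common to both links, and a sequence of such differences across the pass constrains the user position relative to the reference station.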
    • 04.07 Space Navigation Techniques Amir Emadzadeh (Nvidia)
      • 04.0701 Autonomous Orbital Rendezvous Using a Coordinate-Free, Nonsingular Orbit Representation Matthew Walsh (Cornell University) Presentation: Matthew Walsh - Wednesday, March 6th, 09:00 PM - Amphitheatre
        Orbital rendezvous is one of the primary requirements of spacecraft guidance, navigation, and control systems and an essential component of all spacecraft missions that include navigating to a specific orbit or location. This paper develops a general method to execute orbital rendezvous and verifies it in simulations. The method uses a coordinate-free representation of the orbit as a basis for discrete feedback control and can be used autonomously to rendezvous with arbitrary target orbits. Simulation results permit a comparison of ∆V to that of other techniques. The primary benefits of this method are that it can be used for arbitrary orbits, executes autonomously, and avoids the limitations of both singularities and linearized dynamics. The following elements comprise the proposed method. The coordinate-free representation separates the spacecraft state into a 5-degree-of-freedom orbit and an along-track position, which the algorithm matches separately. The 5-DoF portion of the state is represented by the angular momentum and eccentricity vectors of the orbit, along with the orbital energy. This choice of state variables offers several advantages over both position/velocity and orbital-element representations. First, these quantities do not depend on the coordinate representation, are nonsingular, and are stationary in the dynamic steady state of no applied forces. Further, their dynamics are governed by simple equations, leading to a simple transition matrix. Rendezvous consists of two phases: orbit matching and phasing. A metric that represents the relative distance between the spacecraft and the target orbit determines when the orbit-matching phase is complete. Then a phasing maneuver slightly changes the relative orbit such that the follower catches up to or falls back to the leader’s along-track position. Once within a short distance of the target position, the spacecraft enters a close-proximity mode that utilizes a previously developed state transition matrix of relative motion on elliptical orbits for higher precision. Historically, the maneuvers to execute orbital rendezvous have been calculated on the ground and uploaded to the spacecraft. This approach requires not only a dedicated ground station and reliable communication but also calculations that must be manually computed each time a spacecraft’s orbit changes. For inexpensive satellites and/or spacecraft in a large constellation, these steps can become impractical due to limited communication capability or ground-station access. The efficient, autonomous method for executing orbital rendezvous developed in this paper therefore improves the feasibility and robustness of small spacecraft missions.
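        For reference, the three coordinate-free quantities named above are the standard two-body integrals (textbook relations, not a reproduction of the paper's derivation):

        \vec{h} = \vec{r}\times\vec{v}, \qquad
        \vec{e} = \frac{\vec{v}\times\vec{h}}{\mu} - \frac{\vec{r}}{\lVert\vec{r}\rVert}, \qquad
        \mathcal{E} = \frac{\lVert\vec{v}\rVert^{2}}{2} - \frac{\mu}{\lVert\vec{r}\rVert},

        where \mu is the gravitational parameter. Together, \vec{h} and \vec{e} fix the orbit plane, shape, and orientation (five independent degrees of freedom, since \vec{h}\cdot\vec{e} = 0), and all three quantities are constant when no forces are applied.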
      • 04.0702 A Static Estimation Method of Autonomous Navigation of Relativistic Spacecraft Doga Yucalan (Cornell University), Mason Peck (Cornell University) Presentation: Doga Yucalan - Wednesday, March 6th, 09:25 PM - Amphitheatre
        In-situ exploration of deep space is an inevitable next step; however, even if suitable propulsion technology can be developed, state-of-the-art navigation techniques are not up to the task. They rely on reference-frame-independent physical laws and treat relativistic effects as perturbations to the classical theories of motion and gravity. Moreover, most methods are Earth-based and would lack the bandwidth to navigate a spacecraft through the interstellar medium. Given the unprecedented uncertainties that interstellar trajectories will introduce, autonomous navigation is a virtual necessity. This paper describes an innovative method of autonomous navigation for spacecraft travelling at relativistic speed, which uses Einstein’s special theory of relativity as a baseline dynamics model. The method assumes that the spacecraft has access to a standard star catalog describing the relative positions, velocities, and colors of many stars and can detect relative directions and apparent colors of the corresponding stars with on-board hardware that resembles a contemporary star tracker. By relating the motion of the spacecraft to changes in observed quantities, the proposed algorithm statically estimates the position and velocity vectors of the spacecraft. The method represents the nonlinearities of the system as multi-step linearities and is more general than the standard well-studied linear estimator, as it makes no assumptions regarding estimated variables. The paper includes the results of simulations of this nonlinear approach and compares them to a simpler linear estimator. It does so for the case of a spacecraft with a generic star tracker, travelling from Earth to Proxima Centauri at 20 percent of the speed of light. Assuming the hardware can distinguish the relativistic effects of interest here, this approach provides a universal navigation algorithm that can be used by any space vehicle travelling between any two points in the universe at any speed. This work represents a first, fundamental, and critical step in relativistic navigation.
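        For context, the observed quantities that such a method relies on change with velocity according to the standard special-relativistic aberration and Doppler relations, shown here in one common convention as an illustration rather than as the paper's estimator equations: for an observer moving at speed \beta c, a star whose catalog-frame direction makes angle \theta with the velocity vector and whose emitted frequency is \nu is observed at

        \cos\theta' = \frac{\cos\theta + \beta}{1 + \beta\cos\theta}, \qquad
        \nu' = \gamma\,\nu\,\left(1 + \beta\cos\theta\right), \qquad
        \gamma = \frac{1}{\sqrt{1 - \beta^{2}}},

        so star directions bunch toward the velocity vector and apparent colors shift. In principle, fitting these relations over many catalog stars constrains the velocity, while parallax-like changes in star directions constrain the position.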
      • 04.0703 Point-to-CAD 3D Registration Algorithm for Relative Navigation Using Depth-Based Maps Antonio Teran Espinoza (Massachusetts Institute of Technology), Timothy Setterfield (Jet Propulsion Laboratory, California Institute of Technology) Presentation: Antonio Teran Espinoza - Wednesday, March 6th, 09:50 PM - Amphitheatre
        This paper presents an end-to-end 3D registration algorithm for relative navigation between known objects based on a point-to-CAD iterative closest point (ICP) principle. The objective of this method is to take in a measured point cloud extracted from a depth or disparity map, such as the ones obtained from stereo cameras, time-of-flight cameras, LiDARs, or depth-from-defocus sensors, and calculate the rigid body transformation that best aligns the measured data with a corresponding 3D CAD model. By leveraging the geometric information encoded in stereolithography (STL) files, the approach addresses the computational intractability imposed by the naïve generation of dense target point clouds based solely on the target’s known surface. To this end, the proposed approach computes a bijective projection onto the known triangular mesh to obtain a target point cloud with which to apply ICP techniques for incremental alignment; the projection step is then carried out recursively until the convergence criteria are met, yielding a relative 6DOF pose between the two objects to be used within the estimation pipeline. Demonstrations of the algorithm are presented using simulated datasets; results include time complexity analyses for real-time operation cases, performance variation assessments with respect to CAD model complexity, and sensitivity analyses determining the tolerance to distinct noise levels and spurious measurements. The design and implementation of the algorithm make use of the open-source Point Cloud Library, and access to its source code is included within this work.
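        A compact point-to-point ICP sketch (Python/NumPy/SciPy) illustrating the iterate-correspond-align loop described above. This is an assumption-level simplification: the paper's point-to-CAD step projects measured points onto the STL triangle mesh, whereas this sketch substitutes a nearest-neighbor query against a pre-sampled model point cloud.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            # Least-squares rotation/translation aligning src to dst (Kabsch/SVD).
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:      # guard against reflections
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(measured, model_points, iters=30, tol=1e-6):
            # Point-to-point ICP: the point-to-CAD variant would replace the nearest-neighbor
            # query below with a projection onto the CAD triangle mesh.
            tree = cKDTree(model_points)
            R_tot, t_tot = np.eye(3), np.zeros(3)
            cur = measured.copy()
            prev_err = np.inf
            for _ in range(iters):
                d, idx = tree.query(cur)                       # correspondences
                R, t = best_rigid_transform(cur, model_points[idx])
                cur = cur @ R.T + t                            # apply incremental alignment
                R_tot, t_tot = R @ R_tot, R @ t_tot + t        # accumulate the 6DOF pose
                err = d.mean()
                if abs(prev_err - err) < tol:
                    break
                prev_err = err
            return R_tot, t_tot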
    • 04.08 Communication System Analysis & Simulation Yogi Krikorian (Aerospace Corporation)
      • 04.0801 Performance and Utilization Results for Time-Triggered Data Transfers over SpaceWire Kai Borchers (German Aerospace Center - DLR), Daniel Lüdtke (German Aerospace Center - DLR), Sergio Montenegro (University Würzburg), Frank Dannemann (German Aerospace Center - DLR) Presentation: Kai Borchers - Wednesday, March 6th, 10:35 AM - Lake/Canyon
        SpaceWire, as a serial communication technology, is widely used throughout the space domain but still falls short of providing real-time data transfers, especially if network structures contain cascaded routers. Our previous paper offered concepts to overcome this limitation by using time-triggered data transfers with a focus on decentralized time distribution. Network controllers are the key component of the introduced approach. They serve as a bridge between the attached host systems, such as traditional on-board computers, and the network itself. These network controllers exchange information to derive a system-wide clock, which is used to operate according to a shared schedule. Because there is no centralized time master, several network controllers need to agree on an initial time base during an asynchronous start-up phase before entering synchronous, schedule-based operation. The approach allows us to tolerate specific faults within the network without losing synchronous operation or the capability of performing a start-up phase. The system is designed to operate with routers developed according to the current revised SpaceWire standard, ECSS-E-ST-50-12C Rev. 1. This paper extends our previous work by evaluating different system properties. The start-up behavior, as a critical part, is analyzed to determine the time until synchronous operation can begin. This time is influenced by several parameters, such as timeouts, the number of network controllers participating in start-up, and the structure of the network. Additionally, the system-wide clock synchronization and the related precision are examined. This is done with respect to the oscillator quality; however, the frequency of the applied clock state and rate corrections must be taken into account as well. The analysis itself is performed empirically using a Universal Verification Methodology (UVM) verification environment with a flexible structure that adapts to different network topologies and schedules. The system is implemented in a hardware description language for use on a Field Programmable Gate Array (FPGA); therefore, the FPGA resource utilization and the maximum operating frequency are investigated for different target devices.
      • 04.0802 Statistical Optical Link Budget Analysis Hua Xie (Jet Propulsion Laboratory), Kar Ming Cheung (Jet Propulsion Laboratory) Presentation: Hua Xie - Wednesday, March 6th, 11:00 AM - Lake/Canyon
        Communication link analysis is a system engineering tool used to evaluate mission data return capability and develop requirements in communication system design. The traditional link analysis approach applies deterministic values for link parameters and adopts a rule-of-thumb 3 dB link margin to account for link uncertainties. This policy works for most S-band and X-band links due to their insensitivity to weather effects; however, for higher frequency links such as Ka-band, Ku-band, and optical communication links, it is unclear whether a 3 dB margin would ensure link closure. A statistical link analysis methodology was presented by Cheung et al. and provided insights on the design choices of coding scheme and link margin for reliable data delivery over high-frequency RF links (e.g., Ka-band links). In this paper, we describe work on extending this statistical analysis framework to optical communications, with a primary focus on direct-detection, Serially Concatenated Pulse-Position Modulation (SCPPM) links. We perform analysis on the relationship between the BER/FER requirement, the statistical characteristics of the operating signal and noise power, and the coded performance curves. Our analysis uses a Monte Carlo method to model the link uncertainties, taking into account the fluctuation of the signal power and noise power at the receiver and the shape of the coded performance curves. We use the CCSDS SCPPM prototype software to obtain coded performance curves under different operating conditions with varying signal power, noise power, code rates, and PPM modulation orders. Preliminary statistical models of the signal and noise are derived from the literature in this work. We mainly consider signal power fluctuations due to various unpredictable non-ideal operational effects and weather effects. The transmitter points the beam to the receiver with a quasi-static pointing error and time-varying pointing jitter, or tracking error. The tracking error causes beam wandering effects, which manifest as dynamic fading of the received signal power; atmospheric turbulence induces distortions on the laser beam wavefront and causes random fluctuations in the received signal irradiance. Noise enters the system mainly via photoelectrons generated from incident background light, e.g., sky radiance, and dark-noise electrons generated by the photodetector. We consider Sun-related sky radiance as the dominant source of background noise and will use NASA's AERONET measured data to derive statistical models for noise under varying atmospheric conditions. The effects of signal and noise power fluctuations on the link performance with different code rates and modulation orders will be analyzed under various weather and atmospheric conditions.
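        A minimal Monte Carlo sketch (Python/NumPy) in the spirit of the statistical framework described above: received optical power is sampled under made-up pointing-jitter, scintillation, and weather distributions and compared against a single assumed required-power threshold standing in for a full SCPPM coded performance curve. All distributions and numbers are illustrative assumptions, not the AERONET-derived or CCSDS-prototype models used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100_000

        nominal_prx_dbm = -52.0                                            # deterministic design point
        pointing_loss_db = rng.gamma(shape=2.0, scale=0.5, size=N)         # jitter-induced fade
        scint_db = 10 * np.log10(rng.lognormal(mean=0.0, sigma=0.15, size=N))  # turbulence fluctuation
        weather_db = rng.choice([0.0, 1.5, 6.0], size=N, p=[0.7, 0.2, 0.1])    # clear/haze/cloud states

        prx = nominal_prx_dbm - pointing_loss_db - scint_db - weather_db

        # Required power for the target FER at a given code rate and PPM order, collapsed
        # here to a single assumed threshold rather than a full coded performance curve.
        required_dbm = -55.0
        availability = np.mean(prx >= required_dbm)
        print(f"P(link closes) = {availability:.3f}, "
              f"margin at the 95th-percentile fade = {np.percentile(prx, 5) - required_dbm:.1f} dB")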
      • 04.0804 Dynamic Link Analysis and Application for a MEO Space Vehicle Gleason Chen (The Aerospace Corporation), Jack Kreng (Aerospace Corporation), Yogi Krikorian (Aerospace Corporation) Presentation: Gleason Chen - Wednesday, March 6th, 11:25 AM - Lake/Canyon
        This study performs dynamic link analysis to determine the earliest separation time of the Space Vehicle (SV) from the Launch Vehicle (LV) while meeting the SV link requirements for Telemetry, Tracking and Control (TT&C) uplink and downlink services from/to ground stations. A successful rocket launch requires adequate link coverage throughout the flight and good Radio Frequency (RF) performance. The presentation will discuss the concept of the dynamic link analysis, the SV antenna switching schedule, the recommended SV separation time, and the performance for different launch scenarios within the 24-hour launch window. Topics will include antenna patterns, launch trajectories, elevation angles, clock and cone angle geometry, and dynamic link budgets.
      • 04.0805 SSB and DSB Enabled Hybrid Waveforms for the Space-Ground Link System Dan Shen (Intelligent Fusion Technology, Inc), Khanh Pham (Air Force Research Laboratory), Genshe Chen (Intelligent Fusion Technology, Inc), Xingyu Xiang () Presentation: Dan Shen - Wednesday, March 6th, 11:50 AM - Lake/Canyon
        Spectrum sharing problems call for solutions that compress the uplink Space-Ground Link System (SGLS) spectrum and minimize interference to neighboring bands. In this paper, single-sideband (SSB) and double-sideband (DSB) enabled hybrid waveforms are proposed to reduce the bandwidth and improve the spectrum efficiency of SGLS. A hybrid modulation (HM) structure is designed to generate the double-sideband command signals and single-sideband telemetry and ranging signals. At the receiver side, Costas-loop-based carrier synchronization supports the demodulation of the hybrid waveforms. Numerical results show that the hybrid waveforms can provide better performance if the transmission power remains the same and the implementation losses associated with the hybrid modulation-demodulation processes are negligible.
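        A minimal Python/SciPy sketch of the phasing-method single-sideband generation underlying such hybrid waveforms, alongside a conventional DSB reference. The sample rate, IF carrier, and baseband tone are arbitrary stand-ins; the paper's hybrid-modulation structure and Costas-loop receiver are not reproduced.

        import numpy as np
        from scipy.signal import hilbert

        fs = 1.0e6                      # sample rate (assumed)
        fc = 100e3                      # IF carrier (assumed)
        t = np.arange(0, 10e-3, 1 / fs)
        m = np.cos(2 * np.pi * 2e3 * t)   # stand-in baseband telemetry tone

        analytic = hilbert(m)             # m(t) + j*H{m}(t)
        ssb_usb = np.real(analytic * np.exp(2j * np.pi * fc * t))   # upper-sideband signal
        dsb = m * np.cos(2 * np.pi * fc * t)                        # conventional double-sideband

        # The SSB signal occupies roughly half the RF bandwidth of the DSB reference,
        # which is the spectrum-compression effect the hybrid waveform exploits.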
    • 04.09 Wideband Communications Systems David Taggart (Self) & Claudio Sacchi (University of Trento)
      • 04.0901 A Genetic Algorithm for Joint Power and Bandwidth Allocation in Multibeam Satellite Systems Aleix Paris (Massachusetts Institute of Technology), Inigo Del Portillo (Massachusetts Institute of Technology), Bruce Cameron (Massachusetts Institute of Technology), Edward Crawley (Massachusetts Institute of Technology) Presentation: Aleix Paris - Wednesday, March 6th, 04:30 PM - Lamar/Gibbon
        Communications satellites are becoming more flexible and capable in order to make better use of on-board resources and available spectrum, and to satisfy the varying demand in the satellite broadband market. New generations of communications satellites will provide hundreds of Gbps of throughput by using advanced digital payloads, which will allow for beam-steering and beam-shaping, in addition to individually allocating power and bandwidth for each beam. Therefore, dynamic resource management (DRM) techniques for communications satellites will be crucial for operators to fully exploit the capabilities of their satellites. This paper presents a new method for joint power and bandwidth allocation in multibeam satellite systems. To that end, we first develop a multibeam satellite model that accounts for propagation effects and interference among beams. Next, we formulate the joint power and bandwidth allocation optimization problem and propose a novel algorithm to solve it. The basis of this algorithm is a genetic algorithm which is combined with repair functions to guarantee the validity of the solutions and speed up convergence. Our results show that by using our joint power and bandwidth allocation algorithm the unmet system capacity (USC) can be reduced by up to 40% (as compared to power-only allocation approaches). Furthermore, our experiments identify the variation of the demand among beams as a parameter that has a large impact on potential improvement: the higher the variation in demand among beams, the more beneficial it is to allow greater flexibility in the range of bandwidth allocations.
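An illustrative sketch of a genetic algorithm with a repair function for joint power and bandwidth allocation, in the spirit of the approach described above. The per-beam capacity model, demands, budgets, and GA hyperparameters are all placeholder assumptions; the paper's model additionally accounts for propagation effects and inter-beam interference.

```python
import numpy as np

rng = np.random.default_rng(1)

N_BEAMS, P_TOT, BW_TOT = 8, 100.0, 500.0      # illustrative beam count and resource budgets
demand = rng.uniform(5.0, 50.0, N_BEAMS)       # per-beam demand (arbitrary units)

def capacity(power, bw):
    # Toy per-beam capacity model (no interference term, unlike the paper's model).
    snr = power / (0.1 * bw + 1e-9)
    return bw * np.log2(1.0 + snr) / 100.0

def unmet_capacity(chrom):
    power, bw = chrom[:N_BEAMS], chrom[N_BEAMS:]
    return np.sum(np.maximum(demand - capacity(power, bw), 0.0))

def repair(chrom):
    # Repair function: rescale so the total power and bandwidth budgets are respected.
    chrom = np.clip(chrom, 1e-3, None)
    chrom[:N_BEAMS] *= P_TOT / chrom[:N_BEAMS].sum()
    chrom[N_BEAMS:] *= BW_TOT / chrom[N_BEAMS:].sum()
    return chrom

pop = np.array([repair(rng.uniform(0.1, 1.0, 2 * N_BEAMS)) for _ in range(40)])
for generation in range(200):
    fitness = np.array([unmet_capacity(c) for c in pop])
    parents = pop[np.argsort(fitness)[:20]]                     # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(2 * N_BEAMS) < 0.5                    # uniform crossover
        child = np.where(mask, a, b) * rng.normal(1.0, 0.05, 2 * N_BEAMS)  # multiplicative mutation
        children.append(repair(child))
    pop = np.array(children)

best = min(pop, key=unmet_capacity)
print("Unmet system capacity of best allocation:", unmet_capacity(best))
```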
      • 04.0902 A Virtualized Border Control System Based on UAVs: Design and Energy Efficiency Considerations Claudio Sacchi (University of Trento) Presentation: Claudio Sacchi - Wednesday, March 6th, 04:55 PM - Lamar/Gibbon
        European borders are hard to control in an effective and efficient way. The recent emergencies related to immigration revealed the substantial inefficiency of conventional means of border patrolling based on warships, coast guard speedboats and helicopters. A reliable technical answer to these emergency problems may come from the use of different kinds of unmanned aerial vehicles (UAVs). These flying vehicles may help improve border control. Nevertheless, such technologies require significant amounts of personnel, energy and infrastructure to properly serve border protection. In order to be really effective, UAVs should autonomously cooperate in a networked manner, collecting information from on-ground and/or water-surface sensors, exchanging data among themselves and conveying the critical information to remote border control centres. This is the main objective of the DAVOSS project (Dynamic Architectures based on UAVs Monitoring for border Security and Safety), funded by NATO in the framework of the Science for Peace and Security Programme. This paper aims at presenting the novel adaptive and virtualized aerospace network architecture proposed in DAVOSS. The leading concepts of DAVOSS are flexibility, dynamic reconfigurability, energy efficiency and broadband connection availability even in critical application scenarios. In order to improve robustness and resilience of the avionic network and to enable efficient information backhaul even in the absence of terrestrial links, advanced networking and communications technologies like Software-Defined Networking (SDN), network slicing and virtualization are introduced. System requirements, coming from potential end-users, along with real application scenarios will be carefully analyzed in order to drive the architectural design phase, whose preliminary outcomes will be shown in the paper. Preliminary results demonstrate the effectiveness of adopting virtualization techniques for the considered aerospace network architecture in terms of reduced power consumption at the drone side, with an observed tradeoff with latency.
      • 04.0903 Measurement Sensitivity of Modulation Indices in Telemetry, Tracking, and Command Systems Srinivasa Raghavan (Aerospace Corporation) Presentation: Srinivasa Raghavan - Wednesday, March 6th, 05:20 PM - Lamar/Gibbon
        Subcarrier modulation, along with direct modulation, is typically used in telemetry, tracking, and command (TT&C) systems. The Space Ground Link Subsystem is an example of a TT&C system that uses a subcarrier and direct modulation together in a phase modulator. In this system, telemetry is placed on a 1.7 MHz subcarrier, whereas ranging and commanding are modulated onto the carrier without a subcarrier. The ranging signal is a 1 mega-chip-per-second pseudorandom code that directly modulates the carrier using binary phase-shift keying (BPSK). The command signal is a 3-tone frequency-shift-keyed signal with amplitude modulation (AM). All three signals together phase-modulate (PM) the carrier before transmission. In a phase-modulated system, power in each of the services (command, telemetry and ranging) is apportioned from the total available power based on the modulation indices used. The performance of each of the services depends on the amount of power in that service. It is important to measure the modulation indices accurately to assure the required service performance. In this paper, the issues related to the measurement of the command and telemetry modulation indices are addressed, and the sensitivity of the link to measurement errors in the modulation indices is discussed.
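A worked example of how the modulation indices apportion power among the residual carrier and the services in a phase-modulated TT&C signal. The sketch assumes telemetry on a sinusoidal subcarrier and a square-wave (BPSK-like) ranging signal modulated directly on the carrier, and omits the command channel and higher-order sidebands for brevity; it is not the exact SGLS signal structure analyzed in the paper.

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind

def pm_power_split(m_tlm, m_rng):
    """Power fractions (relative to total power) of a residual-carrier PM signal.

    Assumes telemetry on a sinusoidal subcarrier (index m_tlm, rad) and a
    square-wave (BPSK-like) ranging signal modulated directly (index m_rng, rad).
    The command channel and higher-order sidebands are neglected for brevity.
    """
    carrier = jv(0, m_tlm) ** 2 * np.cos(m_rng) ** 2
    telemetry = 2 * jv(1, m_tlm) ** 2 * np.cos(m_rng) ** 2   # first-order subcarrier sidebands
    ranging = jv(0, m_tlm) ** 2 * np.sin(m_rng) ** 2
    return carrier, telemetry, ranging

# Sensitivity check: a 0.1-rad error in the telemetry index shifts power between services.
for m_t in (1.0, 1.1):
    c, t, r = pm_power_split(m_t, 0.7)
    print(f"m_tlm={m_t:.1f} rad: carrier={10*np.log10(c):.2f} dB, "
          f"telemetry={10*np.log10(t):.2f} dB, ranging={10*np.log10(r):.2f} dB "
          "(relative to total power)")
```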
      • 04.0904 Satellite SDR Gateway for M2M and IoT Applications Vlad Popescu (Transilvania University of Brasov), Cristinel Gavrila (Transilvania University of Brașov ), Marian Alexandru (Transilvania University of Brasov), Claudio Sacchi (University of Trento), Daniele Giusto (University of Cagliari) Presentation: Vlad Popescu - Wednesday, March 6th, 09:00 PM - Lamar/Gibbon
        Short-range terrestrial radio communications, in low-power and low-cost configurations, are the current enabler for IoT and M2M applications. The main drawback of such applications is the need for a gateway that enables them to communicate with the wider world. Satellite gateways have become more affordable in recent years, allowing these applications to become even more widespread, but the combined equipment and especially the access costs are still prohibitive for large-scale deployments. The equipment costs for a satellite gateway could be substantially lowered using the Software Defined Radio (SDR) concept, which can provide a high degree of flexibility in dynamically operating multiple wireless interfaces, especially when combined with the plethora of terrestrial communication standards. The design of an embedded hardware device based on SDR technologies would therefore significantly lower the overall costs of satellite communication for IoT and M2M applications. Combining these elements, the main goal of this paper is to analyze the requirements of the satellite part of an SDR-based gateway for M2M and IoT applications and to present a hardware implementation of the gateway together with its operational characteristics.
      • 04.0905 Performance and Hardware Complexity Trade-offs for Digital Transparent Processors in 5G Satcoms Vincenzo Sulli (University of L'Aquila ), Giuseppe Marini (), Fortunato Santucci (University of L'Aquila), Marco Faccio (University of L'Aquila), Graziano Battisti (Università degli Studi dell'Aquila) Presentation: Vincenzo Sulli - Wednesday, March 6th, 09:25 PM - Lamar/Gibbon
        With the fifth generation (5G) wireless technology paradigm, huge efforts have been made in recent years to incorporate satellite communications. Indeed, in this global frame satellite communications can provide a valuable resource to extend and complement terrestrial networks both in terms of throughput and global connectivity. When on-board transponders are considered, transparent satellites may be considered an appealing solution to provide backhaul connectivity to the on-ground Relay Nodes. Nevertheless, over the last decade semi-transparent transponder architectures have received major attention. This kind of architecture has been emerging as a viable alternative to provide broadband connectivity in modern network topologies with large user populations and a variety of requirements in terms of bandwidth and QoS, while keeping the payload complexity affordable. In this frame, significant on-board digital processing is involved, which calls for careful system modeling and accurate digital hardware design to achieve feasible trade-offs between hardware efficiency and overall link-budget performance. In this regard, an equivalent noise model for the hybrid analog-digital receiving chain that composes the satellite transparent transponder has been proposed in our recent works. The proposed analytical method is applied to a specific DTP (Digital Transparent Processor) architecture, and the validation is discussed by comparing results with those obtained via Monte Carlo simulation. A detailed description of the architecture of each block within the DTP chain has been provided, also including a hardware complexity analysis of the basic building blocks and of the whole DTP chain. Numerical examples that illustrate the application of the developed framework and the related design methodology in scenarios of practical interest have been presented, with a first hardware validation of the design choices in the DTP chain. In the present paper the theoretical framework is extended to take into account the actual hardware resource utilization and the related power consumption for a given FPGA technology. The optimization problems based on the proposed models are addressed with specific reference to 5G scenarios, focusing on optimization of very low-complexity, low-cost payloads for small satellites.
      • 04.0907 UWB Air-to-Ground Propagation Channel Measurements and Modeling Using UAVs Wahab Ali Gulzar Khawaja (North Carolina State University), Ozgur Ozdemir (NCSU), Fatih Erden (North Carolina State University), Ismail Guvenc (), David Matolak (University of South Carolina) Presentation: Wahab Ali Gulzar Khawaja - Wednesday, March 6th, 09:50 PM - Lamar/Gibbon
        This paper presents an experimental study of the air-to-ground (AG) propagation channel through ultra-wideband (UWB) measurements in an open area using unmanned aerial vehicles (UAVs). Measurements were performed using UWB radios operating over a frequency range of 3.1 GHz to 4.8 GHz and UWB planar elliptical dipole antennas having an omni-directional pattern in the azimuth plane and a typical donut-shaped pattern in the elevation plane. Three scenarios were considered for the channel measurements: (i) two receivers (RXs) at different heights above the ground and placed close to each other in line-of-sight (LOS) with the transmitter (TX) on the UAV, while the UAV is hovering; (ii) RXs in obstructed line-of-sight (OLOS) with the UAV TX due to foliage, while the UAV is hovering; and (iii) the UAV moving in a circular path. Different horizontal and vertical distances between the RXs and TX were used in the measurements. In addition, two different antenna orientations were used on the UAV antennas (vertical and horizontal) to analyze the effects of antenna radiation patterns on UWB AG propagation. From the empirical results, it was observed that the received power depends mainly on the antenna radiation pattern in the elevation plane when the antennas are oriented in the same direction, as expected for these omni-azimuth antennas. Moreover, the overall antenna gain at the TX and RX can be approximated using trigonometric functions of the elevation angle. Antenna orientation mismatch increases path loss and produces a larger number of small-powered multipath components (MPCs) than when the antennas are aligned. Similarly, additional path loss and a larger number of MPCs were observed for the OLOS scenario. In the case of the UAV moving in a circular path, the antenna orientation mismatch has smaller effects on the path loss than when the UAV is hovering, because a larger number of cross-polarized components are received during motion. A statistical channel model for UWB AG propagation is built from the empirical results.
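A toy air-to-ground link-budget sketch illustrating the kind of elevation-dependent antenna gain behavior reported above: a log-distance path loss combined with a donut-shaped (cosine) elevation pattern at both ends. The pattern shape, path-loss exponent, and center frequency are assumptions for illustration, not the measured model from the paper.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def received_power_dbm(pt_dbm, d_m, elev_rad, f_hz=3.95e9, n=2.0):
    """Toy AG link budget: log-distance path loss plus an elevation-dependent
    antenna gain at both ends (cosine 'donut' pattern). All coefficients are
    illustrative assumptions, not the paper's measured model."""
    fspl_1m = 20 * np.log10(4 * np.pi * 1.0 * f_hz / C)            # free-space loss at 1 m reference
    path_loss_db = fspl_1m + 10 * n * np.log10(d_m)                # log-distance path loss
    gain_db = 20 * np.log10(np.maximum(np.cos(elev_rad), 1e-3))    # pattern roll-off with elevation
    return pt_dbm - path_loss_db + 2 * gain_db                     # same pattern assumed at TX and RX

for elev_deg in (10, 30, 60):
    p = received_power_dbm(10.0, d_m=100.0, elev_rad=np.deg2rad(elev_deg))
    print(f"elevation {elev_deg:2d} deg -> received power {p:.1f} dBm")
```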
    • 04.10 Communications and/or Related Systems: Theory, Simulation, and Signal Processing Rajendra Kumar (California State University) & David Taggart (Self)
      • 04.1001 Channel Estimation for a Multi-User System with Iterative Interference Cancelation Lukas Grinewitschus (Universität Duisburg-Essen), Christian Schlegel (Dalhouse University) Presentation: Lukas Grinewitschus - Monday, March 4th, 09:45 AM - Amphitheatre
        Due to a rise in popularity of satellite services, with satellite internet delivery systems being provided by a growing number of operators, there is renewed interest in methods and systems to increase the capacity of such services. Since the uplink for two-way internet connectivity uses many uncoordinated small earth terminals with intermittent and sporadic packet traffic, there is also renewed interest in multiuser receiver principles and random-access protocols. Especially for geostationary satellite systems with a large ground terminal population and large propagation delays and packet flight times, random-access methods are an attractive alternative. Here, we study a random-access protocol which operates on the physical layer. Partial information gained even from colliding user packets is used for soft signal reconstruction and then for cancelation in subsequent iteration steps. The system includes user-specific equalizers to correct for time-varying carrier and clock frequency offsets experienced by the different data streams. These offsets cannot be avoided using only a single free-running oscillator at the receiver. To verify system performance under realistic transmission assumptions, a dynamic channel model is presented for the land-mobile satellite channel and used to study the estimation and tracking algorithms that are added to our multiuser receiver. We study variations of estimation approaches, in particular the question of whether and how the estimator should be rerun as part of the natural iteration steps of the data decoder. Variations of the adaptive channel estimator are also investigated; there is strong evidence that a low-complexity estimator integrated into the iteration loop, instead of a single estimation per received frame, is advantageous. Because estimation and tracking are done concurrently for all users, user separation needs to be extended to the estimators as well, which is done with signature sequences that are superimposed on both pilot and data signals. The system configuration includes users with different modulation schemes as well as different code rates, as discussed in an earlier publication. The proposed estimators will be studied in the context of an iterative multi-packet receiver, whose near-capacity performance potential was demonstrated by the authors in recent publications under the assumption of known transmission channels. We will quantify how closely a receiver with estimators for a realistic dynamic channel model can approach the ideal performance of a receiver with known channel-state information, which estimator implementation provides the closest performance match, and at what computational complexity. In studying these estimators we use realistic satellite channel parameters employed for DVB system modeling, and pay particular attention to the carrier frequency offsets caused by the high-carrier-frequency oscillators and potential Doppler effects, especially in low-earth-orbiting satellites. The results will be compared to idealized channel capacity predictions available from a Shannon-type capacity analysis of the channel. Both theoretical and simulated performances will be compared with conventionally achievable spectral and power efficiencies to argue for the use of multi-packet reception in the context of satellite data uplink for distributed ground terminals.
      • 04.1002 Acquisition, Tracking, and Communication between Lunar South Pole and Earth Dariush Divsalar (Jet Propulsion Laboratory), Marc Sanchez Net (Jet Propulsion Laboratory), Kar Ming Cheung (Jet Propulsion Laboratory) Presentation: Dariush Divsalar - Monday, March 4th, 10:10 AM - Amphitheatre
        In this paper we design and analyze an end-to-end communication system between a lander/rover on the surface of the lunar south pole and an Earth station, with a primary focus on the acquisition and tracking system. The communication system on the lander or rover could also be used for Earth-to-Moon communication. To communicate to and from the lander/rover on the lunar south pole, low- and/or medium-directivity antennas onboard the lander/rover will have to be pointed at low elevation angles between 2 and 10 degrees, causing multipath fading effects, due to reflection of a portion of the transmitted electromagnetic waves from the surface of the Moon, that are not commonly encountered in traditional deep space communications between a spacecraft and a ground station. To design and analyze such a communication system, and in particular the acquisition and tracking system in the presence of multipath fading, we first model the fading channel based on existing and simulated data. In addition to multipath fading, the channel also introduces a Doppler frequency of up to 11.5 kHz and a Doppler rate of up to 0.735 Hz/sec. For coherent reception, the time-varying Doppler frequency must be acquired and the incoming carrier phase, which includes the fading phase, must be tracked in the presence of multipath fading. For this communication system, in addition to estimating the received carrier phase, the amplitude of the fading signal should also be estimated, in particular for use in the decoder. In addition to acquisition and tracking, we propose simple modulation/coding schemes. Space diversity using two antennas on Earth to mitigate the effects of fading could also be used. We design phase-locked loops and frequency sweeping schemes robust to the attenuations due to fading. After designing the various components of the communication system, we use Simulink models to obtain the end-to-end performance of the communication link under investigation. Based on the available data, the fading channel can be accurately modeled as a Rician fading channel with a Rician parameter of 10 dB and a Doppler spread that depends on the Doppler frequency and the transmit/receive antenna patterns. The challenge is how to make such a communication system robust in the presence of multipath fading where the Doppler spread changes in time and thus produces fading with time-varying durations (short and very long fades). In summary, the paper covers communication system design, performance analysis, and simulations.
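A minimal sketch of the Rician fading channel mentioned in the abstract, using the quoted 10-dB Rician parameter. It draws independent envelope samples only; the time correlation induced by Doppler spread, which the paper models explicitly, is omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)

def rician_envelope(n, k_db=10.0):
    """Independent Rician envelope samples with the quoted 10-dB K-factor.
    Unit mean-square envelope; Doppler-induced time correlation is omitted."""
    k = 10 ** (k_db / 10.0)
    s = np.sqrt(k / (k + 1.0))                  # specular (line-of-sight) amplitude
    sigma = np.sqrt(1.0 / (2.0 * (k + 1.0)))    # per-component diffuse standard deviation
    diffuse = sigma * (rng.normal(size=n) + 1j * rng.normal(size=n))
    return np.abs(s + diffuse)

fade_db = 20 * np.log10(rician_envelope(100_000))
print("Probability of a fade deeper than 6 dB:", np.mean(fade_db < -6.0))
```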
      • 04.1003 Resilient Synchronization of Radio Networks of Clocks: A Pursuit-Evasion Graphical Game Approach Khanh Pham (Air Force Research Laboratory) Presentation: Khanh Pham - Monday, March 4th, 10:35 AM - Amphitheatre
        This paper provides an analytical framework to investigate judicious topology reweighting of radio networks of clocks, when distributed time transfer and synchronization are based on physical layers and subject to the presence of false timing signals. Protagonist clocks exchange timing information pairwise, which is modeled as clocks tending to follow the majority of their neighbors. Antagonist clocks inject false timing signals, thereby influencing the timing synchronization of (some of) the protagonist clocks they meet. A class of pursuit-evasion graphical games, subject to complete state observations and exploitation of phase noise disturbances, is proposed for designing clock steering protocols for resilient time metrologies that will be immune to erroneous timing signals injected into remote time dissemination networks.
      • 04.1004 Using Control Engineering to Improve Regulatory Review of Flexible SATCOM Terminal Advocacy Khanh Pham (Air Force Research Laboratory) Presentation: Khanh Pham - Monday, March 4th, 11:00 AM - Amphitheatre
        Some difficulties are reviewed in keeping satellite communication (SATCOM) responsive as operational environments and requirements rapidly evolve, especially related to accessing advanced services and providing resilience against threats. Growing attention is currently being directed to SATCOM terminal flexibility for operating across multiple Geostationary orbit (GSO) and Non-Geostationary orbit (NGSO) constellations, in multiple frequency bands, and with different modems/routers. In this paper, the emphasis is on the feasibility of using learning and control engineering to help SATCOM regulatory agencies more efficiently, consistently, and effectively analyze requests to autonomously operate flow control and dynamic resource allocation, consistent with increasing demands for connectivity and bandwidth. Due to the repetitive nature of user experiences, application performances, and service level agreements, terminal assignments of center frequencies for transmission, signal bandwidths, communication modes, and time intervals for transmission could benefit from the data collected during previous downlink mode change requests and uplink terminal reports. Intelligent terminal agents and enforcement coordination between terminal routers and modems are proposed and discussed from the perspectives of iterative learning and Minimal-Cost-Variance (MCV) control-theoretic frameworks.
      • 04.1006 Joint Sensing and Communications Multiple Access Receiver Implementation Characterization Richard Gutierrez (The Aerospace Corporation), Daniel Bliss (Arizona State University) Presentation: Richard Gutierrez - Monday, March 4th, 11:25 AM - Amphitheatre
        One solution to addressing the spectral congestion problem is to co-design communications and remote sensing systems to cooperate, such that each system benefits from the presence of the other. In this work, we present a novel joint sensing and communications multiple access system architecture that allows for simultaneous decoding of communications information and radar target tracking and parameter estimation within the same space, time, and frequency continuum. We further demonstrate the feasibility of this co-design architecture by implementing it on a network of software defined radios (SDRs). To characterize the system we consider a scenario where one transceiver transmits an emulated radar return waveform, while a second transceiver simultaneously transmits a communications waveform. A third transceiver receives and processes the emulated radar return and communications waveforms jointly. The radar transmitter and the joint sensing and communications receiver act as a quasi-monostatic tracking radar. We characterize the joint performance of the system using the communications and estimation rate metrics. The results show that the system decodes communications information with less than 2% bit errors and achieves excellent radar tracking performance for the given experiment parameters. This work demonstrates the feasibility of such a system using low-cost commercial-off-the-shelf (COTS) transceiver hardware and confirms previously reported analytical results on system performance.
      • 04.1007 Designing and Implementing SVMs for High-Dimensional Knowledge Discovery Using FPGAs John Porcello (N/A) Presentation: John Porcello - Monday, March 4th, 11:50 AM - Amphitheatre
        Support Vector Machines (SVMs) represent a robust and valuable tool for Machine Learning. SVMs are resilient to overfitting and can provide useful information for high-dimensional data when sufficient hyperplane margin can be identified. However, SVMs require substantial computational resources for large datasets. This paper presents an approach to SVMs for high-dimensional data using a reduced computational load. A polynomial-based approach is used to assess hyperplanes of high-dimensional data for the purpose of Knowledge Discovery. This paper describes issues and challenges with this approach; while the approach is not applicable in every case, it covers a wide range of Machine Learning applications. Furthermore, this paper considers implementation of such SVMs using Field Programmable Gate Arrays (FPGAs). The approach described in this paper applies to high-performance, high-throughput and scalable implementations for big data. The paper provides design data for FPGA implementation of these types of SVMs. Finally, an example based on the Xilinx UltraScale+ FPGAs is provided to illustrate the concepts in this paper.
    • 04.11 Global Navigation Satellite Systems Gabriele Giorgi (German Aerospace Center - DLR) & Lin Yi (NASA Jet Propulsion Lab)
      • 04.1103 A Future GNSS Constellation with Inter-satellite Links: Preliminary Space Segment Analyses Gabriele Giorgi (German Aerospace Center - DLR), Bethany Kroese (), Grzegorz Michalak (Helmholtz Centre Potsdam German Research Centre for Geosciences - GFZ) Presentation: Gabriele Giorgi - Wednesday, March 6th, 04:30 PM - Amphitheatre
        Future Global Navigation Satellite Systems (GNSSs) will benefit from two key technologies: optical clocks and inter-satellite laser links. To fully exploit these technologies, the Kepler constellation proposes a network of inter-connected satellites carrying cavity-stabilized lasers, optical frequency references, and terminals for two-way optical links. The Kepler space segment comprises 24 Medium Earth Orbit (MEO) and 4 Low Earth Orbit (LEO) satellites. The MEO satellites are evenly distributed over three orbital planes and broadcast navigation signals on traditional radio-frequency carriers. Within an orbital plane, laser links between adjacent satellites form a ring to synchronize each satellite's cavity-stabilized laser. These continuous links achieve robust intra-plane synchronization with low latency. In addition to these two intra-plane links, each MEO satellite has a nadir-pointing, two-way optical communication and ranging system. For inter-plane synchronization, a small set of LEO satellites simultaneously connects with multiple MEO satellites. Each LEO satellite has three optical terminals, and MEO satellites alternate connections with these LEO satellites for synchronization, ranging, and data relay. The terminal design, operational constraints and placement on the satellite body determine the availability of these LEO-to-MEO connections. The accuracy of the space segment's time and frequency synchronization is entirely dependent on the accuracy and availability of real-time precise orbit determination (POD) products. Current GNSS constellations employ a global network of monitoring stations to process long batches of data, predict all satellites' orbits, and produce ephemeris data. The Kepler architecture introduces several advantages to improve the POD of the MEO satellites: a) inter-satellite optical links enable extremely precise comparisons between frequency references onboard the satellites, which results in a constellation of time references more accurately synchronized to a common system time; b) sub-mm ranging observations are available between neighboring MEO satellites and between LEO and MEO satellites; c) additional radio frequency receivers onboard the LEO satellites provide atmosphere-free observations of the MEO satellites' navigation broadcast transmissions. These new features will enable cm-level POD with only a very small set of ground stations, which are then needed only to maintain the relation between space- and terrestrial-based reference frames. This paper discusses the joint task of designing the low Earth orbits and engineering the linking rule between LEO and MEO satellites. It justifies the need for only four LEO satellites to meet all operational constraints, thus guaranteeing sufficient coverage, full-time synchronization capabilities, and robustness against failures of single LEO satellites. In addition, the resulting POD performance is discussed based on simulated Kepler LEO and MEO constellations.
      • 04.1105 Secure Multi-constellation GNSS Receivers with Clustering-based Solution Separation Algorithm Kewei Zhang (KTH Royal Institute of Technology), Panos Papadimitratos (KTH) Presentation: Kewei Zhang - Wednesday, March 6th, 04:55 PM - Amphitheatre
        Because of limited satellite visibility, reduced reliability and constraining spatial geometry, e.g., in urban areas, the development of multi-constellation global navigation satellite systems (GNSS) has rapidly gained traction. Applications are expected to handle observations from different navigation systems, e.g., GPS, GLONASS, BeiDou and Galileo, in order to improve positioning accuracy and reliability. Furthermore, multi-constellation receivers present an opportunity to better counter spoofing and replaying attacks, leveraging approaches that take advantage of the redundant measurements. In particular, our previously proposed cluster-based solution separation algorithm (CSSA) exploits redundant available satellites to identify malicious/faulty satellite signals. Intuitively, the algorithm targets directly the consequence of spoofing/replaying attacks: the receiver position. It works independently of how the attacks are launched, either through modifying pseudorange measurements or manipulating the navigation messages, without changing the receiver hardware. Multi-constellation GNSS receivers will utilize all observations from different navigation systems, that is, more than 30 available satellites at each epoch once the Galileo and BeiDou systems become fully operational, providing abundant redundancy. At least 3+M satellites are necessary to calculate the receiver's position, where M is the number of constellations, i.e., at least one satellite from each constellation and three more satellites, because the receiver needs to determine the clock offset between different constellations. Our algorithm, building on and extending CSSA, is able to identify up to N-(M+3)-1 faulty signals and exclude them from the final position calculation, where N is the number of all available satellites. In an extreme case, i.e., if one constellation fails, the algorithm is able to identify which constellation fails by cross-checking different constellations. The work shows that a multi-constellation GNSS receiver equipped with our algorithm works effectively against a strong spoofing/replaying attacker that can manipulate a large number of signals from one constellation, or even the entire constellation. When the attacker tries to mislead the victim by more than a couple of hundred meters from its true location, the victim is able to detect and identify the manipulated signals within seconds, and obtains the legitimate position by excluding the malicious signals.
      • 04.1106 Precise Positioning of Robots with Fusion of GNSS, INS, Odometry, Barometer, LPS and Vision Patrick Henkel (Technische Universität München) Presentation: Patrick Henkel - Wednesday, March 6th, 05:20 PM - Amphitheatre
        The autonomous driving of robots requires precise and reliable positioning information with low-cost sensors. In this paper, we propose a tightly coupled sensor fusion of multiple complementary sensors including GNSS-RTK, INS, odometry, Local Positioning System (LPS) and Visual Positioning. The focus of this paper is on the integration of LPS and vision since the coupling of GNSS-RTK, INS and odometry is already state of the art. We include the positions of the LPS anchors and the bearing vectors and distances from the robot's camera towards the patch features as state vectors in our Kalman filter, and show the achievable positioning accuracies.
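A minimal sketch of one piece of the tightly coupled fusion described above: an extended-Kalman-filter measurement update that fuses ranges to known LPS anchor positions into a position/velocity state. The state layout, anchor coordinates, and noise values are assumptions for illustration; the paper's filter additionally integrates GNSS-RTK, INS, odometry, barometer, and vision measurements.

```python
import numpy as np

def ekf_update_lps(x, P, anchors, ranges, sigma_r=0.1):
    """One EKF measurement update fusing ranges to known LPS anchor positions.

    x: state [px, py, pz, vx, vy, vz]; P: state covariance.
    Only the LPS part is sketched here; GNSS-RTK, INS, odometry, barometer and
    vision would enter the same filter as additional measurement models.
    """
    pos = x[:3]
    predicted = np.linalg.norm(anchors - pos, axis=1)          # predicted anchor ranges
    H = np.zeros((len(anchors), len(x)))
    H[:, :3] = (pos - anchors) / predicted[:, None]             # Jacobian of range w.r.t. position
    R = (sigma_r ** 2) * np.eye(len(anchors))
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                              # Kalman gain
    x = x + K @ (ranges - predicted)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Example: robot near the origin, three anchors, slightly noisy range measurements.
anchors = np.array([[10.0, 0.0, 2.0], [0.0, 10.0, 2.0], [-10.0, -5.0, 2.0]])
truth = np.array([1.0, 2.0, 0.0])
ranges = np.linalg.norm(anchors - truth, axis=1) + 0.05
x0, P0 = np.zeros(6), 4.0 * np.eye(6)
x1, P1 = ekf_update_lps(x0, P0, anchors, ranges)
print("Updated position estimate:", x1[:3])
```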
    • 04.12 Software Defined Radio and Cognitive Radio Systems and Technology Eugene Grayver (Aerospace Corporation) & Genshe Chen (Intelligent Fusion Technology, Inc)
      • 04.1201 Inter-satellite Range Estimation Using Discovery & Resolution Modes for Space Traffic Management Zakaria Bouhanna (University of Surrey), Christopher Bridges (Surrey Space Centre) Presentation: Zakaria Bouhanna - Friday, March 8th, 08:30 AM - Lamar/Gibbon
        The increase in satellite launches has led to a jump in the number of satellites orbiting Earth, to over 1900 active satellites to date. Most of these satellites rely on two-line elements (TLEs) to define their tracks. However, the accuracy obtained from TLEs is insufficient for accurate collision predictions, leading to considerable uncertainty in satellite conjunction predictions. This paper extends previous research on the implementation of a new inter-satellite ranging instrument by proposing two operational modes, namely Discovery and Resolution. Discovery allows long-distance satellite detection and fast range estimation from the received signal strength indicator (RSSI). This ensures a larger observation time-window for the Resolution mode to precisely define the nature of the satellite encounter. The system switches to Resolution when the relative range between the source and observer satellites decreases below 10 km, according to the scenario studied. Unlike Discovery, Resolution estimates the range from the round-trip propagation delay using sequential ranging techniques. Results reveal that RSSI range measurements are prone to heavy fluctuations due to path loss variations; in fact, a standard deviation of 63 km for the ranging errors over a 1-s measurement time is observed. However, RSSI measurements are obtained within 2-µs time intervals. On the other hand, Resolution measures the range in chips by calculating the argument of the maximum of the cross-correlation function (CCF) between the transmitted and received sequences. Results using Resolution show that an accuracy of 110 m is obtained from a ranging sequence of 800 kcps over a 1-s observation window. This value is drastically improved compared to the results achieved with Discovery, but at the cost of an acquisition and processing time of 20 ms compared to the 2 µs attained with Discovery.
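A sketch of the two ranging modes described above, under simplifying assumptions: Discovery inverts a free-space path-loss model to get a coarse range from RSSI, and Resolution takes the argmax of the cross-correlation between transmitted and received ranging sequences to recover the round-trip delay. The frequency, transmit power, antenna gains, and sequence parameters are illustrative placeholders, not the paper's system values.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def discovery_range_from_rssi(rssi_dbm, tx_power_dbm, f_hz=437e6):
    """Discovery mode: coarse range from received signal strength, assuming
    free-space path loss and 0-dBi antennas (illustrative assumptions)."""
    path_loss_db = tx_power_dbm - rssi_dbm
    return 10 ** (path_loss_db / 20.0) * C / (4 * np.pi * f_hz)

def resolution_range_from_ccf(tx_chips, rx_chips, chip_rate=800e3):
    """Resolution mode: range from the round-trip delay found at the argmax of
    the cross-correlation between transmitted and received ranging sequences."""
    ccf = np.correlate(rx_chips, tx_chips, mode="full")
    delay_chips = np.argmax(ccf) - (len(tx_chips) - 1)
    return 0.5 * C * delay_chips / chip_rate

rng = np.random.default_rng(3)
tx = rng.choice([-1.0, 1.0], size=4000)                                 # PN-like ranging sequence
rx = np.concatenate([np.zeros(40), tx]) + 0.3 * rng.normal(size=4040)   # 40-chip round-trip delay
print("Discovery range for -110 dBm RSSI:", discovery_range_from_rssi(-110.0, 0.0), "m")
print("Resolution range estimate:", resolution_range_from_ccf(tx, rx), "m")
```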
      • 04.1203 A Software Radio Based Satellite Communications Simulator for Small Satellites Using GNU Radio Seth Hitefield (), Zachary Leffke (Virginia Tech) Presentation: Seth Hitefield - Friday, March 8th, 08:55 AM - Lamar/Gibbon
        In this paper, we present the architecture for an open-source satellite communication simulator using software defined radio that is designed to accurately model a communications channel between a ground station and satellite during a given satellite pass. This simulator provides a low-cost, open-source platform that allows for testing and prototyping satellite communication systems and waveforms. The primary use case for our simulator is providing a test bench for evaluating the performance of a chosen communication system within the expected mission parameters, such as the orbit of the satellite. Since the simulator is implemented using software defined radio, it is an extremely flexible system that can be used in multiple test scenarios and provides a utility that engineers and developers can use throughout the entire lifecycle of a satellite's development. During the initial mission design, the simulator can be used to model the expected communications channel so that the correct waveforms and/or radio hardware can be selected to meet mission requirements. It can also be used within a larger framework to simulate an entire ground station network, as well as the mission's command and control structure. During a satellite's development, it can be used to validate the communication systems for both the ground and flight hardware (both hardware- and software-based radios). Our simulator uses the GNU Radio software defined radio framework to implement the signal processing pipeline and takes into account the varying properties of a satellite communication channel such as Doppler shift, path loss, propagation delay, and hardware impairments. Given the orbital parameters of a specific satellite, the simulator models both the uplink and downlink channel characteristics of upcoming passes for the selected satellite. It primarily provides an interface for software-defined waveforms implementing both the ground and flight radios, but can also be used to interface hardware radios. The simulator allows for easy testing and verification of communication systems and can be used throughout the entire development of both the communication system itself and the other mission components such as the command and control framework and ground network. Engineers can use the simulator to build and test the command and control framework against expected real-world behavior.
      • 04.1205 A Software Receiver and Tracking Despreader for Ultra-Wideband Signals Adam Parower (), Eugene Grayver (Aerospace Corporation) Presentation: Adam Parower - Friday, March 8th, 09:20 AM - Lamar/Gibbon
        Ultra-wideband communication systems offer several advantages over traditional narrowband systems, including minimal interference with other systems, low transmit power density requirements, and resistance to jamming. However, the high sample rates needed to digitize ultra-wideband signals have traditionally made it necessary to receive and demodulate these signals using FPGA- or ASIC-based designs, which can have high costs and long development cycles. In this paper, we reveal a receiver and tracking despreader, implemented entirely in software, which is capable of demodulating ultra-wideband spread-spectrum signals in real time. The despreader achieves throughput in the range of 2-4 gigasamples per second (equivalent to 2-4 GHz of bandwidth) on a machine with 16 cores. A high-speed digitizer streaming data over a 40 Gigabit network was used to verify the functionality of the software. By transmitting and receiving an analog ultra-wideband signal, we verified that the despreader is able to despread and track in the presence of a frequency offset and timing drift.
      • 04.1206 Energy Efficient Routing Algorithm for Wireless MANET Genshe Chen (Intelligent Fusion Technology, Inc), Wenhao Xiong (), Dan Shen (Intelligent Fusion Technology, Inc) Presentation: Genshe Chen - Friday, March 8th, 10:10 AM - Lamar/Gibbon
        Mobile ad-hoc networks (MANETs) have been widely used in many different areas, such as mobile sensor networks, smart vehicle systems, unmanned aerial vehicle relayed networks, etc. In all these scenarios, routing is one of the key technical challenges. In order to improve the quality of communication and extend the lifetime of all battery-powered units, we propose a novel routing algorithm that incorporates concerns about battery level, quality of service (QoS), motion of nodes, and routing overhead. In addition to the routing algorithm, we also design a connection maintenance method that monitors the quality of the route during communication and determines when to find a new route. Numerical results show that our design achieves a good balance among the requirements of stability, delay, overhead, and network lifetime.
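An illustrative sketch of a routing metric in the spirit of the algorithm described above: a composite link cost blending residual battery, link quality, and node mobility, minimized with Dijkstra's algorithm. The weights, link attributes, and toy topology are assumptions, not the authors' design.

```python
import heapq

def composite_cost(link):
    """Illustrative link cost blending the concerns listed in the abstract:
    residual battery, link quality and node mobility (weights are assumptions)."""
    w_batt, w_qos, w_mob = 0.5, 0.3, 0.2
    return (w_batt * (1.0 - link["battery"])            # penalize low battery
            + w_qos * link["loss_rate"]                 # penalize lossy links
            + w_mob * link["relative_speed"] / 10.0)    # penalize fast-moving node pairs

def best_route(graph, src, dst):
    """Dijkstra over the composite cost; graph[u][v] is a link-attribute dict."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, link in graph[u].items():
            nd = d + composite_cost(link)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

graph = {
    "A": {"B": {"battery": 0.9, "loss_rate": 0.05, "relative_speed": 2.0},
          "C": {"battery": 0.3, "loss_rate": 0.01, "relative_speed": 1.0}},
    "B": {"D": {"battery": 0.8, "loss_rate": 0.10, "relative_speed": 3.0}},
    "C": {"D": {"battery": 0.4, "loss_rate": 0.02, "relative_speed": 1.5}},
    "D": {},
}
print(best_route(graph, "A", "D"))
```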
      • 04.1207 Scaling the Fast X86 DVB-S2 Decoder to 1 Gbps on One Server Eugene Grayver (Aerospace Corporation) Presentation: Eugene Grayver - Friday, March 8th, 09:45 AM - Lamar/Gibbon
        Software-based LDPC decoding has received significant attention in recent years. Researchers have focused on implementing the computationally expensive algorithm on both GPPs and GPUs. A major leap in performance was reported in the groundbreaking paper by Bertrand Le Gal [1]. This paper builds on the work in [1] by considering the scaling of that implementation on modern many-core processors. We look at the performance of the LDPC code specified in the DVB-S2 standard. The large block size of the DVB-S2 code makes the memory architecture of the processor just as important as the clock rate and instruction set. We present results for two generations of Intel Xeons, an Intel Phi (KNL), and the recently released AMD EPYC. The key finding is that performance scaling is limited by the amount of available cache memory rather than the number of cores. We also find that a heavily multi-threaded but deterministic software architecture benefits from explicit allocation of threads to cores rather than allowing the operating system to manage threading. The maximum throughput of 1 Gbps was achieved on a mid-range AMD server, ushering in a new era of all-software receivers for very high rate waveforms. We also present the performance of the algorithm ported to a low-power ARM processor and compare that to a low-end Intel Core.
      • 04.1208 Software Defined Radio Implementation of Carrier and Timing Synchronization for Distributed Arrays Han Yan (University of California, Los Angeles), Samer Hanna (University of California, Los Angeles), Kevin Balke (Google, Inc.), Riten Gupta (UtopiaCompression Corporation), Danijela Cabric (UCLA) Presentation: Han Yan - Friday, March 8th, 10:35 AM - Lamar/Gibbon
        The communication range of wireless networks can be greatly improved by using distributed beamforming from a set of independent radio nodes. One of the key challenges in establishing a beamformed communication link from separate radios is achieving carrier frequency and sample timing synchronization. This paper describes an implementation that addresses both carrier frequency and sample timing synchronization simultaneously using RF signaling between designated master and slave nodes. By using a pilot signal transmitted by the master node, each slave estimates and tracks the frequency and timing offset and digitally compensates for them. A real-time implementation of the proposed system was developed in GNU Radio and tested with Ettus USRP N210 software defined radios. The measurements show that the distributed array can reach a residual frequency error of 5 Hz and a residual timing offset of 1/16 the sample duration for 70 percent of the time. This performance enables distributed beamforming for range extension applications.
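A minimal sketch of the carrier-frequency-offset part of the synchronization scheme described above: a slave node estimates the offset from the average phase increment of the master's pilot tone and digitally counter-rotates its samples. The sample rate, pilot format, and noise level are illustrative assumptions; timing-offset tracking is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 1e6                        # sample rate (Hz), assumed
f_off = 237.0                   # true carrier frequency offset (Hz), assumed
n = np.arange(100_000)

# Slave node receives the master's pilot tone with an unknown CFO plus noise.
pilot = np.exp(2j * np.pi * f_off * n / fs) + 0.05 * (rng.normal(size=n.size)
                                                      + 1j * rng.normal(size=n.size))

# Estimate the CFO from the average phase increment over a lag of L samples,
# then digitally counter-rotate the signal (the compensation step described above).
L = 100
acc = np.vdot(pilot[:-L], pilot[L:])           # sum of conj(pilot[k]) * pilot[k+L]
f_est = np.angle(acc) * fs / (2 * np.pi * L)
compensated = pilot * np.exp(-2j * np.pi * f_est * n / fs)

print(f"Estimated CFO: {f_est:.2f} Hz (true {f_off} Hz)")
```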
    • 04.13 CNS Systems and Airborne Networks for Manned and Unmanned Aircraft Denise Ponchak (NASA Glenn Research Center) & Chris Wargo (Mosaic ATM, Inc.)
      • 04.1301 Advancing the Standards for Unmanned Air System (UAS) Comms, Navigation and Surveillance (CNS) Fred Templin (The Boeing Company) Presentation: Fred Templin - Sunday, March 3rd, 04:30 PM - Amphitheatre
        Under NASA program NNA16BD84C, new architectures were identified and developed for supporting reliable and secure Communications, Navigation and Surveillance (CNS) needs for Unmanned Air Systems (UAS) operating in both controlled and uncontrolled airspace. An analysis of architectures for the two categories of airspace and an implementation technology readiness analysis were performed. These studies produced NASA reports that have been made available in the public domain and have been briefed at previous conferences including iCNS 2017, IEEE Aerospace 2018, iCNS 2018 and DASC 2018. We now consider how the products of the study are influencing emerging directions in the aviation standards communities. The International Civil Aviation Organization (ICAO) Communications Panel (CP), Working Group I (WG-I) is currently developing a communications network architecture known as the Aeronautical Telecommunications Network with Internet Protocol Services (ATN/IPS). The target use case for this service is secure and reliable Air Traffic Management (ATM) for manned aircraft operating in controlled airspace. However, the work is increasingly also considering the emerging class of airspace users known as Remotely Piloted Aircraft Systems (RPAS), which is simply another term used to refer to certain UAS classes. In addition to ICAO CP WG/I, two Special Committees (SCs) in the Radio Technical Commission for Aeronautics (now referred to simply as “RTCA”) are developing Minimum Aviation System Performance Standards (MASPS) and Minimum Operational Performance Standards (MOPS) for UAS. RTCA SC-223 is investigating an Internet Protocol Suite (IPS) and AeroMACS aviation data link for interoperable (INTEROP) UAS communications. Meanwhile, RTCA SC-228 is working to develop Detect And Avoid (DAA) equipment and a Command and Control (C2) Data Link MOPS establishing L-Band and C-Band solutions. These RTCA Special Committees, along with ICAO CP WG/I, therefore overlap in terms of the Communication, Navigation and Surveillance (CNS) alternatives they are seeking to provide for an integrated manned and unmanned air traffic management service as well as remote pilot command and control. This paper presents UAS CNS architecture concepts developed under the NASA program that apply to all three of the aforementioned committees. It discusses the similarities and differences in the problem spaces under consideration in each committee, and considers the application of a common set of CNS alternatives that can be widely applied. As the work of these three committees progresses, it is clear that the overlap will need to be addressed to ensure a consistent and safe framework for worldwide aviation. In this study, we discuss similarities and differences in the various operational models and show how the CNS architectures developed under the NASA program apply.
      • 04.1302 Confidential ADS-B: A Lightweight, Interoperable Approach Brandon Burfeind () Presentation: Brandon Burfeind - Sunday, March 3rd, 04:55 PM - Amphitheatre
        ADS-B technology offers significant safety and efficiency benefits to the growing worldwide air transport industry. Its use is widespread and continues to grow as countries near or pass their equipage deadlines. As an interoperable extension of ATCRBS and Mode S, Mode S Extended Squitter is a widely used ADS-B protocol which is devoid of the security features generally found in modern information transmission systems. The historical and future requirements for interoperability among air surveillance systems also make it difficult to implement modern security technology. Many proposals exist for ADS-B security protocols which have sound technology but sacrifice interoperability. By decomposing security principles to focus only on the requirement for confidentiality, we devise a confidential sub-protocol for Mode S-ES which is lightweight and interoperable while using industry-standard cryptography. The use of format-preserving encryption with unidirectional asymmetric cryptography allows users who require confidentiality to fully participate in ADS-B without impacting those who do not.
      • 04.1303 Safety Assessment Process for UAS Ground-Based Detect and Avoid Chris Wargo (Mosaic ATM, Inc.) Presentation: Chris Wargo - Sunday, March 3rd, 05:20 PM - Amphitheatre
        UAS Beyond Visual Line of Sight (BVLOS) capability with conflict detection and traffic avoidance technology is progressing through innovation and improved opportunity within the US National Airspace System and internationally. However, the actual flight hours flown to gather statistically appropriate levels of real-time safety data to quantitatively assess new technical system architectures and operational flight risk are lacking. In this paper we describe the safety-risk-based approach and airspace data assessment technology used to address the lack of live or real-time data. This approach is used to develop a safe operational Ground Based Detect and Avoid (GBDAA) system for a wide variety of UAS operations in the US National Airspace System (NAS). The paper describes the studies used to determine acceptable safety risk levels using this approach and its effect on improving risk-based decision making and the predictability of certain risks in safety assessments. A GBDAA system has been deployed and is a development of the technology currently incorporated into flight operations. During this development the project team developed a process for safety assessment that provides situational awareness and conflicting-traffic alerts for UAS flight crews using existing Airport Surveillance Radar (ASR) and Common Air Route Surveillance Radar (CARSR) sensors and traffic information from the FAA Standard Terminal Automation Replacement System (STARS). Describing the steps and methods of this process is the purpose of this paper. This approach is a valuable process to be followed for any GBDAA. Our described approach assumes that advanced software with conflicting-traffic encounter warning logic, based on detect-and-avoid algorithms, has been developed as part of the functions integrated into the fusion tracker of the radar systems selected for the terminal-area operations. The tracker and alert system also provide a GBDAA decision support tool display capability. These enhancements provide an electronic means of mitigation for the “See and Avoid” requirements for operating in the NAS in a wide variety of operational applications. We will describe the rigorous scientific approach to collecting safety data using proven simulation platforms to analyze realistic traffic encounters using NAS archived track data from the FAA radars. These simulations were used to assess risk for the specific GBDAA system behaviors and airspace encounters for the operational airspace to be used. These robust simulations and safety assessments are used to validate acceptable target levels of safety for the UAS operational airspace, intruder aircraft encounters, and safe conflict avoidance using the GBDAA alerting algorithms. Keeping with FAA guidance on the Safety Management System process, this paper describes a methodology for a continually updated and refined safety assessment to be used in decision-making processes as the GBDAA system and UAS avionics evolve over time.
      • 04.1304 An Overview of Current and Proposed Communication Standards for Large Deployment of UAS Rene Wuerll (Friedrich-Alexander-Universität Erlangen-Nürnberg) Presentation: Rene Wuerll - Sunday, March 3rd, 09:00 PM - Amphitheatre
        Current public, private, and military communication systems, like conventional terrestrial and satellite phone data communication systems, have never been designed to provide connectivity to Unmanned Aircraft Systems (UAS) in very large numbers. UAS not only need to communicate, but require ubiquitous coverage and secure data links by design, catapulting them to the forefront of demanding communication systems. It is shown that current communication systems are neither designed for, nor capable of handling, the assumed growth of Unmanned Aircraft (UA), and that their routing, security, data rate, and range are not optimized for UAS flight characteristics. Ideas for upgrading current communication systems to better suit the communication needs of UAS are researched. Yet, it is argued that new communication standards are necessary not only to cover singular and localized use cases of UAS, but also to enable global networking benefits. The paper gives an overview of the communication standards in development and the focus areas selected by their standardization bodies. The standards are investigated regarding whether they sufficiently address the needs arising from the predicted numbers of UAs and are suited to enable a broad set of use cases. In particular, the newly proposed schemes are examined on the physical (PHY) and medium access (MAC) layers. A distinction is made between communication for command and control (C2), also termed control and non-payload communication (CNPC) by the IEEE and ITU, communication for cooperative surveillance, and payload communication. These separate needs of information transmission have different requirements, but combining them into a joint design has technological and economic benefits. New proposals for routing algorithms are surveyed and categorized, with emphasis on Mobile Ad-hoc Networks (MANETs), Vehicular Ad-hoc Networks (VANETs) and their relation to Flying Ad-hoc Networks (FANETs). These networks are the primary means for establishing connectivity between UAs, but also between UAs and ground stations. Because of the enormous number of routing protocols and proposals, the aim is not to offer a complete list, but to give a comprehensive overview of the most suitable algorithms, taking the unique UAS requirements into account.
      • 04.1306 A Low Altitude Manned Encounter Model Developed Using Crowdsourced ADS-B Observations Andrew Weinert (MIT Lincoln Laboratory), Ngaire Underhill (), Ashley Wicks () Presentation: Andrew Weinert - Monday, March 4th, 04:30 PM - Amphitheatre
        With the integration of unmanned aircraft systems (UAS) into the U.S. National Airspace System, low altitude regions are being stressed in historically new ways. The FAA must understand and quantify the risk of UAS collision with manned aircraft during desired low altitude unmanned operations in order to produce regulations and standards. A key component of these risk assessments is statistical models of aircraft flight. Previous risk assessments used models for manned aircraft based primarily on Mode C-based secondary surveillance radar observations. However, these models have some important limitations when used to model low altitude flight. We demonstrate a methodology for developing statistical models of manned flight that leverages the OpenSky Network, a crowdsourced ADS-B receiver network that provides open access to the aircraft data, and the FAA aircraft registry, an open database of registered aircraft. Unlike Mode C surveillance, a key advantage of this method is the availability of the metadata needed to distinguish between different types of low altitude aircraft. For example, previous models did not distinguish a large commercial aircraft transiting to higher altitudes from small general aviation aircraft cruising at low altitudes. We use an aircraft's unique Mode S address to correlate ADS-B reports with aircraft type information from the FAA registry. We filter surveillance data and statistically characterize the low altitude airspace based on aircraft type, performance, and location. Lastly, we leverage the characterization and aircraft tracks to develop a Dynamic Bayesian Network that models the behavior of low altitude manned aircraft, an extension of previous aircraft modeling approaches that have employed Bayesian networks. By sampling representative trajectories from the Bayesian network, we can model encounters between manned and unmanned aircraft at low altitudes, a key supporting technology for safe integration of unmanned aircraft. **DISTRIBUTION STATEMENT** DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited. This material is based upon work supported under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U.S. Air Force. This document is derived from work done for the FAA (and possibly others), it is not the direct product of work done for the FAA.
      • 04.1307 Micro-UAV Classification from RF Fingerprints Using Machine Learning Techniques Martins Ezuma (North Carolina State University), Fatih Erden (North Carolina State University), Chethan Anjinappa (NCSU), Ozgur Ozdemir (NCSU), Ismail Guvenc () Presentation: Martins Ezuma - Monday, March 4th, 04:55 PM - Amphitheatre
        Accurate detection and identification of micro-unmanned aerial vehicles (UAVs) are vital to national security. Low cost and ease of access/operation of micro-UAVs lead to technical and societal concerns, as unauthorized flights may threaten private and sensitive areas. Conventional radar-based techniques, which are widely deployed for detecting and identifying aircraft, mostly fail to detect small-size UAVs. This is mainly because of the very small radar cross sections of these UAVs, making it difficult to distinguish them from birds and other small aircraft. Alternative techniques like sound- and video-based detection are only suitable for short-range scenarios due to ambient noise. This paper focuses on the detection and classification of micro-UAVs using radio frequency (RF) fingerprints of the signals transmitted from the controller to the UAV. These RF fingerprints are sniffed wirelessly by an intelligent RF monitoring system on which several machine learning algorithms are running. Unlike the traditional approaches, which rely solely on time-domain signals and the corresponding statistical features, the proposed technique uses the energy transient signal. First, an energy trajectory is formed based on the energy-time-frequency distribution of the raw control signal. Next, the start and end points of the energy transient are detected by searching for the most abrupt changes in the mean of the energy trajectory. Then, statistical features such as variance, skewness, and entropy are extracted from the energy transient. Significant features are selected by performing neighborhood component analysis (NCA) to keep the computational cost of the algorithm low. Finally, the selected features are fed to several machine learning algorithms for classification. A database containing 100 signals, each of duration 25 ms, from controllers of 12 different commercially available UAVs is generated for the experiments. Controller signals are recorded wirelessly using a high-frequency oscilloscope in the near field. The data set is randomly partitioned into a training set and a test set for validation with a 4:1 ratio. An average detection accuracy of 98.46% is achieved over 10 Monte Carlo simulations using k-nearest-neighbor (kNN) classification. It is also shown that the proposed technique is more robust to noise and different modulation techniques than the time-domain based techniques.
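A simplified sketch of the classification pipeline described above: statistical features (variance, skewness, entropy) are extracted from an energy transient and fed to a k-nearest-neighbor classifier. The synthetic two-class "controller" signals below stand in for the paper's 12-UAV RF data set, and the pipeline is reduced for brevity (no energy-trajectory change-point detection or NCA feature selection).

```python
import numpy as np

rng = np.random.default_rng(5)

def transient_features(signal):
    """Statistical features of the energy transient (a subset of those in the paper)."""
    energy = signal ** 2
    var = np.var(energy)
    skew = np.mean((energy - energy.mean()) ** 3) / (energy.std() ** 3 + 1e-12)
    hist, _ = np.histogram(energy, bins=32, density=True)
    p = hist[hist > 0] / hist[hist > 0].sum()
    entropy = -np.sum(p * np.log2(p))
    return np.array([var, skew, entropy])

def knn_predict(train_x, train_y, x, k=3):
    d = np.linalg.norm(train_x - x, axis=1)            # Euclidean distance to training points
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()                 # majority vote among the k nearest

def synth(uav_type):
    # Synthetic stand-in for a recorded controller signal: two "UAV types" with
    # different transient rise times (the real data set has 12 controllers).
    t = np.linspace(0.0, 1.0, 2000)
    rise = 40.0 if uav_type == 0 else 8.0
    return (1 - np.exp(-rise * t)) * np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)

X = np.array([transient_features(synth(i % 2)) for i in range(80)])
y = np.array([i % 2 for i in range(80)])
mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-12          # standardize features before kNN
test = (transient_features(synth(1)) - mu) / sd
print("Predicted UAV class:", knn_predict((X - mu) / sd, y, test))
```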
      • 04.1309 UAV Communications, Navigation and Surveillance: A Review of Potential 5G and Satellite Systems Nozhan Hosseini (University of South Carolina), Hosseinali Jamal (University of South Carolina), David Matolak (University of South Carolina), Jamal Haque (Honeywell) Presentation: Nozhan Hosseini - Monday, March 4th, 05:20 PM - Amphitheatre
        Drones or unmanned aerial vehicles (UAVs), as a subcategory of small-unmanned aerial systems (sUAS), are expected to be an important component of 5G/beyond 5G (B5G) communications. This includes their use within cellular architectures (5G UAVs), in which they can facilitate both wireless broadcast and point-to-point transmissions. Allowing UAS to operate within non-segregated airspace along with commercial, cargo, and other piloted aircraft will likely require dedicated and protected aviation spectrum—at least in the near term, while regulatory authorities adapt to their use. Non-segregated airspace is the airspace that is outside airspace allocated exclusively for manned aircraft. The command and control (C2), or control and non-payload communications (CNPC) link provides safety critical information for the control of the UAV both in terrestrial-based line of sight (LOS) conditions and in satellite communication links for so-called beyond LOS (BLOS) conditions. In this paper, we provide an overview of these CNPC links as they may be used in 5G and satellite systems by describing basic concepts and challenges. We review new entrant technologies that might be used for UAV C2, such as millimeter wave (mmWave) systems, and also review navigation and surveillance challenges. A brief discussion of hardware issues is also provided.
  • 5 Observation Systems and Technologies Ifan Payne (Magdalena Ridge Observatory) & Gene Serabyn (Jet Propulsion Laboratory)
    • 05.01 Space Based Optical Systems and Instruments Bogdan Oaida (Jet Propulsion Laboratory, California Institute of Technology) & Ryan Mc Clelland (NASA Goddard Space Flight Center)
      • 05.0101 Modular Inflatable Composites for Large Space Observatories Aman Chandra (Arizona State University), Jekan Thangavelautham (University of Arizona) Presentation: Aman Chandra - Monday, March 4th, 04:30 PM - Elbow 1
        There is an ever-growing need to construct large space telescopes and structures for observation of exo-planets, asteroids in the main belt, and NEOs. Space observation capabilities can be greatly enhanced by large structures. Structures extending to several meters in size could potentially revolutionize the associated enabling technologies. These include star-shades for imaging exo-planets and distant objects and high-resolution, large-aperture telescopes. In addition to size, such structures require controllable surfaces and high packing efficiencies. A promising approach to achieving high compaction for large surface areas is the use of compliant materials, or gossamers. Gossamer structures on their own do not meet stiffness requirements for controlled deployment. Supporting stiffening mechanisms are required to fully realize their structural potential. Our present work investigates structural assemblies constructed from modular inflatable membranes stiffened pneumatically using inflation gas. These units, assembled into composites, can yield desirable characteristics. We present the design of large assemblies of these modular elements. Our work focuses on separate assembly strategies optimized for two broad applications. The first is efficient load bearing and distribution. Such structures do not need high-precision surfaces but the ability to efficiently transmit large loads. This can be achieved using a hierarchical assembly of inflatable units. Preferential placement of varying modular units leads to local stiffness modulation. This in turn helps modify load transmission characteristics. Such structures include deployable drag-chutes or aero-braking devices for atmospheric maneuvering. The second is structures with precision surfaces for optical imaging and high-gain communication apertures. We demonstrate that over-constrained modular assemblies exhibit elastic averaging when assembled with a very large number of modules. Averaging effects are amplified as the number of sub-units grows, approaching the required surface precision with a large enough number. Our work includes fundamental structural studies to evolve feasible sizing schemes for both classes of structures. A structural analysis strategy using discrete finite elements has been developed to simulate the assembled behavior of modular units. The structural model of each inflatable unit has been extended from our previous work to approximate each unit as a 3-dimensional truss system. Analysis results are compared with full-scale simulations in the commercial analysis package LS-DYNA. Our analysis leads to an understanding of the extent to which inflatables can be scaled up effectively. Critical geometric design considerations are identified for the stowed and deployed states of each structure. We propose designs of compliant hinges between structures to assemble even larger units. Further work includes prototype development and deployment force measurement to validate the structural model.
      • 05.0104 Camera Modeling, Centroiding Performance, and Geometric Camera Calibration on ASTERIA Christopher Pong (), Matthew Smith (Jet Propulsion Laboratory) Presentation: Christopher Pong - Monday, March 4th, 04:55 PM - Elbow 1
        The Arcsecond Space Telescope Enabling Research in Astrophysics (ASTERIA) is a 10-kg, 6U CubeSat in low-Earth orbit that was able to achieve subarcsecond pointing stability and repeatability. To date, this is the best pointing on a spacecraft of its size. This paper will analyze various aspects of the performance of its key piece of hardware---the payload. First, a model of the optics and imager, which is used to simulate stellar images, will be presented. The imager parameters used in this model were derived from simple ground measurements. Next, a centroiding algorithm is provided and used on the simulated images to predict centroiding performance. These results will be shown to match on-orbit telemetry of centroiding performance, validating the modeling approach. This paper will then describe an approach for and results of a geometric camera calibration algorithm to estimate the focal length, distortion, and alignment parameters. The modeling, analyses, and results presented in this paper provide key information that can be used in a time-domain pointing simulation or a frequency-domain pointing error analysis.
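        As a pointer to the kind of centroiding step referenced above, the following is a generic intensity-weighted (center-of-mass) centroid over a background-subtracted star window; ASTERIA's actual algorithm, windowing, and background handling may differ.
```python
# Generic intensity-weighted (center-of-mass) centroid over a star window.
import numpy as np

def centroid(window: np.ndarray, background: float = 0.0):
    """Return the (row, col) centroid of a background-subtracted star window."""
    img = np.clip(window - background, 0.0, None)  # suppress negative residuals
    total = img.sum()
    if total == 0:
        raise ValueError("empty window after background subtraction")
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total
```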
      • 05.0105 Effects of Errors on Target Motion Compensation for Optical Imagers in Flyby Trajectories Alyssa Ralph (California Polytechnic State University), Bogdan Oaida (Jet Propulsion Laboratory, California Institute of Technology) Presentation: Alyssa Ralph - Monday, March 4th, 05:20 PM - Elbow 1
        Remote sensing is particularly useful in interplanetary observation where in-situ measurements are not yet possible. For flyby spacecraft to obtain accurate data during short windows with high relative velocities, target motion compensation is often required to preserve signal amplitude and prevent pixel smear; such techniques include articulation of the boresight (e.g. rotating scan mirror within the imager) and image co-addition. Effects of predicted orbital determination errors, pointing accuracy error, and timing errors are exacerbated by use of such target motion compensation tactics to varying degrees. A Monte Carlo simulation was employed to determine trends in smear and required target motion capabilities when the previously mentioned errors are applied to various combinations of hardware and flyby geometry for a selection of Jovian and Saturnian moons of exploratory interest. Results for a variety of pixel sizes, image integration times, hyperbolic excess velocities, and altitudes are presented in terms of “line error,” a metric quantifying pixel distortion in the along-track direction. Scenarios are compared to show variation in smear sensitivity. Results from the simulation can inform mission planning, hardware, and operational requirements for a range of potential missions.
    • 05.02 Ground Based Telescopes, Instruments and Technologies Stefan Martin (Jet Propulsion Laboratory)
      • 05.0201 The History and Development of the Magdalena Ridge Observatory Interferometer. Ifan Payne (Magdalena Ridge Observatory) Presentation: Ifan Payne - Sunday, March 3rd, 09:50 PM - Lamar/Gibbon
        In terms of relative resolution and sensitivity, the Magdalena Ridge Observatory Interferometer (MROI) will arguably be the most powerful optical telescope on Earth, with even greater resolution than the Hubble Space Telescope (HST) and than the three much-hyped 30-meter-class telescopes currently in development (ELT, TMT, GMT). The sensitivity of the MROI also far exceeds (by a factor of 10 to 100) that of other high-resolution interferometers such as CHARA, NPOI and the European VLTI. This paper traces the development of the MROI from the creation of the Langmuir Atmospheric Laboratory on Magdalena Ridge in South Central New Mexico, the construction of the Comet Observatory, and the establishment of the Congressionally designated research site on Magdalena Ridge. The gift of a declassified 2.4-meter primary mirror, originally intended for the Hubble Space Telescope, led to the design of a 3-element optical interferometer. In 2002, with the signing of a memorandum of agreement with the University of Cambridge, the 3-element array was redesigned as an array of 10 x 1.4-meter optical telescopes, intended to be the first of the third generation of sparse-array optical interferometers. Finally, the paper will recount the development of the 10-element array and, owing to the rise of the Tea Party, the retirement of a senior Senator, and the subsequent lack of funding, its rescue from near extinction. Today, with the first complete Unit Telescope installed in the center of the array, the Magdalena Ridge Observatory Interferometer is at last poised to fulfill its promise as the most powerful optical telescope on Earth.
    • 05.03 Exoplanet Instruments, Missions and Observations William Danchi (NASA Goddard Space Flight Center)
      • 05.0301 Successful Completion of the JWST OTIS Cryogenic-Vacuum Test at NASA JSC during Hurricane Harvey Sang Park (Smithsonian Astrophysical Observatory) Presentation: Sang Park - Thursday, March 7th, 08:30 AM - Lake/Canyon
        The JWST Optical Telescope Element (OTE) assembly is the largest optically stable, infrared-optimized telescope; its primary mirrors, secondary mirror, and Aft Optics Sub-system (AOS) are designed to be passively cooled and to operate near 45 K. This paper describes the JWST cryogenic test program, focusing on the series of integrated ‘Pathfinder’ cryo-vac tests and finally the cryogenic-vacuum test of the flight Optical Telescope Element mated with the Science Instruments as an integrated assembly (OTIS). The JWST OTIS cryo-vac test was carefully planned, designed to safely manage numerous challenging risks, and executed as a highly orchestrated operation. Although the OTIS test was operating on schedule as planned, Mother Nature provided an extreme challenge in the form of Harvey, a Category 4 hurricane. Presented in this paper is an overview of the in-situ test operations and of the innovative solutions developed in real time to maintain flight hardware safety with dwindling supplies of consumables, such as liquid nitrogen, while continuing the cryo-vac test in the midst of one of the largest natural disasters.
      • 05.0303 Overcoming the Tradeoff between Efficiency and Bandwidth for Vector Vortex Waveplates Nelson Tabiryan (BEAM Engineering for Advanced Measurements Co.) Presentation: Nelson Tabiryan - Thursday, March 7th, 08:55 AM - Lake/Canyon
        Vector vortex waveplates offer distinct advantages over conventional light-blocking components for coronagraphs due to their continuous structure, thinness, and transparency. They allow high contrast in a broad bandwidth for different spectral ranges. We present opportunities for reducing the in-band leakage of vector vortex waveplates to the <0.1% level and for improving the tradeoff between spectral bandwidth and efficiency. A comparative analysis of different architectures of vector vortex waveplates will be performed, and technology for their fabrication will be presented. Values below 0.0001% are expected for a 20% bandwidth.
      • 05.0304 The LBTI HOSTS Project: Instrumentation, Observations, and Survey Results William Danchi (NASA Goddard Space Flight Center) Presentation: William Danchi - Thursday, March 7th, 09:20 AM - Lake/Canyon
        The Large Binocular Telescope Interferometer (LBTI) is a stellar interferometer consisting of two 8.4-m apertures on a 14.4 m baseline on a common mount at Mt. Graham, Arizona. The Hunt for Observable Signatures of Terrestrial Systems (HOSTS) is a NASA key project for the LBTI surveying the warm and hot dust in the inner regions of planetary systems, near the habitable zone (HZ) and closer in, commonly described as being 'exo-zodiacal,' analogous to the zodiacal light in our Solar System. The presence of large amounts of dust in the HZs of nearby stars poses a significant challenge for target selection and planning of future exo-Earth imaging missions. The HOSTS survey on the LBTI is designed to determine typical exozodi levels around a sample of nearby, bright main sequence stars. The LBTI operates in a nulling mode in the mid-infrared spectral window (8-13 μm), in which light from the two telescopes is coherently combined with a 180-degree phase shift between them, producing a dark fringe at the location of the target star. In doing so the starlight is greatly reduced, increasing the contrast, analogous to a coronagraph operating at shorter wavelengths. The LBTI is a unique instrument, having only three warm reflections before the starlight reaches cold mirrors, giving it the best photometric sensitivity of any interferometer operating in the mid-infrared. It also has a superb Adaptive Optics (AO) system giving it Strehl ratios greater than 98% at 10 μm. Thus, nulling interferometry suppresses the bright stellar light and allows for the detection of faint, extended circumstellar dust emission. Here we present statistical results from 38 individual stars. We provide important new insights into the incidence rate, typical levels, and origin of HZ dust around main sequence stars. Our overall detection rate is 23%. The inferred occurrence rates are comparable for early-type and Sun-like stars but decrease from [71 (+11/-20)]% for stars with previously detected mid- to far-infrared excess to [11 (+9/-4)]% for stars without such excess, confirming earlier results at high confidence. For completed observations on individual stars, our sensitivity is five to ten times better than previous results. Assuming a lognormal excess luminosity function, we put upper limits on the median HZ dust level of 11.5 zodis (95% confidence) for all stars without cold dust and of 16 zodis when focusing on Sun-like stars without cold dust. We find first hints of a bimodal distribution where some stars have high HZ dust levels but the majority have dust levels below our sensitivity. Our results demonstrate the strength of LBTI for vetting potential targets for future exo-Earth imaging missions.
      • 05.0305 LUVOIR Thermal Architecture Overview and Enabling Technologies for Picometer-Scale WFE Stability Sang Park (Smithsonian Astrophysical Observatory) Presentation: Sang Park - Thursday, March 7th, 09:45 AM - Lake/Canyon
        The Large UV/Optical/IR Surveyor (LUVOIR) is one of four 2020 Decadal Survey mission studies, a concept for a ‘flagship’-class space-borne observatory operating across the UV/Optical/NIR spectrum. One Optical Telescope concept being considered is a segmented primary mirror architecture with a composite backplane structure. Achieving the high-contrast imaging required to satisfy the primary science goals of this mission would require picometer-level RMS wavefront stability over a wavefront control time step, milli-Kelvin-level thermometry sensing and control, and near-zero-CTE materials. The LUVOIR primary mirror segment assemblies and composite backplane support structure require active thermal management to maintain operational temperature during flight operations. Furthermore, the active thermal control must be sufficiently stable to prevent time-varying, thermally induced structural distortions and thereby minimize optical aberrations. This paper describes the thermal system architecture of the two concepts considered for the LUVOIR decadal study and a systematic approach to developing a thermal architecture for the modular composite sections of the mirror support structure and the primary mirror segment assemblies.
      • 05.0309 Overview of the Habitable Exoplanet Observatory (HabEx) Concept Architecture Stefan Martin (Jet Propulsion Laboratory), Gary Kuan (Jet Propulsion Laboratory) Presentation: Stefan Martin - Thursday, March 7th, 10:10 AM - Lake/Canyon
        The Habitable Exoplanet Observatory (HabEx) is one of four large mission studies commissioned by NASA for the 2020 Decadal Survey in Astrophysics. HabEx has identified three driving science goals: 1) to seek out nearby worlds and explore their habitability, 2) to map out nearby planetary systems and understand the diversity of the worlds they contain, and 3) to enable new explorations of astrophysical systems from our solar system to galaxies and the universe by extending our reach in the UV through near-IR. To achieve these goals, the HabEx study has identified as its architecture of choice a space telescope at Sun-Earth L2 with a 4-m aperture and four science instruments – a coronagraph, a starshade instrument, a high-resolution UV spectrograph, and a multi-purpose, wide-field camera – flying in formation with a 52-m diameter external starshade occulter. To achieve the precision and stability required for the coronagraph to image Earth-sized exoplanets at better than 10^-10 contrast, the telescope flight system is equipped with an off-axis telescope with a monolithic primary mirror, a vector vortex coronagraph, and colloidal electrospray microthrusters, and takes advantage of the mass and volume launch capability of NASA’s SLS Block 1B delivery system. In this paper, we provide an overview of the HabEx telescope and starshade flight systems that would advance astrophysical science in the same spirit as the Hubble Space Telescope well into the century.
      • 05.0310 The Spectral Calibration of Verve Judah Van Zandt (University of Notre Dame) Presentation: Judah Van Zandt - Thursday, March 7th, 10:35 AM - Lake/Canyon
        With the exception of the transit method employed by the Kepler Space Telescope, the radial velocity (RV) technique has proven to be the most fruitful exoplanet detection method to date. High precision measurements with state-of-the-art equipment have achieved RV precision on the order of 0.5 m/s. Given that Earth’s induced RV on the Sun is on the order of 10 cm/s, this regime represents the next milestone on the path to finding Earth-like planets orbiting distant stars. We introduce the Vacuum Extreme Radial Velocity Experiment (VERVE), a single-mode fiber-fed high-resolution echelle spectrometer which integrates interferometry and spectroscopy to achieve RV precision at the 10 cm/s level. The addition of interferometry relaxes the spectral calibration requirement by several orders of magnitude. We detail the spectral calibration of VERVE, including the development of a precise forward model and a data reduction pipeline to facilitate spectral extraction. We examine the model's agreement with the spectra of various calibration sources and present the results of spectral extractions performed on 635 nm and 780 nm laser sources. We describe the 10 cm/s RV precision threshold for detecting Earth-like planets, and note that the ~5 pixel agreement we achieve is consistent with this requirement. We discuss further corrections to be implemented in the model and conclude by revisiting the implications of achieving 10 cm/s RV precision once interferometry is employed in VERVE.
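        As a rough check of the 10 cm/s figure quoted above, the standard radial-velocity semi-amplitude for a circular, edge-on orbit gives roughly 9 cm/s for an Earth analog around a Sun-like star; the expression and numbers below are the familiar textbook values, not results from the paper.
```latex
% Radial-velocity semi-amplitude for a circular (e = 0), edge-on (i = 90 deg) orbit:
K = \left(\frac{2\pi G}{P}\right)^{1/3}
    \frac{m_p \sin i}{\left(M_\star + m_p\right)^{2/3}}
    \frac{1}{\sqrt{1-e^{2}}}
  \approx 0.09\ \mathrm{m\,s^{-1}}
  \quad \text{for } m_p = M_\oplus,\; M_\star = M_\odot,\; P = 1\,\mathrm{yr}.
```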
      • 05.0311 Origins Space Telescope: Mid-Infrared Transit Spectroscopy for the Detection of Bio-Signatures Johannes Staguhn (Johns Hopkins University & NASA Goddard Space Flight Center) Presentation: Johannes Staguhn - Thursday, March 7th, 11:00 AM - Lake/Canyon
        The discovery of the Trappist-1 system, which consists of an ultra-cool M-dwarf star orbited by 7 planets, 3 of which are located in the habitable zone, has demonstrated that these types of planetary systems around dwarf stars are very common. Such systems are well suited for the study of exoplanets. In particular, the search for bio-signatures in the atmospheres of planets in the habitable zone around M-stars will be a high-priority science goal of future space missions. The mid-infrared (mid-IR) band between 3 and 15 microns is probably the best available band for this science, because the band contains spectral lines of methane, ozone, and nitrous oxide. The coexistence of those in a planet's atmosphere would be a very strong indicator for life on the planet. The Origins Space Telescope's (OST) mid-IR transit spectrometer will be the instrument of choice to detect these bio-signatures in exoplanets around M-dwarfs. However, current mid-IR detectors are based on impurity band conduction (IBC) devices such as Si:As detectors, which have significant problems with stability. As a result, those detectors are not expected to provide the required stability of ~5 ppm needed for a reliable detection of the aforementioned spectral lines. While efforts are under way to improve IBC detectors, it is unclear how far the performance can be improved. We describe alternative detector technologies and a calibration system we are funded to demonstrate, which in combination promise to achieve the required stability.
      • 05.0312 Evaluating the LUVOIR Coronagraph Sensitivity to Telescope Aberrations Roser Juanola Parramon (NASA - Goddard Space Flight Center), Neil Zimmerman (), Tyler Groff (NASA - Goddard Space Flight Center), Matthew Bolcar (NASA Goddard Space Flight Center), Maxime Rizzo (NASA - Goddard Space Flight Center) Presentation: Roser Juanola Parramon - Thursday, March 7th, 11:25 AM - Lake/Canyon
        Direct imaging of exoplanets in their habitable zone is extremely challenging due to two main factors: the proximity of the planet to the parent star and the flux ratio between the planet and the parent star, usually of the order of 10^-10 in the visible. Future missions like the Large UV-Optical-Infrared (LUVOIR) Surveyor and the Habitable exoplanet Imaging Mission (HabEx) require large apertures and coronagraphs with active wavefront control to be able to suppress the starlight so faint planets can be detected and characterized adjacent to their parent star. The Extreme Coronagraph for Living Planet Systems (ECLIPS) is the coronagraph instrument on the LUVOIR Surveyor mission concept. It is split into three channels: UV (200 to 400 nm), optical (400 nm to 850 nm), and NIR (850 nm to 2.0 microns), with each channel equipped with two deformable mirrors for wavefront control, a suite of coronagraph masks, a low-order/out-of-band wavefront sensor, and separate science imagers and spectrographs. The Apodized Pupil Lyot Coronagraph (APLC) is one of the baselined mask technologies to enable 10^10 contrast observations in the habitable zones of nearby stars. The LUVOIR concept uses a large, segmented primary mirror (9-15 meters in diameter) to meet its scientific objectives. For such an observatory architecture, the coronagraph performance depends on active wavefront sensing and control and metrology subsystems to compensate for errors in segment alignment (piston and tip/tilt), secondary mirror alignment, and global low-order wavefront errors. For the two LUVOIR architectures (9-m unobscured telescope and 15-m obscured telescope), we evaluate the sensitivity to segment-to-segment tip/tilt, piston, power, astigmatism, trefoil and hexafoil errors, and to global errors such as X and Y bend, spherical, power and coma aberrations, among others. Here we present the latest results of the simulation of these effects for different working angle regions and discuss the achieved contrast for exoplanet detection and characterization under these circumstances. Finally, we show simulated observations using high-fidelity spatial and spectral models of planetary systems generated with Haystacks, setting boundaries for the tolerance of such errors.
      • 05.0314 The Large UV/Optical/Infrared (LUVOIR) Surveyor: Decadal Mission Study Update Jason Hylan (NASA - Goddard Space Flight Center), Matthew Bolcar (NASA Goddard Space Flight Center), James Corsetti (NASA - Goddard Space Flight Center), Tyler Groff (NASA - Goddard Space Flight Center), Andrew Jones (NASA - Goddard Space Flight Center), Bryan Matonak (), Sang Park (Smithsonian Astrophysical Observatory), Garrett West (NASA - Goddard Space Flight Center), Kan Yang (NASA Goddard Space Flight Center), Neil Zimmerman () Presentation: Jason Hylan - Thursday, March 7th, 11:50 AM - Lake/Canyon
        In preparation for the 2020 Decadal Survey in Astronomy and Astrophysics, NASA commissioned the study of four large mission concepts: the Large UV/Optical/Infrared Surveyor (LUVOIR), the Habitable Exoplanet Imager (HabEx), the far-infrared surveyor Origins Space Telescope (OST), and the X-ray surveyor Lynx. The LUVOIR Science and Technology Definition Team (STDT) has identified a broad range of science objectives for LUVOIR that include the direct imaging and spectral characterization of habitable exoplanets around sun-like stars, the study of galaxy formation and evolution, the exchange of matter between galaxies, star and planet formation, and the remote sensing of Solar System objects. The LUVOIR Study Office, located at NASA’s Goddard Space Flight Center (GSFC), is developing two mission concepts to achieve the science objectives. LUVOIR-A is a 15-m segmented-aperture observatory that would be launched in an 8.4-m fairing on the Space Launch System (SLS) Block 2 configuration. LUVOIR-B is an 8-m unobscured segmented aperture telescope that fits in a smaller, conventional 5-m fairing, but still requires the lift capacity of the SLS Block 1B Cargo vehicle. Both concepts include a suite of serviceable instruments: the Extreme Coronagraph for Living Planetary Systems (ECLIPS), an optical / near-infrared coronagraph capable of delivering 10^10 contrast at inner working angles as small as 2 λ/D; the LUVOIR UV Multi-object Spectrograph (LUMOS), which will provide low- and medium-resolution UV (100 – 400 nm) multi-object imaging spectroscopy in addition to far-UV imaging; the High Definition Imager (HDI), a high-resolution wide-field-of-view NUV-Optical-NIR imager. LUVOIR-A also has a fourth instrument, Pollux, a high-resolution UV spectro-polarimeter being contributed by Centre National d’Etudes Spatiales (CNES). This paper provides an overview of the LUVOIR science objectives, design drivers, and mission concepts.
      • 05.0315 Modern Wavefront Control for Exoplanet Coronagraph Imaging He Sun (Princeton University), N. Jeremy Kasdin (Princeton University) Presentation: He Sun - Thursday, March 7th, 04:30 PM - Lake/Canyon
        One of the most timely and important areas of astrophysics today is exoplanet science. Due to the success of NASA's Kepler mission, we now know that almost every star hosts at least one planet, and small rocky planets are the most plentiful. The next frontier of exoplanet science is direct imaging; that is, collecting light reflected off the planet and characterizing the constituents of its atmosphere. For small, dim, rocky planets this can only be done using a large, space-based telescope. Over the past decade, NASA has been studying several mission concepts for direct imaging along with the accompanying needed technology. In this paper we describe the plans for one of those technologies, a coronagraph with wavefront control, and its demonstration on NASA’s next large space telescope, the Wide Field Infrared Survey Telescope (WFIRST). A coronagraph is one of the two current technologies being examined for exoplanet direct imaging. By removing the diffracted starlight halo using a series of masks, a coronagraph is able to create high-contrast regions in the image plane, thus revealing companion exoplanets. Typically, in real exoplanet imaging telescopes, wavefront control is simultaneously introduced to cancel the detrimental influence of wavefront aberrations. Conventional wavefront control on ground-based telescopes mainly works by removing the wavefront phase aberrations caused by atmospheric turbulence. By placing wavefront sensors at the pupil plane, it directly senses the distorted wavefront and compensates by creating an opposite shape on deformable mirrors. Typically, a conventional wavefront control system can reach a contrast of 10^-6, which enables observations of large, self-luminous planets. However, to reach high enough contrast for an Earth-like planet image (below 10^-10), the inherent aberrations in the optical system, such as lens or mirror surface defects, also need to be canceled. In this case, wavefront sensors are no longer available because they introduce non-common-path errors. Modern wavefront control in space observatories instead applies focal-plane approaches: the control commands of the deformable mirrors are computed based only on the science camera images. In this paper, we present a brief survey of the current focal-plane wavefront control techniques, comparing their advantages and limitations. We start from the basic feedback control law, speckle nulling, which adjusts the deformable mirror shape based on the bright spots’ (speckles’) locations in the image plane. Then we move to the family of model-based controllers and explain the widely used optimal control laws, electric field conjugation (EFC) and stroke minimization. The recently developed linear dark field control (LDFC), which uses the previously neglected bright field measurements to derive the control commands, is also discussed in the paper. In addition, we also report our recent progress in adaptive wavefront control. This applies machine learning to correct the model errors in real time, so the correction speed and final achievable contrast are greatly improved. We finally describe the implementation of wavefront control on the coronagraph instrument for WFIRST and propose some future directions for the modern wavefront controllers.
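        To illustrate the model-based control laws mentioned above, here is a minimal sketch of a single electric field conjugation (EFC) step: a Tikhonov-regularized least-squares solve for the deformable-mirror command that cancels the estimated focal-plane field. The Jacobian, field estimate, and regularization value are placeholders; this is a generic textbook form, not the WFIRST CGI implementation.
```python
# One Tikhonov-regularized electric field conjugation (EFC) update (generic form).
import numpy as np

def efc_step(G: np.ndarray, E_est: np.ndarray, reg: float = 1e-6) -> np.ndarray:
    """G: (n_pix, n_act) complex Jacobian of the focal-plane field w.r.t. DM commands.
    E_est: (n_pix,) complex estimated dark-hole field.
    Returns the DM command vector that drives the estimated field toward zero."""
    # Stack real and imaginary parts so the problem becomes real-valued least squares.
    Gr = np.vstack([G.real, G.imag])               # (2*n_pix, n_act)
    Er = np.concatenate([E_est.real, E_est.imag])  # (2*n_pix,)
    lhs = Gr.T @ Gr + reg * np.eye(G.shape[1])     # regularized normal equations
    return -np.linalg.solve(lhs, Gr.T @ Er)
```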
      • 05.0317 The WFIRST CGI Integral Field Spectrograph: Requirements and Performance Predictions Tyler Groff (NASA - Goddard Space Flight Center), Neil Zimmerman (), Maxime Rizzo (NASA - Goddard Space Flight Center), Michael Mc Elwain (NASA Goddard Space Flight Center) Presentation: Tyler Groff - Thursday, March 7th, 04:55 PM - Lake/Canyon
        The WFIRST coronagraphic instrument (CGI) will demonstrate exoplanet spectroscopy using an integral field spectrograph (IFS). The CGI IFS, being designed and built at Goddard Space Flight Center, has a spectral resolution of R ~ 50 and is designed to accommodate a 20% bandpass spanning 600-970 nm. The IFS is principally targeting the abundance of methane features, with the primary coronagraph band being centered around 770 nm. Key to the performance estimates are the achievable signal-to-noise (SNR) ratios and the stability of the microspectra over the course of tens and hundreds of hours. As a technology demonstration for CGI, the ability to close a wavefront control loop around the IFS, maintain a stable dark hole, and provide time-resolved data that simultaneously spans spatial and spectral dimensions are crucial demonstrations for future observatories. The IFS is optimized both for coronagraphs and science observations with a potential future starshade. We highlight how the long duration observations and the requirements for both starshades and coronagraphs drive the IFS requirements and the calibrations required both on-orbit and on the ground. We also provide further detail on the optomechanical design, its stability based on thermal and structural predictions, anticipated performance, and operations concept of the CGI IFS. The impact of these performance metrics is projected into simulated data products, demonstrating cube extraction of noisy images and the subsequent planet spectrum that can be extracted from them. These demonstrations and performance predictions are key to future missions such as LUVOIR and HabEx, whose principal science case relies on efficient spectroscopy of exoplanets.
    • 05.04 Atmospheric Turbulence: Propagation, Phenomenology, Measurement, Mitigation Jack Mc Crae (Air Force Institute of Technology) & Noah Van Zandt (Air Force Research Laboratory)
      • 05.0401 Statistics of Target-induced Array Tilt in Coherently Combined Laser Array Engagements Milo Hyde (Air Force Institute of Technology) Presentation: Milo Hyde - Sunday, March 3rd, 04:30 PM - Lamar/Gibbon
        The significant savings in size, weight, and power afforded by the coherent combination of multiple fiber lasers has the potential to revolutionize directed energy applications. While major engineering hurdles have been overcome in recent years, several aspects of target-in-the-loop (TIL) phasing systems still need to be resolved before these systems can be deployed. One of these, originally studied by Tyler [J. Opt. Soc. Am. A, 29, 722], is sensing and ultimately correcting array tilt. Tyler showed that TIL phasing schemes cannot accurately estimate array tilt, which is the discrete representation of the aberration tilt. Array tilt steers the main array lobe off axis—transferring power into the grating lobes—thereby significantly reducing system performance. Array tilt can be locally (optical-path-length-differences between fibers feeding the array elements, thermal effects, etc.), atmospherically, or target induced. The first two, local and atmospheric array tilt, have been the subjects of several recent studies. The latter has not been studied and is the subject of this work. Here, combining both theory and simulation, I determine the array tilt variance induced by a resolved, speckle target. Assuming that the field illuminating the rough speckle target is equal to the focused field emitted from the array, I first find the analytical mutual intensity of the speckle field scattered back to the receivers in the array plane. Then, I compute the x and y array tilt variances present in the receiver-plane field by generating many independent instances of the received speckle field from the aforementioned mutual intensity. I also present the joint probability density function of the x and y array tilts. Lastly, I conclude with a brief summary and discussion of the impact of my results.
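        One common way to carry out the Monte Carlo step described above, drawing speckle-field realizations consistent with a prescribed mutual intensity, is to factor the mutual-intensity matrix and color complex circular-Gaussian noise with it. The sketch below assumes the mutual intensity J is supplied as a Hermitian positive-semidefinite matrix from the analytic model; the factorization choice is an assumption, not necessarily the author's.
```python
# Draw speckle-field realizations whose ensemble covariance equals a given mutual intensity J.
import numpy as np

def draw_speckle_fields(J: np.ndarray, n_draws: int, seed=None) -> np.ndarray:
    """J: (n, n) Hermitian positive-semidefinite mutual intensity.
    Returns (n_draws, n) complex circular-Gaussian fields with covariance J."""
    rng = np.random.default_rng(seed)
    vals, vecs = np.linalg.eigh(J)                 # robust for the semidefinite case
    L = vecs * np.sqrt(np.clip(vals, 0.0, None))   # J = L @ L.conj().T
    n = J.shape[0]
    w = (rng.standard_normal((n_draws, n)) + 1j * rng.standard_normal((n_draws, n))) / np.sqrt(2)
    return w @ L.T                                 # each row is one field realization
```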
      • 05.0402 Investigating the Outer Scale of Atmospheric Turbulence with a Hartmann Sensor Jack Mc Crae (Air Force Institute of Technology), Santasri Bose Pillai (Air Force Institute of Technology), Christopher Rice (), Steven Fiorino (AFIT) Presentation: Jack Mc Crae - Sunday, March 3rd, 04:55 PM - Lamar/Gibbon
        A Hartmann Turbulence Sensor (HTS) system has been used to study the outer scale of turbulence. The atmospheric turbulence power spectrum is usually presumed to obey the Kolmogorov power law within some inertial range, while at spatial frequencies outside this range, the power spectrum is expected to fall away from this curve. The outer scale is the spatial frequency where the low-frequency side of this roll-off occurs. In length units the outer scale is just the inverse of this spatial frequency. In the free atmosphere, this outer scale is presumed to be on the order of a hundred meters, but near the ground, the outer scale is expected to be on the order of the height above ground. The HTS used for this study has an aperture of 16” and employed a beam path that was around 5’ above the ground; thus the effects of the outer scale are expected to be minimal within the telescope aperture. However, by relying on the crosswind to move turbulence across the telescope aperture, much longer baselines can be achieved and outer-scale effects can be sought. The presumption that the dominant temporal variation in turbulence is wind-driven translation is called the Taylor Frozen Flow Hypothesis. When an outer scale is introduced into the Kolmogorov power spectrum, the resulting power spectrum is called the von Kármán power spectrum. The wave structure functions due to these two power spectra are very different, as the Kolmogorov spectrum leads to a structure function which increases without bound as the separation between points increases, whereas the structure function due to the von Kármán spectrum rolls over near the outer scale and becomes constant. Unfortunately, the structure function itself isn’t measured by the HTS. Instead, the HTS can observe the tilt differences between subapertures separated in space or time. The Taylor Frozen Flow Hypothesis can then be used to switch between time and space. It is clear in the experimental data that this presumption is largely correct for some of the cases studied. Some of the data sets were collected with the fortuitous condition that the wind was approximately perpendicular to the path, with the distance the wind carried the turbulence between frames nearly matching the subaperture spacing. The expected differential tilt variance between subaperture pairs rolls over as the subaperture spacing increases and approaches a constant value for both the Kolmogorov and von Kármán power spectra; however, in the case of the von Kármán spectrum this roll-over happens more clearly and the differential tilt variance exhibits a broad, weak peak near the outer scale. Also, in the case of the von Kármán spectrum the constant value approached is smaller. Comparisons between measured differential tilt variances and those predicted by theory allow some estimates to be made of the size of the outer scale.
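        For reference, the two refractive-index power spectra compared above are commonly written as follows, with the von Kármán form rolling over at the outer-scale wavenumber (the inner-scale factor is omitted here):
```latex
% Kolmogorov vs. von Karman refractive-index power spectra (kappa_0 = 2*pi / L_0):
\Phi_n^{\mathrm{Kol}}(\kappa) = 0.033\, C_n^{2}\, \kappa^{-11/3},
\qquad
\Phi_n^{\mathrm{vK}}(\kappa) = 0.033\, C_n^{2} \left(\kappa^{2} + \kappa_0^{2}\right)^{-11/6},
\qquad
\kappa_0 = \frac{2\pi}{L_0}.
```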
      • 05.0403 Wave Optics Modeling of Solar Eclipse Shadow Bands Hanyu Zhan (New Mexico State University), David Voelz (New Mexico State University) Presentation: Hanyu Zhan - Sunday, March 3rd, 05:20 PM - Lamar/Gibbon
        Just preceding and following the occurrence of a total solar eclipse, thin, wavy ribbons of light can be seen on the ground. These shadowy patterns, known as “shadow bands,” have an interesting historical story with regard to the explanation of their cause, but it is now generally accepted that they are a result of the sun’s light propagating through atmospheric turbulence as the solar crescent thins to a narrow filament. The narrowing of the source increases the spatial coherence of the light reaching the earth and this, combined with refraction and diffraction associated with turbulence, produces visible intensity variations on the ground. Previous studies have shown that the bands appear to move in a direction perpendicular to their elongation and that their contrast increases and band spacing decreases as a function of decreasing wavelength. In addition, as totality approaches the bands become more linear and aligned, their spacing decreases and their contrast increases. Using weak scintillation theory and an atmospheric turbulence model, Codona [Astronomy and Astrophysics 164(2), 415-427, 1986] presented a theoretical investigation that explains these observed features and suggests the turbulence mainly responsible for shadow bands is found within the bottom 2-3 kilometers of the atmosphere. In the work presented here, we propose a novel approach to model the shadow band phenomenon using a numerical wave optics simulation where atmospheric turbulence is modeled with a set of phase screens. A crescent-shaped source is modeled as many independent point radiators and the field from each point is assumed to be a plane wave when it reaches the earth’s atmosphere. These waves are individually propagated with a split-step Fresnel diffraction algorithm through phase screens that model the lower part of the atmosphere and the results are combined incoherently at a ground plane. The simulation produces intensity patterns, structures, motion and evolution of shadow bands that agree well with Codona’s theory and with actual observations during an eclipse. For example, the band orientation is parallel with the crescent and, as the crescent narrows, the patterns become more linear and organized while the contrast increases. This work is significant in being the first report of the modeling of the shadow band phenomenon using a numerical wave optics simulation. The simulation is useful for studying the effects of specific atmospheric conditions on shadow bands, but the approach can also be applied to other problems involving extended sources observed through turbulence.
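        A bare-bones version of the propagation kernel described above (thin phase screen followed by a Fresnel free-space step, implemented as an angular-spectrum transfer function) is sketched below; the grid, the screen generation, and the constant phase factor are omitted or assumed, and this is not the authors' simulation code.
```python
# One split-step increment: apply a thin phase screen, then Fresnel-propagate a distance dz.
import numpy as np

def propagate_through_screen(field, phase_screen, wavelength, dx, dz):
    """field: (n, n) complex array on a square grid with sample spacing dx (meters)."""
    field = field * np.exp(1j * phase_screen)      # thin-screen approximation
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                   # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function of free space (overall exp(i*k*dz) phase omitted).
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```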
      • 05.0404 Estimation of Fried’s Coherence Diameter from Differential Motion of Features in Time-lapse Imagery Santasri Bose Pillai (Air Force Institute of Technology), Jack Mc Crae (Air Force Institute of Technology), Steven Fiorino (AFIT) Presentation: Santasri Bose Pillai - Sunday, March 3rd, 09:00 PM - Lamar/Gibbon
        A method has been developed at the Air Force Institute of Technology to estimate atmospheric turbulence parameters from the turbulence-induced random, differential motion of features in the time-lapse imagery of a distant target. The variance of differential motion is a path-weighted integral of the refractive index structure constant, Cn2. The path weighting functions drop to zero at either end of the path, their peak locations depending on feature sizes and separations. Sub-aperture-sized features and separations have weighting functions with peaks close to the source end, while weighting functions for larger features and separations peak towards the camera end of the path. The weighting functions form a rich set and can be linearly combined to generate a desired weighting function, such as that of a scintillometer or Fried’s coherence diameter, r0. The time-lapse measurements can thus mimic the measurements of any turbulence measuring instrument. Since this is a phase-based technique, it has the potential to estimate turbulence over long paths where irradiance-based techniques suffer from saturation issues. Hence the method is of value to the directed energy community. Estimates from this method agreed very well with scintillometer measurements over a 7 km path. In the present work, the method is adopted to estimate r0 from the time-lapse imagery of an LED array. The 10 x 10 array of 5 mm LEDs is mounted on a tripod and positioned on the ground. It is imaged from the top of a tower 1.8 km away using an imaging system that comprises a telescope with a 57 mm aperture and a 925 mm focal length and a science camera. Images are captured at different times during the day with 1 ms exposure times at 32 frames/second. r0 is estimated by linearly combining the variances of differential motion corresponding to pairs of LEDs of different separations. The r0 estimates are compared to those obtained from co-located turbulence profiling instruments.
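        The weighting step described above can be sketched as a small non-negative least-squares problem: find weights that combine the per-pair path weighting functions into a target (e.g. r0-type) weighting, then apply the same weights to the measured differential-tilt variances. The inputs, shapes, and the use of non-negative least squares are placeholders for the measured and modeled quantities, not the authors' exact procedure.
```python
# Combine per-pair path weighting functions toward a target weighting, then apply
# the same weights to the measured differential-tilt variances.
import numpy as np
from scipy.optimize import nnls

def combine_for_target(W_pairs: np.ndarray, W_target: np.ndarray, variances: np.ndarray):
    """W_pairs: (n_path_points, n_pairs); W_target: (n_path_points,); variances: (n_pairs,)."""
    weights, residual = nnls(W_pairs, W_target)   # non-negative least-squares fit of the weights
    return weights @ variances, weights, residual
```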
      • 05.0405 Assessing Free-Space Optical Communications through 4D Weather Cubes Steven Fiorino (AFIT), Santasri Bose Pillai (Air Force Institute of Technology), Josiah Bills (Radiance Technologies), Brannon Elmore (Air Force Institute of Technology), Jaclyn Schmidt (Air Force Institute of Technology), Kevin Keefer () Presentation: Steven Fiorino - Sunday, March 3rd, 09:25 PM - Lamar/Gibbon
        This study investigates use of a novel data aggregation and interrogation tool, 4D Weather Cubes, and High Performance Computing (HPC) to further enlighten the ongoing debate regarding the potential for terrestrial laser free space optical (FSO) communications and benefits that might accrue on implementation of hybrid FSO architectures with a millimeter wave backup link. The 4D Weather Cubes were originally developed to accurately assess Directed Energy weapons and sensor performance (at any wavelength/frequency or spectral band) in the absence of field test and employment data. 4D Weather Cubes are the product of efficient processing of large, computationally intensive, National Oceanic and Atmospheric Administration (NOAA) gridded numerical weather prediction (NWP) data coupled to the verified and validated Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The 4D Weather Cubes, inclusive of both conventional meteorological parameters, as well as optical features such as atmospheric transmission and turbulence, initialized the High Energy Laser End to End Operational Simulation (HELEEOS) propagation code. HELEEOS provided an additional tier of aggregation through development of comparative percentile performance binning of FSO communication bit error rates as a function of wide-ranging azimuth/elevation, earth-to-space uplinks. The aggregated, comparative bit error rate binning analyses for different regions, times of day and seasons are relevant to point‐to‐point as well as evolving multi‐layer wireless network concepts.
    • 05.05 Image Processing Matthew Sambora (USAF)
      • 05.0501 Hyperspectral Image Classification Based on Logical Analysis of Data Ayman Ahmed (), Sara Ibrahim (Zagazig University), Soumaya Yacout (École Polytechnique) Presentation: Ayman Ahmed - Wednesday, March 6th, 04:30 PM - Elbow 1
        Hyperspectral imaging is a relatively new technique in the advancement of remote sensing. Earth observation technology and applications are migrating from plain imaging in a few spectral bands toward intensive spectral imaging. A hyperspectral image is composed of hundreds of very narrow, contiguous spectral bands, usually covering the visible, near-infrared, mid-infrared, and thermal infrared regions. Hyperspectral imaging spectrometers mostly adopt the scanning type or the push-broom type and can collect data in hundreds of spectral bands. Unlike a traditional imaging spectrometer, a hyperspectral spectrometer provides densely sampled spectral reflectance values for each pixel in the image rather than leaving large intervals between the bands. Classification of hyperspectral data aims at using this spectral information to distinguish between land cover types or materials in all pixels of the image. The classification of hyperspectral remote sensing images can be divided into supervised and unsupervised classification, parametric and non-parametric classification, and crisp and fuzzy classification. In this paper, we use the Logical Analysis of Data (LAD) approach to classify the spectral signature of each spatial pixel in the image. LAD is a supervised classification technique based on combinatorial and Boolean logical analysis and optimization theory. It can classify data into two or more classes; LAD generates patterns for each observation in order to distinguish one class from the others. The approach is based on three stages: 1) the transformation of all types of data into a binary form, 2) the generation of patterns that characterize and distinguish each class, and 3) the theory formation that establishes the model to be used for future classification of new, unclassified observations. In our experiment, the hyperspectral data is divided into two sets for training and testing. To illustrate the procedure of using LAD for hyperspectral data classification, we used a limited number of classes. The software cbmLAD is used to extract knowledge from the first set of data and to generate patterns that characterize and distinguish each class. The second set of hyperspectral data is used to test the accuracy of the model that was developed based on the generated patterns. Finally, the accuracy of the classification model is evaluated by calculating the classification accuracy based on k-fold cross-validation. The results showed that LAD has a classification accuracy comparable to that of other well-known machine learning techniques, such as artificial neural networks (ANN). Moreover, LAD's generated patterns offer unique explanatory power and interpretability of the obtained results. For future research, this property will be used to treat the hyperspectral data as partial data, due to environmental changes or changes of the spectral reflectance with time, and to classify new observations based on learning from partially observed phenomena or objects.
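        Stage 1 of the LAD workflow described above, the binarization of continuous attributes, can be sketched with simple per-band threshold cut-points as below; the cut-point choice is an assumption for illustration, and cbmLAD's own binarization and pattern-generation procedures may differ.
```python
# Stage-1 binarization: turn each continuous band value into binary indicators via cut-points.
import numpy as np

def binarize(spectra, cut_points):
    """spectra: (n_pixels, n_bands); cut_points: list of 1-D arrays, one per band.
    Returns a binary matrix with one column per (band, cut-point) pair."""
    columns = []
    for band, cuts in enumerate(cut_points):
        for c in cuts:
            columns.append((spectra[:, band] > c).astype(int))  # 1 if the band exceeds the cut
    return np.column_stack(columns)
```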
      • 05.0504 Correction of Etaloning Effects in Ground-based Hyperspectral Image Cubes of Jupiter Erandi Wijerathna (New Mexico State University), Emma Dahl (New Mexico State University), David Voelz (New Mexico State University), Nancy Chanover (New Mexico State University) Presentation: Erandi Wijerathna - Wednesday, March 6th, 04:55 PM - Elbow 1
        The New Mexico State University Acousto-optic Imaging Camera (NAIC) at the Apache Point Observatory 3.5-m telescope is collecting narrowband hyperspectral image cubes of Jupiter from 470-950 nm during the perijove passes of the Juno spacecraft. The NAIC observations of Jupiter’s uppermost cloud deck complement Juno’s infrared and microwave observations. NAIC utilizes an acousto-optic tunable filter which is a narrowband filter with an electronically adjustable center wavelength. The average spectral resolving power of the filter is R = 242. For operations prior to 2018, the focal plane used for NAIC was a 512x512 pixel^2, backside illuminated, high quantum efficiency CCD. However, the narrowband images show evidence of ‘fringing’, due to ‘optical etaloning’, which is an interference effect that occurs when some of the light incident on the detector penetrates the backside thinned sensor material and reflects off a rear surface or structure. The fringing is exacerbated at longer wavelengths because the silicon becomes more transparent. For much of our collected data, a flat field correction successfully removes the fringing from the science images. The flat field in this case is produced by quartz lamp illumination within the closed dome. However, for some absorption features, especially in Jupiter’s prominent CH4 bands at ~727 nm and ~890 nm, differences in the illumination spectrum of the flat field source and Jupiter leave residual fringing in the images. One approach for removing the fringing is to obtain a detailed thickness profile of the sensor and use it to model the fringing as a function of wavelength and pixel position. It is then possible to simulate the fringe patterns created by the science and flat field sources and remove the residual difference in the science images. In some circumstances, the sensor manufacturer can provide the physical attributes of the device to help determine the thickness function but the specific details for our commercial sensor are not available. However, because the NAIC instrument provides a sequence of images at small wavelength steps, it is possible to deduce the thickness function from the flat field image spectral data. Observation of the fringe pattern in the flat field images as a function of wavelength suggested we could assume a single layer thickness function. The thickness of the sensor has a parabolic-like variation as a function of pixel position in addition to finely spaced surface polishing marks. Using a mathematical interference (fringe) model, we were able to solve for the physical thickness profile at each pixel by minimizing the mean square error between the fringe model and the pixel spectral data. Simulated fringe frames created with the thickness function while accounting for the average spectral weighting of the science absorption features, were applied to correct the Jupiter images. This work is supported by Research Support Agreement 1569980 from the Jet Propulsion Laboratory, as a subaward of a NASA/Solar System Observations grant.
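        A toy version of the per-pixel fit described above models the single-layer fringes as a cosine in wavenumber and scans thickness for the minimum mean square error. The refractive index, fringe amplitude, and thickness grid below are simplifying assumptions, not values from the instrument.
```python
# Toy per-pixel thickness fit for a single-layer fringe (etalon) model.
import numpy as np

def fit_thickness(wavelengths_nm, pixel_spectrum, n_si=3.6, amp=0.05,
                  t_grid_um=np.linspace(5.0, 20.0, 3000)):
    """Return the thickness (um) whose cosine fringe model best fits one pixel's spectrum."""
    ratio = pixel_spectrum / np.median(pixel_spectrum)    # normalized fringe signal
    k = 1.0 / (wavelengths_nm * 1e-3)                     # wavenumber in 1/um
    errors = [np.mean((ratio - (1.0 + amp * np.cos(4.0 * np.pi * n_si * t * k))) ** 2)
              for t in t_grid_um]
    return t_grid_um[int(np.argmin(errors))]              # brute-force MSE minimization
```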
      • 05.0505 A Deep Learning Framework for Automatic Airplane Detection in Remote Sensing Satellite Images Wessam Hussein (MTC), Ehab Abouobaia (Military Technical College), Ahmed Abdelrhim (mtc) Presentation: Wessam Hussein - Wednesday, March 6th, 05:20 PM - Elbow 1
        Automated object detection in high-resolution remote sensing satellite images is a practical alternative to manual detection by professional specialists. However, it is more complex due to the varying size, type, orientation, and complex background of the objects to be detected. Deep learning is the state-of-the-art artificial intelligence technique for this task. The amount of labeled satellite imagery available for training a deep neural network is limited; therefore, transfer learning techniques were adopted. This paper proposes a framework for airplane detection based on a Convolutional Neural Network (CNN). The Faster Region-based CNN (Faster R-CNN) framework is used to perform automatic airplane detection through transfer learning. Inception v2 is added to the network for feature extraction to enhance detection accuracy. The problem of information loss caused by resizing large satellite images in the test phase is solved by adding a split layer before the input layer, together with a mosaic layer after the detection output layer. The dataset used to build and test the model is collected from Google Earth. Experimental results show that the proposed model is highly accurate for object detection in satellite images.
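        The split/mosaic idea described above can be sketched as tiling the large image into overlapping chips before detection and shifting the chip-local boxes back to full-image coordinates afterwards; the chip size, overlap, and box format below are assumed values, not the paper's.
```python
# Tile a large image into overlapping chips for detection, then map boxes back.
import numpy as np

def split(image: np.ndarray, chip: int = 600, overlap: int = 100):
    """Yield (row_offset, col_offset, chip_image) tiles covering the full image."""
    step = chip - overlap
    for r in range(0, max(image.shape[0] - overlap, 1), step):
        for c in range(0, max(image.shape[1] - overlap, 1), step):
            yield r, c, image[r:r + chip, c:c + chip]

def mosaic(detections):
    """Shift chip-local boxes back to full-image coordinates.
    detections: iterable of (row_offset, col_offset, [[x1, y1, x2, y2, score], ...])."""
    return [[x1 + c0, y1 + r0, x2 + c0, y2 + r0, s]
            for r0, c0, boxes in detections
            for x1, y1, x2, y2, s in boxes]
```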
    • 05.06 Optical Detection and Analysis for Space Situational Awareness (SSA) Michael Werth (Boeing Company)
      • 05.0602 SNR Modeling for Ground-based Daytime Imaging of GEO-satellites in the SWIR Grant Thomas (Air Force Institute of Technology) Presentation: Grant Thomas - Monday, March 4th, 09:00 PM - Elbow 1
        This research outlines the expected performance and limitations of a ground-based shortwave infrared (SWIR) sensor performing the daytime geosynchronous satellite (GEO) custody mission, using a generalized signal-to-noise ratio (SNR) model. Ground-based SWIR imaging is a low-cost and informative method of daytime GEO detection and characterization. Previous research has shown that the observed daytime signal from GEOs persists in the SWIR through twilight. Imaging in the SWIR requires only moderately sized telescopes (< 1 m) and relatively low-cost sensors; therefore, large numbers of imaging assets are potentially available. Extending the daytime custody window further into twilight hours by even a few minutes increases vital space situational awareness (SSA) and provides valuable information for future SSA architecture development. This research shows the projected benefits of SWIR-band imaging over visible imaging by GEO-belt position and season for daytime GEO satellite custody. Radiometric models of the satellite signal are developed assuming Lambertian reflectance and a generalized satellite geometry. Sky radiance estimates are modeled using atmospheric scattering effects from the Laser Environmental Effects Definition and Reference (LEEDR) tool. The satellite spectral signal is compared to the sky background assuming background-limited detection to determine SWIR bands of interest in terms of SNR. SNR trends for daytime custody of a generalized GEO satellite are presented for a full year using Dayton, OH as a representative ground site.
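        For context, a generic background-limited point-source SNR of the kind such an analysis rests on (signal and per-pixel sky rates in photo-electrons per second, with read noise and dark current neglected) can be written as below; this is the standard textbook form, not necessarily the exact expression used in the paper.
```latex
% Generic background-limited point-source SNR (read noise and dark current neglected):
\mathrm{SNR} = \frac{S_{\mathrm{sat}}\, t}{\sqrt{\left(S_{\mathrm{sat}} + n_{\mathrm{pix}}\, S_{\mathrm{sky}}\right) t}}
\;\approx\; \frac{S_{\mathrm{sat}}\, \sqrt{t}}{\sqrt{n_{\mathrm{pix}}\, S_{\mathrm{sky}}}}
\quad \text{when } n_{\mathrm{pix}}\, S_{\mathrm{sky}} \gg S_{\mathrm{sat}}.
```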
      • 05.0603 Imaging GEOs with a Ground-Based Sparse Aperture Telescope Array Michael Werth (Boeing Company) Presentation: Michael Werth - Monday, March 4th, 09:25 PM - Elbow 1
        Ground-based imaging of GEO satellites is a major area of technical interest in the field of Space Situational Awareness (SSA). To date, proposed GEO imaging systems have estimated costs of a hundred million dollars or more and often require some form of active illumination, a large number of large apertures, or a space-based asset. We propose a novel imaging array configuration of small optical telescopes with a number of key technical innovations for a relatively low-cost amplitude interferometry approach to the passive GEO imaging problem.
    • 05.07 Photonics and Lasers David Peters (Sandia National Laboratories) & Aleksandr Sergeyev (Michigan Technological University)
      • 05.0701 Experimental Simulations on the Laser Visualization of Flow Vortices Krishna Thakkar (SRM University), Akanksha Kesarwani (SRM Institute of Science and Technology), Karar Khan (Srm ), Rahul Sunil (SRM IST), Kannan B T (SRM Institute of Science and Technology ), Vinayak Malhotra (SRM University) Presentation: Krishna Thakkar - Wednesday, March 6th, 09:00 PM - Elbow 1
        Current classical hybrid engines suffer from low solid fuel regression rates, low volumetric loading and relatively low combustion efficiency. Combustion occurs in a boundary-layer flame zone distributed along the length of the combustion chamber about the fuel surface. Portions of the propellant may pass through the chamber without reacting, so secondary combustion chambers are often employed; these chambers add length and mass and may serve as a potential source of combustion instability. These drawbacks can be avoided in the Vortex Hybrid Rocket Engine (VHRE). This injection method generates a bi-directional, co-axial vortex flow field in the combustion chamber. The swirling high-velocity flow enhances heat transfer to the fuel surface, which in turn drives a high regression rate. Vortices are a major component of turbulent flow. The dynamics of vortices depend largely on the nozzle geometry, which in turn drives the mixing properties. The streamwise vorticity drastically alters the mass entrainment of a jet, and the efficiency of this vorticity in entraining fluid increases as the jet evolves downstream. An attempt was made to study the effects of various orifice geometries under different operating flow velocities on the characteristics of vortices, created with smoke, using a laser visualization technique. The nozzle geometries studied include circular and non-circular (square, triangle) sections. The characteristic features of the non-circular sections include improved large- and small-scale mixing in low- and high-speed flows, and enhanced combustor performance through improved combustion efficiency and reduced combustion instabilities and undesired emissions. For the square and triangular sections the effect of different angles was also observed. Further, straws and meshes were fixed inside the setup to ensure a uniform distribution of flow and to reduce turbulence and possible variation. Visualization of the flows was carried out in the vicinity of the orifice exit in order to identify flow regimes and to study coherence. Beyond this, many other potential space applications of vortex dynamics have been cited by different researchers, such as space debris removal systems, injectors, HVAC (heating, ventilation, and air conditioning), and nozzles. For its extended use in space technology, however, the supersonic aspects of vortices have to be considered.
      • 05.0702 Fundamentals and Applications of Resonant Leaky-mode Photonic Lattices Robert Magnusson (University of Texas at Arlington) Presentation: Robert Magnusson - Wednesday, March 6th, 09:25 PM - Elbow 1
        Nano- and microstructured films with subwavelength periodicity represent fundamental building blocks for a host of device concepts. Whereas the canonical physical properties are fully embodied in a one-dimensional lattice, the final device constructs are often patterned in a two-dimensional slab or film in which case we may commonly refer to them as photonic crystal slabs or metasurfaces. These surfaces are capable of supporting lateral modes and localized field signatures with propagative and evanescent diffraction channels critically controlling the response. Local Fabry-Perot and Mie mode signatures are observable by computations within the structural geometry. Indeed, there is a current controversy as to whether these local modes, or lateral leaky Bloch modes, generate the functional response. The subwavelength restriction of periodicity is usually maintained for effective devices; however, it is also possible to generate interesting spectral behavior when this is not satisfied leading to unexpected device concepts. The dominant second leaky stopband exhibits many remarkable physical properties including band-edge transitions and bound states in the continuum. The Fourier harmonic content of the spatial modulation is key to understanding the band dynamics of these lattices. Multi-resonance effects are observed when Bloch eigenmodes are excited with more than one evanescent diffraction channel with the resulting spectral response clearly understood by invoking this process. We show how materially sparse leaky-mode photonic lattices may be nearly completely invisible to one polarization state while being opaque to the orthogonal polarization state with this property existing over significantly wide spectral bands. We will discuss these key properties of leaky-mode lattices and present relevant device examples. These include wideband reflectors, nonfocusing spatial filters, ultra-sparse reflectors and polarizers, single-layer bandpass filters, and resonant sensors with representative fabricated example devices described as well. Interesting for aerospace applications including imaging and sensing, experimental results on wideband reflectors operating in the mid-IR spectral region spanning from 3 to 13 μm are presented along with demonstration of their design and fabrication.
    • 05.08 Microscopy for Life Detection Chris Lindensmith (Jet Propulsion Laboratory, California Institute of Technology)
      • 05.0801 Development of a Light-field Fluorescence Microscope for in Situ Life Searches in the Solar System Gene Serabyn (Jet Propulsion Laboratory), Kurt Liewer (Jet Propulsion Laboratory), Chris Lindensmith (Jet Propulsion Laboratory, California Institute of Technology), Jay Nadeau (Portland State University), J. Kent Wallace (Jet Propulsion Laboratory) Presentation: Gene Serabyn - Thursday, March 7th, 05:20 PM - Lake/Canyon
        With oceans and energy sources present on several outer solar system moons, it is natural to ask whether life might also be present on such bodies. To search for microbial life, 3-d imaging microscopes can efficiently sample large volumes of liquid, and can provide information on cellular morphology and structure, as well as on cellular motility. On the other hand, a 3-d fluorescence imaging microscope can provide complementary chemical composition information. Our approaches to 3-d holographic microscopy have been described earlier; here we describe our fluorescence imager concept. The ultimate goal is to combine both types of microscope into a lander instrument package. Light-field imaging is a technique that can be used to focus to different sample depths without the need for any mechanical focusing element. This technique generally has lower resolution than direct classical imaging, but makes up for it with the ability to reconstruct sample planes well beyond the normal imaging depth of field. As the ultimate spatial resolution is not required for the fluorescence imaging of sparse samples when one plans to correlate the fluorescence signal with a simultaneous higher-resolution 3-d holographic microscope image, a light-field fluorescence imager should be able to provide adequate resolution in this combined-microscope case. We first examine the parameters defining the light-field microscope, in order to optimize them for our application. Image reconstruction to different sample planes is effected with a very simple ray-trace algorithm that is also described. Our goal is to reach a suitable compromise between field of view, depth of field, and spatial resolution, within coincident volumes of view for the fluorescence microscope and the digital holographic microscope. Finally, we use a prototype laboratory light-field microscope to carry out performance demonstrations and to compare to performance predictions. Experimentally, with a microscope magnification in the range of 20 to 50, and without the use of additional resolution-enhancement techniques, one can provide a resolution of roughly 5 microns across a field of view on the order of a millimeter and a depth of field of a few hundred microns, thus meeting our performance goals.
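        The abstract describes reconstruction to different sample planes with a simple ray-trace algorithm; as a hedged, generic illustration of light-field refocusing (not necessarily the authors' algorithm), the shift-and-sum sketch below synthetically refocuses a stack of sub-aperture views. The `views` array layout and the `alpha` refocus parameter are assumptions made for illustration.

        ```python
        # Illustrative shift-and-sum refocusing over a grid of sub-aperture views,
        # a common light-field reconstruction approach; the paper's own ray-trace
        # algorithm may differ.
        import numpy as np

        def refocus(views, alpha):
            """views: (U, V, H, W) array of sub-aperture images; alpha: refocus
            shift in pixels per unit of sub-aperture offset from the array center."""
            U, V, H, W = views.shape
            uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
            out = np.zeros((H, W))
            for u in range(U):
                for v in range(V):
                    dy = int(round(alpha * (u - uc)))
                    dx = int(round(alpha * (v - vc)))
                    # Shift each view opposite to its parallax and accumulate.
                    out += np.roll(np.roll(views[u, v], dy, axis=0), dx, axis=1)
            return out / (U * V)
        ```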
      • 05.0802 A Multiwavelength Digital Holographic Microscope Architecture for Enhancing Life Detection J. Kent Wallace (Jet Propulsion Laboratory), Jay Nadeau (Portland State University), Manuel Bedrossian (California Institute of Technology), Gene Serabyn (Jet Propulsion Laboratory) Presentation: J. Kent Wallace - Thursday, March 7th, 09:00 PM - Lake/Canyon
        Digital holographic microscopy is a powerful method for microbial life detection in extreme environments. Holography greatly increases the diffraction-limited depth of focus of a sample by many orders of magnitude over classic microscopy. This is done via numerical reconstruction, resulting in an instrument with no moving parts, thereby making it an excellent match to flight instruments and harsh environments where mechanisms are both costly and risky. Our previous monochromatic designs have been architected to also be simple and very robust without any loss of sensitivity. Field tests of these instruments have borne out the benefits of these engineered attributes. Here we add a new functionality – the ability to measure the sample at three different wavelengths simultaneously. This capability allows us to generate pseudo-color images of the microbiological samples and thereby enhance our ability to identify and characterize species as well as sub-cellular features. The wavelength-dependent phase also allows us to bound the index of refraction of the samples, which can be used as a discriminator between animal and mineral. In this talk, we will describe the instrument design and build in detail, and illustrate its performance with measurements of microbial samples.
      • 05.0803 Digital Holographic Microscope Trades for Extant Life Detection Applications Chris Lindensmith (Jet Propulsion Laboratory, California Institute of Technology), Gene Serabyn (Jet Propulsion Laboratory), J. Kent Wallace (Jet Propulsion Laboratory), Jay Nadeau (Portland State University) Presentation: Chris Lindensmith - Thursday, March 7th, 09:25 PM - Lake/Canyon
        Optical microscopy is one of the key technologies needed for detection of extant life on other solar system bodies. Microscopic images can be used to identify the presence of cell-like objects and discriminate probable cells from other abiotic particles of similar scale through observations of morphology. Image sequences can be used to determine particle density through observation of Brownian motion, enabling discrimination of liquid-filled vesicles from solid mineral grains; non-Brownian motion that is also inconsistent with background flow can also indicate biotic particles. Phase-sensitive imaging modes allow measurement of index of refraction and can be used to image transparent cells that might otherwise require the addition of stains. Because of the likely limited energy available for replication on the moons of Jupiter and Saturn, potential unicellular life would likely be present only at very low concentrations, requiring a search through substantial volumes of material at very high resolution. We have been developing digital holographic microscopes (DHM) that address the need for high-resolution searches at low concentrations. Our DHM designs provide both the sub-micrometer resolution necessary to detect the smallest forms of life and the high throughput needed to do so at very low concentrations. A significant feature of the holographic recording is that all objects in a large volume can be recorded simultaneously, without the need for focus or tracking to image individual objects. We have demonstrated two promising DHM architectures for possible use in potential future life detection missions – one using conventional optics and one using gradient index optics in a “lensless” arrangement. We compare the two designs, their trade spaces, and the features that might make each preferable for specific applications.
  • 6 Remote Sensing Jordan Evans (Jet Propulsion Laboratory) & Darin Dunham (Lockheed Martin)
    • 06.01 Systems Engineering Challenges and Approaches for Remote Sensing Systems Todd Bayer (NASA Jet Propulsion Lab) & Karen Kirby (JHU-APL)
      • 06.0102 Impact of Simultaneous Movements on Perception of Safety, Workload and Task Difficulty in MRTO Maria Hagl (), Maik Friedrich (German Aerospace Center - DLR), Joern Jakobi (), Sebastian Schier Morgenthal (), Christopher Stockdale () Presentation: Maria Hagl - Monday, March 4th, 04:30 PM - Lamar/Gibbon
        Providing air traffic service to more than one aerodrome is a key concept within Remote Tower. So-called Multiple Remote Tower Operations (MRTO) are expected to be more cost-efficient and user-friendly. On the one hand, their anticipated benefit is to maintain smaller airports that are currently non-profitable due to low traffic numbers and high staff and tower maintenance costs. On the other hand, MRTO offers equally distributed and constant activity for air traffic controllers (ATCOs), with the expectation of lowering the risk of human error due to boredom or sleepiness at work. However, multiple-tasking challenges arise if one ATCO needs to handle traffic at three airports simultaneously, as combinations of visual, audio, vocal and haptic tasks need to be performed for more than just one location. Therefore, this paper addresses the impact of simultaneous movements on perceived safety, workload and task difficulty. Descriptive results show that with an increasing number of simultaneous movements, providing ATC is perceived as more efficiency-critical and more demanding in workload, and task difficulty increases as well. It was not tested whether the differences were significant, since the statistical conditions had not been met. Results show that no situation containing simultaneous movements was perceived as a threat to safety, to acceptable workload, or to the ability to provide ATC. The discussion shows why the impact of simultaneous movements might not only affect MRTO but also single remote or conventional tower environments.
      • 06.0103 Challenges and Solutions for Precision Solar Pointing on the ISS for the TSIS Instrument Patrick Brown (LASP, University of Colorado) Presentation: Patrick Brown - Monday, March 4th, 04:55 PM - Lamar/Gibbon
        The Total and Spectral Solar Irradiance Sensor (TSIS) measures total and spectral solar irradiance in order to continue the multi-decade-long records of these important physical quantities. TSIS was installed onto Site 5 of ELC-3 on the ISS in December 2017 and has been operating continuously since then. In order to collect its scientific measurements, TSIS requires precision solar pointing every orbit that is accomplished via a 2-axis gimbaled pointing system. When the ISS was initially selected in 2014 as the accommodation for TSIS, two main ISS challenges were identified that could have significant impacts on precision solar pointing: 1) the poorly understood base motion jitter environment, and 2) the complex structural obscurations that limit solar viewing opportunities. Both of these issues were of interest to not only TSIS but also to other external instruments that require stable pointing at external targets. This paper will briefly explain how these two challenges were accounted for in the TSIS design, and then the majority of the paper will explain how on-orbit measurements by the TSIS High-rate Fine Sun Sensor (HFSS) have been used to quantify each of these effects. For on-orbit base motion jitter, the following will be discussed: the observation approach, data reduction approach, example time domain and frequency domain results, and a summary of all the measurements. These results are significant, because they are nearly an order of magnitude smaller than previous estimates, yet they agree well with other recent ISS measurements including OPALS. It is hoped that providing these results in a published format will help to inform future payload providers during their design process. For on-orbit structural observations, the following will be discussed: the measurement and analysis approach, detailed 2D obscurations maps with comparisons to actual ISS photos, and descriptions of how this information aids in commanding and science data processing. These results are significant, because they clearly show how complex the ISS viewing environment is, and just as importantly, the measurement and analysis techniques may be valuable for other payloads. Finally, the TSIS on-orbit pointing performance will be presented and will show how the design approaches were able to successfully account for the ISS challenges. Specifically, it will be shown that the angular pointing performance of 4 arcseconds 1-sigma meets the 60 arcsecond requirement with ample margin in the presence of ISS base motion jitter. Lastly, it will be shown that sufficient viewing time per orbit has been achieved in the presence of ISS structural obscuration.
      • 06.0106 Orbit Maintenance Module for Tradespace Analysis Tool for Constellations Andris Slavinskis (Tartu Observatory/NASA Ames Research Center), Joel Mueting (NASA Ames Research Center), Sreeja Nag (NASA Goddard Space Flight Center / Ames Research Center (BAERI)) Presentation: Andris Slavinskis - Monday, March 4th, 05:20 PM - Lamar/Gibbon
        This paper presents an orbit maintenance module for Tradespace Analysis Tool for Constellations (TAT-C), a software package to explore a wide range of tradespaces to design constellations for Earth observation. As the tool is primarily meant for rapid pre-Phase A analysis, it has to be able to estimate trade-offs and overall performance parameters with simplified models on a personal computer in a reasonable time frame, i.e., without propagating orbits. The orbit maintenance module estimates the secular drift of relative orbital elements between pairs of satellites due to the gravitational 'J2' effects and the drift of the altitude due to the atmospheric drag. The J2 is a predominant term in the gravitational zonal harmonics which affects the right ascension of the ascending node, the argument of perigee and the mean anomaly. We estimate the drift of these elements between pairs of satellites using a fourth-order polynomial which is trained using machine learning and which depends on the inclination, the altitude and the initial angular separation in the true anomaly and the right ascension of the ascending node. An analytical model is used to predict the deorbiting rate depending on the initial altitude, the solar cycle, the satellite's mass, drag coefficient and area. In order to maintain a topology of a constellation, the drift of orbital elements is compensated using emulated orbital maneuvers, when satellites breach a user-defined threshold percentage of their nominal values. We assume simple orbital maneuvers (e.g., orbit phasing and Hohmann transfer) to determine the required delta-v, propellant consumption, frequency of maneuvers and time required to perform them. These parameters are provided as outputs of the TAT-C's orbit maintenance module which advises the user on trade-offs between various orbital regimes. The maneuver metrics can be used to determine the time available for observations, the contribution to the satellite mass, the influence on the cost, etc.
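        Two of the effects the module models can be illustrated with standard textbook relations, sketched below: the secular J2 drift of the right ascension of the ascending node and the delta-v of a Hohmann transfer used for altitude maintenance. This is not the TAT-C code, and the example altitude and inclination are placeholders.

        ```python
        # Back-of-the-envelope sketch of secular J2 nodal drift and Hohmann-transfer
        # delta-v, using standard formulas and constants (not the TAT-C module).
        import numpy as np

        MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
        RE = 6378137.0           # Earth equatorial radius, m
        J2 = 1.08262668e-3

        def raan_drift_rate(a, e, inc):
            """Secular RAAN drift (rad/s) for semi-major axis a [m], eccentricity e,
            inclination inc [rad]."""
            n = np.sqrt(MU / a**3)                    # mean motion
            p = a * (1.0 - e**2)                      # semi-latus rectum
            return -1.5 * J2 * (RE / p)**2 * n * np.cos(inc)

        def hohmann_delta_v(r1, r2):
            """Total delta-v [m/s] for a Hohmann transfer between circular orbits of
            radii r1 and r2 [m]."""
            dv1 = np.sqrt(MU / r1) * (np.sqrt(2 * r2 / (r1 + r2)) - 1.0)
            dv2 = np.sqrt(MU / r2) * (1.0 - np.sqrt(2 * r1 / (r1 + r2)))
            return abs(dv1) + abs(dv2)

        # Example: nodal drift at 550 km / 30 deg, and delta-v to reclaim 5 km of decay.
        print(np.degrees(raan_drift_rate(RE + 550e3, 0.0, np.radians(30))) * 86400, "deg/day")
        print(hohmann_delta_v(RE + 545e3, RE + 550e3), "m/s")
        ```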
      • 06.0109 A Framework for Heterogeneous Satellite Constellation Design for Rapid Response Earth Observations Ibrahim Sanad (University of British Columbia (UBC)) Presentation: Ibrahim Sanad - Monday, March 4th, 09:00 PM - Lamar/Gibbon
        Earth Observation (EO) satellite constellation design deserves further investigation for optimizing configurations that enhance space mission performance. Most constellation design methodologies focus on deploying multiple satellites in a manner that guarantees continuous Earth-space or inter-satellite coverage, for communications, or fast revisit times and short response times over a given region, for EO. In recent years, there has been significant interest in reducing the System Response Time (SRT) of EO satellites, both from international organizations such as the United Nations, for Disaster Risk Reduction (DRR), and from national organizations for defense and national security. This key performance indicator tells the user when, after the request submission, the produced image will be available. The best way to improve this metric is to use heterogeneous constellations, in which two different functional constellations are cross-linked: one is mainly for imaging, and the other is a communication constellation dedicated to relaying command delivery from the Earth station to the imaging satellites and data collection back to Earth. Previous works have proposed this scheme to explore its potential enhancement of system performance or to evaluate network performance by comparing candidate relay constellations for servicing remote sensing satellites. However, optimization and configuration of this scheme have not been introduced. Since the best heterogeneous configuration may require studying several constellation combinations, this paper presents a framework capable of generating thousands of heterogeneous constellation configurations based on predefined design variable ranges and sizing those configurations in terms of pre-defined measures of performance. Using Systems Tool Kit (STK) and its various add-on modules, we introduce multiple solutions to configure imaging and relay constellations that are optimal for their objectives. One of these solutions is an imaging constellation of 8 satellites equally distributed in 2 planes, a Sun-Synchronous Orbit (SSO) and a mid-inclination orbit. We select this constellation based on the global daily coverage percentage and the satellite sensor parameters. In order to reduce the maximum SRT, we select the best constellation parameters of a relay constellation on an equatorial Medium Earth Orbit (MEO) plane and the best location of a ground station (GS) as the receiving and transmitting GS.
      • 06.0110 VISIONS-3: Using Sounding Rockets and 3D Tomography to Analyze Ion Outflow Sophia Zaccarine (Embry-Riddle Aeronautical University) Presentation: Sophia Zaccarine - Monday, March 4th, 09:25 PM - Lamar/Gibbon
        VISualizing Ion Outflow via Neutral atom Sensing (VISIONS) and VISIONS-2 are sounding rocket missions with the goal of analyzing the auroral wind and cusp ion outflow using low-energy neutral atom (ENA) imaging at low altitudes of the ionosphere (P.I. Dr. Douglas Rowland, NASA Goddard). VISIONS-3 is the proposed follow-up mission, intended to use tomographic reconstruction with two launch vehicles. The overlap of line-of-sight regions from the two viewpoints of the sounding rockets provides boundary conditions to mathematically constrain the data analysis, similar to a CAT scan. Each payload will house at least two Energetic Energy Neutral Atom Imager (E-ENA) instruments, oriented to point in opposite directions out of the rocket. A data simulation is being created to model the line-of-sight regions from each rocket to deduce the intake of energized particles by each instrument detector. This data simulation will more fully define the benefits of launching two rockets rather than just one, as well as model the use of tomography. The Norway launch range was selected for the launch, with two Black Brant XII sounding rockets from Wallops Flight Facility launched 1-3 minutes apart with a launch azimuth separation of 6-16 degrees. The launch will be during the day to analyze poleward-moving auroral forms. The significance of this research is that insight into our ionosphere, magnetosphere, ion outflow, and the “boiling off” phenomenon of our atmosphere may provide understanding of how the atmospheres of other planets (e.g. Mars) disappeared. This would provide further insight into the life cycle of planets and atmospheres. Additionally, a heightened understanding of space physics could lead to the ability to predict which exoplanets may have an atmosphere and magnetic field like Earth's using deep space telescopes.
    • 06.02 Instrument and Sensor Architecture, Design, Test, and Accommodation Keith Rosette (Jet Propulsion Laboratory) & Matthew Horner (JPL)
      • 06.0201 Ocean Color Instrument Integration and Testing Susanna Petro (), David Sohl (NASA - Goddard Space Flight Center) Presentation: Susanna Petro - Sunday, March 3th, 04:30 PM - Dunraven
        This paper describes the plans, flows, key facilities, components and equipment necessary to fully integrate, functionally test, qualify and calibrate the Ocean Color Instrument (OCI) on the Plankton, Aerosols, Clouds, and oceans Ecosystem (PACE) observatory. PACE is currently in the design phase of mission development. It is scheduled to launch in 2022, extending and improving NASA's twenty-year record of satellite observations of global ocean biology, aerosols and clouds. PACE will advance the assessment of ocean health by measuring the distribution of phytoplankton, the small plants and algae that sustain the marine food web. It will also continue systematic records of key atmospheric variables associated with air quality and the Earth's climate. PACE's primary sensor, the OCI, is a highly advanced optical spectrometer that will be used to measure properties of light over portions of the electromagnetic spectrum. It will enable continuous measurement of light at finer wavelength resolution than previous NASA satellite sensors, extending key ocean color data records for climate studies. The color of the ocean is determined by the interaction of sunlight with substances or particles present in seawater, such as chlorophyll. By monitoring global phytoplankton distribution and abundance with unprecedented detail, the OCI will contribute to a better understanding of the complex systems that drive ocean ecology and its impacts on global fisheries. This paper will focus on the Integration and Test (I&T) activities for OCI while it is at the NASA Goddard Space Flight Center. The OCI integration consists of assembly and alignment of the rotating telescope, electronics box integration, fixed deck assembly integration, thermal systems integration and the final assembly and testing. This I&T phase will be followed by OCI calibration and characterization and by environmental tests, which include electromagnetic interference (EMI)/electromagnetic compatibility (EMC), vibration with sine sweep, acoustics, shock, thermal balance, thermal vacuum, mass properties and center of gravity. This paper will briefly discuss OCI shipment and delivery to the spacecraft vendor for observatory-level I&T as well as some launch preparation activities.
      • 06.0202 Overview of the TROPICS Flight Segment Andrew Cunningham (MIT Lincoln Laboratory) Presentation: Andrew Cunningham - Sunday, March 3th, 04:55 PM - Dunraven
        The Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of Smallsats (TROPICS) mission was selected by NASA in 2016 as part of the Earth Venture–Instrument (EVI-3) program. The MIT Lincoln Laboratory TROPICS constellation will provide rapid-refresh microwave measurements (median refresh rate of approximately 40 minutes for the baseline mission) over the tropics, observing the entire tropical storm lifecycle. The constellation consists of six CubeSats, two in each of three low-Earth orbital planes, with a nominal orbit altitude of 550 km and an inclination of 30 degrees. Each CubeSat hosts a high-performance radiometer with twelve microwave channels. The radiometer measurements provide atmospheric temperature profiles, water vapor profiles, and rain rate. The radiometer design is similar to two other MIT Lincoln Laboratory CubeSat programs, MicroMAS and MiRaTA, with improvements to meet stricter performance requirements and a longer mission life. The bus is under contract with a commercial vendor who will integrate and test the space vehicle. This paper will describe the mission and present various aspects of the radiometer and bus design, as well as assembly, integration and test.
      • 06.0203 Omniscopic Vision for Robotic Control. Dominique Meyer (), James Strawson (), Falko Kuester (University of California, San Diego) Presentation: Dominique Meyer - Sunday, March 3th, 05:20 PM - Dunraven
        Autonomous and semi-autonomous platforms increasingly leverage vision as a reliable, cheap and appropriate sensing method to understand their own state and that of the surrounding environment. Manufacturers want to maximize operational capabilities while enforcing limits that protect the robotic platforms and the environment. Consequently, we want to simultaneously maximize the information we have about the surroundings as well as the confidence in this information, while minimizing processing complexity. This paper explores the capabilities that high-resolution, fully spherical camera systems offer for robotic control. We highlight the effect of camera parameters (sensor and optical) and array layout (sensor count, placement, monoscopic/stereoscopic configuration and stereo disparity) on spatial resolution, field of view (FoV) and depth estimation with associated confidence. We propose a conceptual camera system that uses an array of 60 individually driven cellphone sensors to achieve a combined resolution of up to 780 megapixels with 360 x 120 degree coverage. This system provides stereoscopic pairs from which depth is easily derived, while maintaining a resolving power of 3 cm at 100 m with a frame rate of up to 30 Hz. The assembly highlights a novel vision capability for ground vehicles, where object detection and odometry are enabled for “far-ahead” planning and safe operation of vehicles in all conditions. This paper illustrates design validations for the full system and test results of a single ring of the 4-ring (16-camera array subset) design that demonstrates these vision capabilities. We evaluate system reliability and sensing performance in high-dynamic-range lighting conditions, and illustrate the data handling of this data-intensive workflow. The physical system is tested in indoor and outdoor scenes with varying light conditions, while stationary and in motion. Various applications for this ultra-high-resolution omniscopic array and other conceptual designs are proposed.
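        The depth-from-stereo claim can be illustrated with the classic pinhole relation Z = f·B/d and its first-order error propagation; the focal length, baseline and disparity noise below are hypothetical values, not the prototype's parameters.

        ```python
        # Minimal depth-from-disparity relation for one stereoscopic pair in an
        # array; all camera parameters below are hypothetical.
        def depth_from_disparity(focal_px, baseline_m, disparity_px):
            """Classic pinhole stereo relation: Z = f * B / d."""
            return focal_px * baseline_m / disparity_px

        def depth_uncertainty(focal_px, baseline_m, depth_m, disparity_sigma_px=0.5):
            """First-order depth error for a given disparity measurement error."""
            return depth_m**2 * disparity_sigma_px / (focal_px * baseline_m)

        # Example: 4000 px focal length, 0.3 m baseline, target at 100 m.
        z = 100.0
        d = 4000 * 0.3 / z                          # expected disparity (12 px)
        print(d, depth_uncertainty(4000, 0.3, z))   # ~4 m depth error at 0.5 px noise
        ```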
      • 06.0207 Improving UAVSAR Results with GPS, Microwave Radiometry, and QUAKES Topographic Imager Andrea Donnellan (Jet Propulsion Laboratory, California Institute of Technology), Yunling Lou (Jet Propulsion Laboratory), Curtis Padgett (Jet Propulsion Laboratory), Alan Tanner (Jet Propulsion Laboratory), Brian Hawkins (), Jay Parker (Jet Propulsion Laboratory), Adnan Ansar (Jet Propulsion Laboratory), Michael Heflin (Jet Propulsion Laboratory), Ronald Muellerschoen (JPL) Presentation: Andrea Donnellan - Sunday, March 3th, 09:00 PM - Dunraven
        UAVSAR is NASA’s airborne interferometric synthetic aperture radar (InSAR) platform. The instrument has been used to detect deformation from earthquakes, volcanoes, oil pumping, landslides, water withdrawal, landfill compaction, and glaciers. It has been used to detect scars from wildfires and damage from debris flows. The instrument performs well for large changes or for local small changes. Determining subtle changes over large areas requires improved instrumentation and processing. We are working to improve the utility of UAVSAR by including GPS station position results in the processing chain, and adding a topographic imager to improve estimates of topography, 3D change, and damage. We are also exploring the benefit of microwave radiometry to mitigating error from water vapor path delay. A goal is to determine 3D tectonic deformation to millimeters per year at ~100 km plate boundary scales and to understand surface processes in areas of decorrelated radar imagery.
      • 06.0208 Estimation of Stellar Instrument Magnitudes Based on Synthetic Photometric Spectrum Rui Lu (Beijing Institute of Control Engineering) Presentation: Rui Lu - Sunday, March 3th, 09:25 PM - Dunraven
        Star trackers are widely used to determine the orientation (or attitude) of a spacecraft with respect to the stars in inertial space. Stellar instrument magnitudes are of great importance for star trackers. Many methods have been proposed to estimate stellar instrument magnitudes, normally assuming a dependence on the color indices of one of the standard photometric systems. The dependence among color indices is assumed to take the form of a linear, quadratic or fourth-order polynomial fitting function with one or multiple color-index variables. Generally speaking, there are three drawbacks to fitting methods. First, the assumed dependence does not hold for stars of all spectral classes over the whole wavelength range. Second, none of the methods can be universal, as not all spectral photometric data are available. Moreover, fitting methods may lead to large errors when higher-order terms are discarded. To overcome these drawbacks, a novel method for computing stellar instrument magnitudes is proposed in this paper, which convolves the synthetic photometric spectrum with the optical transmissivity of the star tracker, taking into account the photon-counting nature of modern imaging detectors. This procedure is consistent with the definition of the stellar instrument magnitude and requires no fitting assumption. As no higher-order terms are discarded, the proposed method can achieve much higher accuracy than existing methods. To validate the proposed method, synthetic spectrum data from the Next Generation Spectral Library (NGSL) catalogue are used. In this dataset, flux, flux error, and dispersion are given for 144 spectral classes with a step of 1.5 Å. The stellar instrument magnitude can therefore be computed accurately by numerical integration. Furthermore, uncertainty is added to the synthetic photometric spectrum data in order to verify the robustness of the proposed method. Experiments on the synthetic spectrum dataset show that the computation accuracy of the proposed method decreases as the deviation of the synthetic spectrum data increases. The root-mean-square error is 0.00165 for a deviation of 5% and 0.010123 for a deviation of 30%. Results of methods using linear fitting functions of the I-V, B-V and R-I color indices are 0.08001m, 0.17417m and 1.04777m respectively, which is still worse than the result of the proposed method using spectrum data with 30% deviation. Normally, the deviation of synthetic spectrum data is less than 10%; in this case, the root-mean-square error is about 0.00343, only 4.2% of the result of the best method using a linear fitting function of I-V. In short, the proposed method outperforms state-of-the-art stellar instrument magnitude computation methods, with an improvement of almost 95.8%, which is meaningful for developing high-accuracy navigation catalogs, especially for star trackers with accuracy better than 1". As no assumption is needed, the proposed method can be widely used to compute stellar instrument magnitudes for stars of various spectral classes with high accuracy.
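        A hedged sketch of the photon-weighted synthetic photometry integral described above follows: the instrument magnitude is obtained by integrating the stellar flux density against the end-to-end transmissivity with a λ/(hc) photon weighting. The zero point and the box-shaped transmissivity below are placeholders, not a real star tracker's response.

        ```python
        # Sketch of a photon-weighted synthetic photometry integral: integrate the
        # stellar flux density against an assumed end-to-end transmissivity with a
        # lambda/(h*c) weighting to reflect photon counting.
        import numpy as np

        H = 6.62607015e-34   # Planck constant, J*s
        C = 2.99792458e8     # speed of light, m/s

        def instrument_magnitude(wavelength_m, flux_wm2m, transmissivity, zero_point=0.0):
            """wavelength_m: grid [m]; flux_wm2m: F_lambda [W m^-2 m^-1];
            transmissivity: end-to-end optical + detector response on the same grid."""
            photon_rate = np.trapz(flux_wm2m * transmissivity * wavelength_m / (H * C),
                                   wavelength_m)          # photons s^-1 m^-2
            return -2.5 * np.log10(photon_rate) + zero_point

        # Toy example: flat spectrum through a box-shaped passband on a ~1.5 A grid.
        wl = np.linspace(400e-9, 900e-9, 3334)
        flux = np.full_like(wl, 1e-9)                     # placeholder F_lambda
        T = ((wl > 500e-9) & (wl < 800e-9)).astype(float) # placeholder transmissivity
        print(instrument_magnitude(wl, flux, T))
        ```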
    • 06.04 Radar Signal Processing Thomas Backes (Thomas D. Backes) & Donnie Smith (Waymo)
      • 06.0402 Design of Multilayer Airborne Radar Data Processor Narasimhan R S (Center for Airborne Systems, DRDO), Aparna Rathi (Electronics and Radar Development Establishment Bangalore) Presentation: Narasimhan R S - Friday, March 8th, 11:00 AM - Dunraven
        The paper proposes the design of a multiple-layer, robust, intelligent airborne radar target tracking system. The design discusses the algorithms that aid the airborne tracking process, different techniques of clutter mitigation at the data processor, false track suppression, and techniques for improving track maintenance functionality. The objective of the design is to develop a practical, ready-to-deploy, modular airborne multiple-target tracking system. The design is based on a layered approach wherein each layer of the tracking system is designed to meet a specific function. The innovation of the paper is the partitioning of airborne target tracking algorithms into different layers and the seamless integration of these layers. The proposed approach has three layers, with each layer performing a specific function. The first layer, called the pre-processing layer, has the main functionality of clutter mitigation and suppression of unwanted plots. The second layer is targeted at tracking non-maneuvering targets and detecting target maneuvers. The main functionality of the third layer is to track maneuvering targets. The paper also discusses the interaction between these tracking layers. The airborne radar considered here is a pulse Doppler coherent radar employing Medium Pulse Repetition Frequency (MPRF) waveforms and capable of measuring target range, azimuth, elevation and range rate. The radar also provides additional information about the plots, such as signal-to-noise ratio (SNR) and estimates of measurement accuracy. The plot quality is based on a cluster goodness factor computed as the intra-cluster dispersion during range-range rate ambiguity resolution and target detection. This additional information is a byproduct of the signal processing techniques adopted in a pulse Doppler airborne radar system. The tracker design proposed here is based on effectively utilizing the above information from the signal processor. Some of the salient issues addressed in the paper are the suppression of false tracks arising from clutter leaks, ground moving targets, windmills, ghosts and multipath detections. False tracks could clutter the air situation picture and penalize radar resources. The algorithms focus on the reduction of false and unwanted tracks in airborne radar and the improvement of detection performance through feedback from the non-maneuvering and maneuvering target tracking layers. The layered design provides advantages in terms of reusability, configurability and maintainability, and is scalable and adaptive. The concept of layering facilitates efficient usage of all algorithms. The layered software concept alleviates the burden placed on combat mission commanders, since some of the tasks intelligently collate all information along with historical data, resolving ambiguous tracks whenever necessary, rejecting improbable data and adapting automatically to environmental changes.
      • 06.0406 A Time-varying Subcarrier Phase Encoded Radar Waveform Thomas Backes (Thomas D. Backes) Presentation: Thomas Backes - Friday, March 8th, 11:25 AM - Dunraven
        Direct Digital Synthesis (DDS) technology has allowed for nearly arbitrary radar waveform design. In this paper, a pulsed radar signal having multiple time-varying subcarriers with phase encoding is described. The subcarriers are frequency-separated by the inverse of the duration of a phase element, or chip. However, the duration of each phase-encoded chip and its associated separation varies with time over the entire pulse window. We discuss the use of phase sequences that improve the periodic autocorrelation function of the signal and demonstrate the resulting ambiguity function for these signals. The design goal of low variance in the power envelope is also described.
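        A simplified, fixed-chip-duration version of such a waveform can be sketched as follows (the paper's time-varying chip durations and optimized phase sequences are not reproduced): subcarriers are spaced by the inverse chip duration, and each chip on each subcarrier carries a phase drawn from a code sequence.

        ```python
        # Simplified sketch of a multi-subcarrier phase-encoded pulse with fixed
        # chip duration; the time-varying chip scheme in the paper is not shown.
        import numpy as np

        def subcarrier_phase_pulse(phase_codes, t_chip, fs):
            """phase_codes: (n_subcarriers, n_chips) array of phases [rad];
            t_chip: chip duration [s]; fs: sample rate [Hz]."""
            n_sub, n_chips = phase_codes.shape
            n_per_chip = int(round(t_chip * fs))
            t = np.arange(n_chips * n_per_chip) / fs
            df = 1.0 / t_chip                       # subcarrier spacing = 1 / chip length
            chip_idx = np.minimum((t // t_chip).astype(int), n_chips - 1)
            signal = np.zeros_like(t, dtype=complex)
            for k in range(n_sub):
                phases = phase_codes[k, chip_idx]   # piecewise-constant phase per chip
                signal += np.exp(1j * (2 * np.pi * k * df * t + phases))
            return signal / np.sqrt(n_sub)

        # Example: 4 subcarriers, 16-chip random phase codes, 1-microsecond chips.
        codes = 2 * np.pi * np.random.rand(4, 16)
        x = subcarrier_phase_pulse(codes, 1e-6, 50e6)
        ```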
      • 06.0407 A Volterra-series Based Time-frequency Fusion Strategy for Sea-surface Weak Target Detection Min Zhang (Xidian University), Zhaohui Cai (Xidian University) Presentation: Min Zhang - Friday, March 8th, 11:50 AM - Dunraven
        In this work, we put forward a Volterra-series-based time-frequency distribution (TFD) fusion strategy for sea-surface weak target detection, which achieves more accurate detection than prior art. In order to enhance the performance of time-frequency techniques and suppress signal-dependent cross-term artefacts, the first two terms of the Volterra series expansion are utilized as the fusion rule to construct the fused TFD (FTFD). Herein, the outputs of the available time-frequency algorithms are treated as the variables of a weighted-averaging fusion model, and the optimal fusion coefficients are estimated from training sets, inspired by the structure of a convolutional neural network (CNN). In the pattern classification phase, a CNN trained by a culture algorithm (CA) aided simulated annealing resilient propagation (SARPROP) algorithm is utilized as the classifier. Experimental results demonstrate that the FTFD constructed by the proposed scheme can be adjusted to work for a wide range of signal-to-noise ratios and, furthermore, increases the detectability of sea-surface floating weak targets under various environmental conditions.
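        A hedged sketch of the stated fusion rule is given below: the fused TFD is formed from first-order (weighted-sum) and second-order (pairwise-product) Volterra terms of the individual TFDs. The coefficients here are placeholders; in the paper they are learned from training sets.

        ```python
        # Sketch of a two-term Volterra fusion of time-frequency distributions:
        # a weighted sum of the TFDs plus weighted elementwise pairwise products.
        # Coefficients are illustrative placeholders, not learned values.
        import numpy as np
        from itertools import combinations_with_replacement

        def volterra_fuse(tfds, linear_w, quad_w):
            """tfds: list of (F, T) time-frequency arrays; linear_w: one weight per
            TFD; quad_w: one weight per unordered pair (i, j) with i <= j."""
            fused = sum(w * X for w, X in zip(linear_w, tfds))
            pairs = list(combinations_with_replacement(range(len(tfds)), 2))
            for w, (i, j) in zip(quad_w, pairs):
                fused = fused + w * tfds[i] * tfds[j]   # elementwise quadratic term
            return fused

        # Example with two toy 64x64 TFDs.
        A, B = np.random.rand(64, 64), np.random.rand(64, 64)
        F = volterra_fuse([A, B], linear_w=[0.6, 0.4], quad_w=[0.05, 0.1, 0.05])
        ```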
    • 06.05 Information Fusion Stefano Coraluppi (Systems & Technology Research) & Craig Agate (Toyon Research Corporation)
      • 06.05 Keynote Presentation - Dunraven
      • 06.0501 Unifying Multi-Hypothesis and Graph-Based Tracking with Approximate Track Automata Lucas Finn (BAE Systems) Presentation: Lucas Finn - Thursday, March 7th, 05:20 PM - Dunraven
        We present an approach to the Multi-Target Tracking problem that occupies a "middle" ground between Multi-Hypothesis Tracking (MHT) and Graph-Based Tracking (GBT) (or "Tracklet Stitching"), reducing to each as a special case. To do this, we represent the current hypothesis-set as all those report-strings accepted by some automaton: Depending on input statistics (to what extent assignment probabilities are path-independent) and on an approximation-fidelity parameter, the automaton will naturally be either an MHT forest, or a min-cost-flow graph, or some intermediate structure. We introduce the formulation, describe algorithms to construct so-called Track Automata, give an Integer Linear Program (ILP) to extract globally optimal tracks from these automata, illustrate key special cases (including where the problem is solvable in polynomial time), and show results for simulated sensor data. In exchange for some (specified) approximation error and (polynomial) increase in ILP size, the technique is able to delay pruning and improve track purity, by implicitly representing many more hypotheses than an MHT forest can.
      • 06.0503 Ground Emitter Localization in the Presence of Multipath Craig Agate (Toyon Research Corporation), Matthew Varble (Toyon), Kenan Ezal (Toyon Research Corporation) Presentation: Craig Agate - Thursday, March 7th, 09:00 PM - Dunraven
        We address the problem of ground-based stationary emitter geolocalization in the presence of multi-path in which an emitter’s signal reaches the receiver through reflection from a surface. A particle filtering algorithm is applied to estimate the location of emitting sources in which a source could be the emitter itself or a mirror image of the emitter across a reflective boundary. Each reflective surface creates an ‘image’ of the emitter based on the relative geometry of the reflective surface, the emitter, and the receiver; hence, the receiver will measure the angle-of-arrival (AOA) to each image and possibly to the actual emitter, depending on the geometry. Time difference-of-arrival signals are also processed by the estimation algorithm to further constrain the locations of the images and emitter. We consider only single-bounce reflections of the emitter signal and do not assume that the direct signal is received. The algorithm does not assume any knowledge of the reflective surfaces. The algorithm is evaluated within a simulated environment on a variety of different scenarios, comparing results obtained with AOA measurements only as well as with both AOA and TDOA. Scenarios include cases in which the direct emitter signal is received (along with reflected signals) and cases in which the direct signal is not received. Additionally, both single and multiple receiver scenarios are considered. Results indicate that the algorithm performs well and handles the large uncertainty in emitter locations early in the estimation, which is challenging when using an estimator based on a single Gaussian representation of the emitter state probability density function.
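        The particle-filter machinery referenced above can be illustrated with a minimal angle-of-arrival update for a stationary emitter (the multipath image-source modeling and TDOA processing are not reproduced); the noise levels and geometry below are illustrative assumptions.

        ```python
        # Minimal particle-filter AOA update for a stationary emitter; a sketch of
        # the general estimation machinery, not the paper's multipath algorithm.
        import numpy as np

        rng = np.random.default_rng(0)

        def pf_aoa_update(particles, weights, rx_pos, aoa_meas, sigma=np.radians(2.0)):
            """particles: (N, 2) candidate emitter positions; rx_pos: (2,) receiver
            position; aoa_meas: measured bearing [rad] from receiver to emitter."""
            pred = np.arctan2(particles[:, 1] - rx_pos[1], particles[:, 0] - rx_pos[0])
            err = np.angle(np.exp(1j * (aoa_meas - pred)))       # wrapped angle error
            weights = weights * np.exp(-0.5 * (err / sigma) ** 2)
            weights /= weights.sum()
            # Resample when the effective sample size collapses.
            if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
                idx = rng.choice(len(weights), size=len(weights), p=weights)
                particles = particles[idx]
                weights = np.full(len(weights), 1.0 / len(weights))
            return particles, weights

        # Example: true emitter at (500, 300), single receiver at the origin.
        particles = rng.uniform(-1000, 1000, size=(5000, 2))
        weights = np.full(5000, 1.0 / 5000)
        z = np.arctan2(300.0, 500.0) + rng.normal(0, np.radians(2))
        particles, weights = pf_aoa_update(particles, weights, np.zeros(2), z)
        print(np.average(particles, axis=0, weights=weights))
        ```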
      • 06.0506 Track-to-track Data Fusion for Unmanned Traffic Management System Krzysztof Cisek (Norwegian University of Science and Technology), Edmund Brekke (Norwegian University of Science and Technology), Mohammed Jahangir (Aveillant Limited), Tor Johansen (Norwegian University of Science and Technology) Presentation: Krzysztof Cisek - Thursday, March 7th, 09:25 PM - Dunraven
        This paper considers data fusion for an Unmanned Traffic Management System (UTMS). It presents a track-to-track data fusion system built using a multi-target tracking modification of the recursive random sample consensus (R-RANSAC) algorithm. The system was developed to work with two independent data sources. The first source of data is a cooperative unmanned aerial vehicle (UAV) tracker; the second source is a non-cooperative L-band staring radar, where both trackers' main targets are UAVs. Tracking data from both sources are delivered to the data fusion system without covariance matrices, which is the reason for using a robust non-deterministic algorithm such as R-RANSAC. The main goal of the system is to assign tracks from both data sources to a particular target. There are three most likely scenarios with one target: the target has tracks from both sources, a track only from the non-cooperative radar, or a track only from the cooperative tracker. Another group consists of scenarios with multiple targets whose tracks are close to each other or cross paths, where each target can have both tracks or just one track from a source. Data from each source have a different rate, latency and noise level, which is also considered in the data fusion. In this paper, we show results of data fusion from simulated and field experimental tests using the multi-target tracking modification of the R-RANSAC algorithm. The experiments conducted in this paper include field tests made at Deenethorpe, UK. Tests were done with the use of the cooperative tracker, the non-cooperative radar and several small UAVs which performed flights according to previously planned scenarios. The experimental results show that the presented data fusion method has an acceptable level of matching error. The method is considered a suitable candidate for real-time operation.
      • 06.0507 Tracking Very Low SNR Targets with the Quanta Tracking Algorithm Darin Dunham (Lockheed Martin), Terry Ogle (Georgia Tech Research Institute), Peter Willett (University of Connecticut) Presentation: Darin Dunham - Thursday, March 7th, 09:50 PM - Dunraven
        The Quanta Tracking (QT) algorithm is a fairly new algorithm that shows very promising results tracking unresolved, dim targets in highly cluttered environments. Traditional detection and tracking approaches use thresholding and signal processing to declare measurements that are then fed into the tracker. The QT algorithm does this organically in an optimal manner, an approach called "track-before-detect". The algorithm requires no thresholding of the data, so all of the data are utilized. The question that always arises is: what is the lowest signal-to-noise ratio (SNR) target that can be tracked by this algorithm? Initial results showed that dim targets down to -15 dB could be tracked reasonably well. Furthermore, it was found that the QT algorithm's performance against ever dimmer targets degraded gracefully without falling off "a cliff." In this paper, we explore a slightly different aspect of low-SNR tracking of targets on a focal plane array (FPA): we vary the parameters of the first SNR equation in a different manner that we believe is more controlled.
    • 06.06 Multisensor Fusion Laura Bateman (Johns Hopkins University/Applied Physics Laboratory) & William Blair (Georgia Tech Research Institute)
      • 06.0601 Cubature Kalman Filter and Analytic Solution for Emitter Geolocation via Time Difference of Arrival Joel Dunham (Georgia Institute of Technology), Samuel Shapero (Georgia Tech Research Institute), Jimmy Simmons (Georgia Tech Research Institute) Presentation: Joel Dunham - Friday, March 8th, 08:30 AM - Dunraven
        Kalman filters have routinely been applied to geolocation problems using Time Difference of Arrival (TDOA) measurements due to the difficulty of obtaining analytic solutions to the hyperbolic isochrons. Extensive testing has been performed with Extended and Unscented Kalman Filters (EKFs and UKFs) for typical TDOA problems, and to a lesser extent Cubature Kalman Filters (CKFs). This paper expands that testing through further simulation to test the limits of CKFs and UKFs in TDOA applications, focusing on Time of Arrival (TOA) measurement noise and reduced dimensions for constraining the problem when a limited number of receivers are available. Analytic solutions to TDOA measurements offer potential performance increases, especially when coupled with Kalman filters. A recent paper by Dr. Samuel Shapero detailing an analytic solution to ellipsoid intersections for multistatic radar is applied to the problem of passive TDOA geolocation of an emitter. This paper develops the analytic solution for passive TDOA geolocation of an emitter and applies this solution through simulations, coupled with a Kalman filter. Limitations of this solution are analyzed, both for dimensionality and for robustness with respect to noise in the TOA measurements. Further, performance evaluations of computational time versus accuracy for this technique compared to direct use of Kalman filters on TDOA measurements are detailed. Together, the techniques evaluated in this paper provide data to aid in choosing the appropriate Kalman filtering method for geolocation through TDOA measurements.
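        As a hedged illustration of the CKF ingredients discussed above, the sketch below generates the 2n equally weighted points of the third-degree spherical-radial cubature rule and pushes them through a simple TDOA measurement model; the receiver geometry and covariances are placeholders, not the paper's simulation setup.

        ```python
        # Sketch of the third-degree cubature rule used by a CKF, applied to a
        # TDOA measurement function with illustrative receiver positions.
        import numpy as np

        C0 = 2.99792458e8  # speed of light, m/s

        def cubature_points(mean, cov):
            """2n equally weighted points of the 3rd-degree spherical-radial rule."""
            n = len(mean)
            S = np.linalg.cholesky(cov)
            offsets = np.sqrt(n) * np.hstack([S, -S])    # n x 2n
            return mean[:, None] + offsets                # each column is a point

        def tdoa_measurement(x, receivers):
            """TDOA [s] of emitter position x relative to the first receiver."""
            ranges = np.linalg.norm(receivers - x, axis=1)
            return (ranges[1:] - ranges[0]) / C0

        # Propagating the cubature points through the measurement model gives the
        # predicted measurement mean used in the CKF update.
        mean = np.array([1000.0, 2000.0])
        cov = np.diag([500.0**2, 500.0**2])
        receivers = np.array([[0.0, 0.0], [5000.0, 0.0], [0.0, 5000.0]])
        pts = cubature_points(mean, cov)
        z_pred = np.mean([tdoa_measurement(p, receivers) for p in pts.T], axis=0)
        print(z_pred)
        ```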
      • 06.0602 Nonlinear Algorithms for Combining Conflicting Information in Multisensor Fusion Jeffery Hurley (Georgia Tech Research Institute), Daniel Johnson (Georgia Tech Research Institute), Joel Dunham (Georgia Institute of Technology), Jimmy Simmons (Georgia Tech Research Institute) Presentation: Jeffery Hurley - Friday, March 8th, 08:55 AM - Dunraven
        One of the most difficult problems in advanced systems today is handling all the identification information that is received from an array of sensors. Dealing with all this information becomes even more of an issue when there is conflicting information. Historically, evidence theory has been used to combine information from different sensors, but handling conflicting information in an intuitive manner continues to be a challenging problem. Often, when conflicting information is detected, the set of information is processed for outliers in order to remove the conflicting information before fusing the remaining set. This paper describes the use of a nonlinear algorithm in conjunction with the foundations of evidence theory to handle the combination of sensor data in an intuitive manner while also managing conflicting information. The results from a novel nonlinear algorithm are contrasted and compared with results from other modern-day evidence theory algorithms.
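        For context, the classical evidence-theory combination that such approaches build on is Dempster's rule, sketched below with an explicit conflict mass K; the paper's nonlinear conflict-handling algorithm itself is not reproduced, and the sensor masses in the example are illustrative.

        ```python
        # Classic Dempster's rule of combination for two basic probability
        # assignments over a small frame of discernment, showing how the conflict
        # mass K arises. Example masses are illustrative only.
        from itertools import product

        def dempster_combine(m1, m2):
            """m1, m2: dicts mapping frozenset hypotheses to mass, each summing to 1."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb            # mass assigned to the empty set
            return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

        # Example: two sensors reporting on target identity {friend, hostile}.
        F, H = frozenset({"friend"}), frozenset({"hostile"})
        FH = F | H
        m_sensor1 = {F: 0.7, FH: 0.3}
        m_sensor2 = {H: 0.6, FH: 0.4}
        fused, K = dempster_combine(m_sensor1, m_sensor2)
        print(fused, "conflict:", K)   # a large K signals conflicting evidence
        ```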
      • 06.0604 Georectification of Imagery Taken aboard the ISS Utilizing Various Lightning Datasets. Skye Leake (Western Michigan University) Presentation: Skye Leake - Friday, March 8th, 09:20 AM - Dunraven
        The crew aboard the International Space Station capture many images of the Earth's surface and atmosphere with handheld digital single-lens reflex (DSLR) cameras. These images can contain interesting lightning and meteorological phenomena; however, they contain little metadata beyond basic parameters: camera angles are generally unknown and timing is not known to high precision. As a result, the scientific potential of many of these images goes unrealized. Current lightning sensing technologies used to produce the datasets referenced for geolocation have high sample rates, in the millisecond range, but comparatively poor spatial resolution relative to the high-resolution imagery attained by DSLR cameras. By cross-referencing Lightning Locating System (LLS) sensor data with the high-resolution imagery, the orientation of an image can be determined, allowing the calculation of latitude and longitude at each pixel. The process for stand-alone images was then extended to video of severe storms. Observations from the Chiba University Meteor Camera (METEOR) project containing high frame-rate video of severe storms were segmented following standard image processing methods. Lightning flashes were identified with high accuracy. The video was oriented using ground control points, allowing for computationally inexpensive approximations. Accurate timing allowed for the matching of geometries between datasets and the subsequent conversion of individual frames to a highly accurate latitude/longitude coordinate space. Lightning flashes were measured at up to the pixel, or ~30-meter, accuracy level. This geolocation effort now provides Marshall Space Flight Center's Earth Science Branch and Johnson Space Center's Earth Science and Remote Sensing Unit with tools for analysis of images and video from the International Space Station for scientific analysis of severe thunderstorms and fundamental processes associated with lightning propagation.
      • 06.0605 Don’t Be Greedy, Be Neighborly, a New Assignment Algorithm Bryan O'leary (Northrop Grumman Corporation) Presentation: Bryan O'leary - Friday, March 8th, 09:45 AM - Dunraven
        This paper proposes a new algorithm, the Neighborly algorithm, for solving the assignment problem. The new algorithm achieves much better results than the Greedy algorithm while still running in order N log2(N). The most efficient algorithm for optimally solving the assignment problem, the JVC algorithm, requires order N^3 operations. However, the Greedy algorithm is still in wide use today by target tracking practitioners due to its speed. The biggest problem with the Greedy algorithm is that it makes irrevocable, short-sighted assignments without regard to the globally optimal solution. To overcome this weakness, the Greedy algorithm is modified by incorporating some of the steps used by the Auction algorithm. To the author's knowledge, no new sub-optimal algorithms for solving the assignment problem have been proposed in recent years. Simulation results for the Neighborly algorithm compare favorably to optimal algorithms in sparse target environments but perform poorly, as expected, in very dense target environments. In both sparse and dense target environments, the Neighborly algorithm outperforms the Greedy algorithm in both computational efficiency and assignment results.
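        For reference, a minimal sketch of the baseline Greedy assignment (not the proposed Neighborly algorithm) is given below: all track-measurement costs are sorted and each pairing whose track and measurement are both still free is accepted irrevocably.

        ```python
        # Baseline greedy assignment sketch for comparison: sort all pairings by
        # cost and irrevocably accept each one whose track and measurement are
        # still unassigned. This is not the paper's Neighborly algorithm.
        import numpy as np

        def greedy_assignment(cost):
            """cost: (n_tracks, n_meas) matrix; returns a list of (track, meas) pairs."""
            n_t, n_m = cost.shape
            order = np.argsort(cost, axis=None)           # flat indices, cheapest first
            used_t, used_m, pairs = set(), set(), []
            for flat in order:
                t, m = divmod(int(flat), n_m)
                if t not in used_t and m not in used_m:   # irrevocable, local decision
                    pairs.append((t, m))
                    used_t.add(t)
                    used_m.add(m)
            return pairs

        print(greedy_assignment(np.array([[1.0, 9.0, 5.0],
                                          [2.0, 3.0, 8.0],
                                          [7.0, 4.0, 6.0]])))
        ```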
      • 06.0608 Fractional Floodwater-pixel Fusion for Emergency Response Flood Mapping Using SAR Data Youngjoo Kwak (ICHARM-UNESCO-PWRI) Presentation: Youngjoo Kwak - Friday, March 8th, 10:10 AM - Dunraven
        As a contribution to flood disaster risk management and reduction, it is very important to provide a rapid-response emergency map during water-driven catastrophes. During a flood season, multi-satellite flood mapping is an imperative process despite cloud cover and data latency. The purpose of this study was to provide rapid and accurate flood mapping by maximizing the use of space-borne Synthetic Aperture Radar (SAR) data fusion. Using the proposed floodwater detection algorithm, this preliminary study, intended as a case study in the operational emergency response to Bangladesh flooding, suggests that three main steps, i.e., classifying, decision fusing, and floodwater fractional fusion (F3), should be adopted to perform water change detection. First, unsupervised machine learning for land classification was employed to establish the backscattering characteristics of water and land surfaces, with reference to Landsat-8 data. Next, we investigated backscattering intensity for clustering from two different coinciding space-borne SAR datasets: the Japan Aerospace Exploration Agency (JAXA)'s Advanced Land Observing Satellite-2 (ALOS-2: L-band, HH polarization, PALSAR-2 scan mode, 25 m spatial resolution) and the European Space Agency (ESA)'s Sentinel-1 (S-1: C-band, VV polarization, 20 m Ground Range Detected (GRD) image). The corresponding flood maps are obtained by comparing images acquired on June 15 (ALOS-2) and June 18 (S-1), 2018, with pre-flood images acquired on May 7 (ALOS-2) and June 3 (S-1), 2018, respectively. Finally, the pixel-based F3 was conducted by comparing wavelet-based image fusion with a hierarchical split-based approach on the different coinciding space-borne SAR data. Comparing similarities and differences between the ALOS-2-derived and S-1-derived flood maps, the F3 resultant maps are good enough to detect large flood-affected areas, but not good enough to identify vegetated areas and individual buildings in urban areas on the floodplain. The F3 with the wavelet-based fusion approach has shown its potential as a major contributor to flood detection through the integration of multiple SAR data, despite different spatial and temporal resolutions. By comparing different space-borne SAR sensors for flood detection, the water fractional fusion using a wavelet-based approach will be employed to maximize the utilization of the final resultant flood maps, which complement each other, in order to overcome Earth observation limitations in revisit frequency and remove SAR water-like ambiguities.
      • 06.0609 End-to-End Performance Evaluation of Sensor Fusion and Bias Estimation for Multi-Sensor Hand-off Terry Ogle (Georgia Tech Research Institute), William Blair (Georgia Tech Research Institute), John Glass (Raytheon Company) Presentation: Terry Ogle - Friday, March 8th, 10:35 AM - Dunraven
        A method of evaluation is provided for the performance of an end-to-end simulation of sensor fusion and bias estimation in a multi-sensor scenario with hand-off between fusion centers. The simulated scenario includes a multi-level distributed fusion system in which two sensors are fused and then handed off for fusion with a third sensor. Joint bias estimation and fusion are performed at each fusion node. Track correlation is performed with Murty's k-best hypothesis algorithm. Performance is assessed at each fusion node with pattern accuracy, pattern consistency, pattern containment, and the probability of correct correlation. A joint track and bias containment test over the pattern is developed to determine whether the track picture is sufficient for hand-off to another sensor or fusion node. To demonstrate performance, simulations were performed to compare the probability of correct correlation at each fusion node, with and without pattern containment, for the various joint bias estimation methods at each fusion node. The baseline method used for comparison is to inflate the track covariance by a specified sensor bias statistic without estimating the actual sensor biases. Results show that the pattern containment metric is an acceptable indicator of successful sensor hand-off in terms of the probability of correct correlation.
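        As a rough illustration of the baseline described above, the sketch below inflates a track covariance by an assumed sensor-bias covariance and applies a simple chi-square consistency test between two track estimates. The numbers are illustrative, and the paper's pattern-level containment test is more elaborate than this per-track check.

```python
import numpy as np
from scipy.stats import chi2

def inflate_covariance(P_track, bias_cov):
    """Baseline from the abstract: account for unestimated sensor bias by adding
    a specified bias covariance to the track covariance instead of estimating it."""
    return P_track + bias_cov

def containment_test(x_a, P_a, x_b, P_b, prob=0.95):
    """Chi-square test on the difference of two track estimates; returns True if
    the tracks are statistically consistent (a simplified per-track check)."""
    d = x_a - x_b
    S = P_a + P_b
    m2 = d @ np.linalg.solve(S, d)          # squared Mahalanobis distance
    return m2 <= chi2.ppf(prob, df=d.size)

# Illustrative 2D position estimates of the same target from two fusion nodes
x1, P1 = np.array([100.0, 50.0]), np.diag([4.0, 4.0])
x2, P2 = np.array([103.0, 48.0]), np.diag([5.0, 5.0])
bias_cov = np.diag([9.0, 9.0])              # assumed sensor bias statistic
print(containment_test(x1, inflate_covariance(P1, bias_cov), x2, P2))
```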
    • 06.07 Applications of Target Tracking John Glass (Raytheon Company) & Yaakov Barshalom (University of Connecticut)
      • 06.0701 Informative Path Planning for Active Tracking of Agile Targets Per Boström Rost (Linköping University), Daniel Axehill (Linköping University), Gustaf Hendeby (Linköping University) Presentation: Per Boström Rost - Wednesday, March 6th, 08:30 AM - Dunraven
        This paper proposes methods to generate informative trajectories for a mobile sensor that tracks agile targets. The sensor is assumed to have a limited field of view and sensing range, and to be capable of obtaining noisy measurements of a target's position. The goal is to generate a sensor trajectory that maximizes the tracking performance, captured by a measure of the covariance matrix of the target state estimate. Since the targets maneuver, it is necessary to re-plan the sensor trajectory online when new measurements are obtained. This is done in a receding horizon fashion. The active target tracking problem is hence a combination of estimation and control, which is often referred to as informative path planning (IPP). When using sensors with limited field of view, the tracking performance depends on the actual measurements obtained as well as the trajectory of the target. This is a complicating factor for IPP, as the objective function of the optimization problem then depends on future measurements and the true target trajectory, both of which are naturally unavailable at the planning stage. Due to this uncertainty, the planning problem solved in each iteration of the receding horizon control loop becomes a stochastic optimization problem, where the expectation is taken with respect to the measurement noise and the target state. This paper proposes methods to solve the stochastic planning problem using approximations based on sampling of the predicted target distribution in different ways. An extended Kalman filter (EKF) is used to estimate the state of the target, and by sampling the predicted target distribution, a number of plausible trajectories are obtained. These candidate target trajectories are then used to approximate the expectation with respect to the target state. This is in contrast to prior work, where only the most likely target trajectory is considered. The proposed methods are evaluated using Monte Carlo simulations of different relevant tracking scenarios. A conventional IPP method is used as a baseline. It is shown that the proposed methods greatly improve the ability to track agile targets, with retained tracking accuracy, compared to the baseline method. The difference in performance is less prominent for slowly maneuvering targets. The simulations are also used to give some insight into the properties of the different suggested sampling schemes and other design variables.
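        A minimal sketch of the sampling idea, under simplifying assumptions that are not the authors' formulation (a linear constant-velocity prediction instead of the EKF, and a field-of-view "visibility" score instead of a covariance-based objective): plausible target trajectories are drawn from the predicted distribution and then used to score candidate sensor paths.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_target_trajectories(x0, P0, F, Q, horizon, n_samples):
    """Draw plausible future target trajectories from the predicted distribution."""
    trajs = np.empty((n_samples, horizon, x0.size))
    x = rng.multivariate_normal(x0, P0, size=n_samples)
    for k in range(horizon):
        x = x @ F.T + rng.multivariate_normal(np.zeros(x0.size), Q, size=n_samples)
        trajs[:, k, :] = x
    return trajs

def expected_visibility(sensor_path, trajs, fov_radius):
    """Monte Carlo estimate of how often the target stays inside the field of view."""
    pos = trajs[:, :, :2]                                   # target (x, y) positions
    d = np.linalg.norm(pos - sensor_path[None, :, :], axis=-1)
    return float(np.mean(d <= fov_radius))

# Constant-velocity model, 1 s steps, 10-step horizon (illustrative numbers)
T, H = 1.0, 10
F = np.block([[np.eye(2), T * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
Q = 0.5 * np.block([[T**3 / 3 * np.eye(2), T**2 / 2 * np.eye(2)],
                    [T**2 / 2 * np.eye(2), T * np.eye(2)]])
x0 = np.array([0.0, 0.0, 5.0, 0.0])
P0 = np.diag([10.0, 10.0, 2.0, 2.0])
trajs = sample_target_trajectories(x0, P0, F, Q, H, n_samples=500)

# Three candidate straight-line sensor paths with different starting offsets
candidates = [np.column_stack([5.0 * k + np.arange(H), np.zeros(H)]) for k in range(3)]
scores = [expected_visibility(c, trajs, fov_radius=30.0) for c in candidates]
print("best candidate sensor path:", int(np.argmax(scores)), "scores:", np.round(scores, 2))
```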
      • 06.0705 Current Challenges in Labeled Random Finite Set Based Distributed Multisensor Multi-object Tracking Augustus Buonviri (Sandia National Laboratories), Matthew York (Sandia National Laboratories), Keith Le Grand (Sandia National Laboratories), James Meub (Sandia National Laboratories) Presentation: Augustus Buonviri - Wednesday, March 6th, 08:55 AM - Dunraven
        In recent years, increasing interest in distributed sensing networks has led to a demand for robust multi-sensor multi-object tracking (MOT) methods that can take advantage of large quantities of gathered data. However, distributed sensing has unique challenges stemming from limited computational resources, limited bandwidth, and complex network topology that must be considered within a given tracking method. Several recently developed methods that are based upon the random finite set (RFS) have shown promise as statistically rigorous approaches to the distributed MOT problem. Among the most desirable qualities of RFS-based approaches is that they are derived from a common mathematical framework, finite set statistics, which provides a basis for principled fusion of full multi-object probability distributions. Yet, distributed labeled RFS tracking is a still-maturing field of research, and many practical considerations must be addressed before large-scale, real-time systems can be implemented. For example, methods that use label-based fusion require perfect label consistency of objects across sensors, which is impossible to guarantee in scalable distributed systems. This paper discusses the significant challenges that distributed tracking using labeled RFS methods brings. An overview of labeled RFS filtering is presented, the distributed MOT problem is characterized, and recent approaches to distributed labeled RFS filtering are examined. The problems that currently prevent implementation of distributed labeled RFS trackers in scalable real-time systems are identified and demonstrated within the scope of several exemplar scenarios.
      • 06.0707 Time-lapse Imaging for Studying Atmospheric Refraction: Measurements with Natural Targets Wardeh Al Younis (), Christina Nevarez (), David Voelz (New Mexico State University) Presentation: Wardeh Al Younis - Wednesday, March 6th, 09:20 AM - Dunraven
        The bending of light rays as they travel through the atmosphere due to the change in air density is a well-known phenomenon called atmospheric refraction. This behavior of the rays is due to the dependence of air density on temperature and pressure, which leads to a change in density as a function of height and hence to a refractive-index gradient in the atmosphere. Although this gradient is very small, it affects the performance of optical imaging, as it causes image-point offsets, stretching, and compression of objects when they are viewed through the atmosphere. For several years at New Mexico State University, we have been developing low-cost, mobile time-lapse camera systems to study atmospheric refraction. Our current system consists of a battery-powered single-lens reflex camera with a 400 mm focal-length lens that is protected in a weather-proof case. The camera collects sequences of hundreds of images of distant scenes and targets over periods of days and months at a typical interval of 5 minutes. The image features are analyzed to determine the atmosphere’s refractive structure over the image path. Refractive effects are a function of local weather; therefore, meteorological measurements are provided by a commercial Davis weather system that is set up near the camera. In past work, using man-made targets of opportunity (buildings, towers), we developed image processing correlation methods for determining the refractive gradient of the atmosphere as a function of height based on image displacement. Second-order refractive curvature effects that stretch or compress target features were also investigated. In this paper we discuss developments in using natural targets (terrain and vegetation features) for refractive studies. Natural targets provide the opportunity to study refraction in rural settings. One camera system was deployed at a remote location in the White Sands Missile Range in New Mexico and was pointed at a natural desert landscape, focusing on a mountain range. Day and night images from this system were collected from January 2018 to February 2018. A second camera system is currently stationed in the desert of the Jornada Experimental Range in New Mexico, focusing on a mountain range and desert valley. This system was set up in May 2018 with a planned operation of one year. We describe a point-tracking image processing approach and present refraction analyses of the images from the time-lapse systems. We discuss corrections for camera motion and correlations with meteorological data. Keywords: Atmospheric refraction; Time-lapse imaging; Remote mobile station; Natural targets.
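        A simplified version of the displacement measurement underlying such analyses can be sketched as a brute-force normalized cross-correlation search for the shift of a target patch between two frames. The arrays below are synthetic stand-ins; the authors' correlation method and camera-motion corrections are more involved.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def patch_shift(frame_ref, frame_new, top, left, size=64, search=10):
    """Brute-force search for the integer pixel shift that best re-locates a reference
    patch in a later frame (refraction shows up largely as vertical image offsets)."""
    ref = frame_ref[top:top + size, left:left + size]
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame_new[top + dy:top + dy + size, left + dx:left + dx + size]
            score = ncc(ref, cand)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift, best

# Synthetic stand-ins for two co-registered time-lapse frames (true shift: 2 pixels down)
rng = np.random.default_rng(3)
frame0 = rng.normal(size=(480, 640))
frame1 = np.roll(frame0, shift=(2, 0), axis=(0, 1)) + 0.05 * rng.normal(size=(480, 640))
print(patch_shift(frame0, frame1, top=200, left=300))
```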
      • 06.0710 Note on Sensor Resource Allocations: Higher Rate or Better Measurements? Yan Wang (Georgia Institute of Technologies) Presentation: Yan Wang - Wednesday, March 6th, 09:45 AM - Dunraven
        When tracking maneuvering targets with an electronically scanned sensor, the revisit rate and the waveform energy are selected and regulated during a tracking episode. When it is desired to improve tracking, one is faced with the decision of doubling the data rate or the measurement accuracy (i.e., the signal-to-noise ratio). Track accuracy and data association are two measures of performance for consideration. Similarly, when integrating two sensors, one may have the choice of synchronizing the sensors for coincident measurements (i.e., doubling the accuracy or halving the variance of the effective measurement) or noncontemporary measurements (i.e., doubling the rate of measurements). When choosing whether to increase the measurement accuracy or the rate, the answer is found by assessing the impact on track accuracy or data association. The error in the filtered state estimates reflects the track accuracy, while the error in the one-step-ahead predicted state estimate reflects the potential improvement in measurement-to-track association. In this paper, the tracking of maneuvering targets with a nearly constant velocity (NCV) Kalman filter and the maximum mean squared error (MMSE) will be utilized to study the impacts of doubling the measurement accuracy or rate. For each measurement case and maximum acceleration of the maneuvering target, the process noise variances that minimize the MMSE in the filtered position and the one-step-ahead predicted position are used to assess the impacts of doubling the measurement accuracy or rate. The analysis shows that doubling the measurement accuracy gives the greater reduction in filtered estimation error, while doubling the measurement rate gives the greater reduction in the error of the one-step predicted position (i.e., the better improvement in measurement-to-track association). Selection of the process noise variance that minimizes the MMSE in the one-step-ahead predicted position is new and is presented in the paper. Monte Carlo simulation results are given to verify the findings.
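        The trade can be explored with a simple steady-state NCV Kalman filter sketch in one coordinate: compare the filtered and one-step-predicted position variances when the measurement interval is halved versus when the measurement variance is halved. The numbers are illustrative, and this is not the paper's MMSE-based process-noise tuning procedure.

```python
import numpy as np

def ncv_steady_state(T, r, q, iters=2000):
    """Steady-state filtered and one-step-predicted position variances for a
    nearly-constant-velocity Kalman filter (discrete white-noise acceleration model)."""
    F = np.array([[1.0, T], [0.0, 1.0]])
    Q = q * np.array([[T**3 / 3, T**2 / 2], [T**2 / 2, T]])
    H = np.array([[1.0, 0.0]])
    R = np.array([[r]])
    P = np.diag([r, r / T**2])
    for _ in range(iters):
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        P = (np.eye(2) - K @ H) @ P_pred          # filtered covariance
    P_pred = F @ P @ F.T + Q                      # one-step-ahead predicted covariance
    return P[0, 0], P_pred[0, 0]

cases = [("baseline   ", ncv_steady_state(T=1.0, r=100.0, q=1.0)),
         ("2x rate    ", ncv_steady_state(T=0.5, r=100.0, q=1.0)),   # twice as many measurements
         ("2x accuracy", ncv_steady_state(T=1.0, r=50.0, q=1.0))]    # halved measurement variance
for name, (p_filt, p_pred) in cases:
    print(f"{name} filtered var {p_filt:7.2f}  predicted var {p_pred:7.2f}")
```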
    • 06.08 Guidance, Navigation and Control Terry Ogle (Georgia Tech Research Institute) & Christopher Elliott (Lockheed Martin Aeronautics Company and University of Texas at Arlington)
      • 06.0803 In-Flight Adaptive PID Sliding Mode Position and Attitude Controller Hailee Hettrick (Massachusetts Institute of Technology), Jessica Todd (Massachusetts Institute of Technology) Presentation: Hailee Hettrick - Wednesday, March 6th, 10:10 AM - Dunraven
        This paper describes the development and validation of a mass-property-resilient controller for position and attitude control of a free-flying satellite. Specifically, the proportional-integral-derivative (PID) sliding mode controller (SMC) was developed to account for inaccurate mass properties of small satellites, using sliding mode variables for each axis to adaptively determine the appropriate integral and derivative gains to achieve a commanded motion. The controller was validated both in simulation and in ground-based tests on the SPHERES (Synchronized Position Hold Engage Reorient Experimental Satellites) platform, small free-floating autonomous satellites used to study precision navigation and maneuvering. The derivation for the controllers was completed for a six degree-of-freedom environment, given that SPHERES has a microgravity testbed aboard the International Space Station. This derivation yielded a PID control law with three adaptation laws for both position and attitude control, resulting in a total of eight laws implemented in the aggregated control system. Each law has a tunable, positive parameter selected via repeated experimentation to determine which value provided rapid convergence. Stability of the control and adaptation laws was verified by applying Barbalat's Lemma to a candidate Lyapunov function. To validate the control and adaptation laws, translation and rotation tests were conducted on a ground-based testbed with three degrees of freedom (two translational and one rotational). The translation test (T1) required the SPHERES to move around the glass table along a commanded path while maintaining the desired attitude. The rotation test (T2) commanded the SPHERES to move to a specific starting position and then to spin -180° about its z-axis. During the five commanded maneuvers in the T1 simulation, the sliding variable converged to zero and the integral and derivative gains converged to a constant value in 10 seconds or less. T1 hardware results indicated a slower convergence rate, requiring approximately 20 seconds to complete a translation maneuver and for the sliding variable to converge to zero and the adaptive gains to converge to a constant value. In contrast, T2 results indicate that further refinement is required. In simulation, the SPHERES took 20 seconds to converge to its commanded initial position and 25 seconds to complete the commanded -180° rotation. The sliding variables for each axis took 30 seconds to converge to zero for the initial position. For the commanded rotation, the yaw variable and adaptive control gains took 20 seconds to converge to zero. On the testbed, SPHERES again took roughly 25 seconds to converge to the initial position, but it failed to converge to the commanded -180° rotation. This issue can be resolved with further tuning of the aforementioned parameters. These initial results indicate the feasibility of PID sliding mode position and attitude controllers providing resilience to inaccurate mass properties; however, further investigation and experimentation are required to shorten convergence times. Shortening these times will make the controllers more practical to apply on SPHERES outside of this validation scenario, as well as on small satellites and free-flyers with uncertain mass properties.
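        A one-axis toy version of the idea, under assumptions that are not the SPHERES implementation (illustrative gains, sigma-modified adaptation for boundedness, and a deliberately wrong assumed mass to mimic uncertain mass properties), might look like the following.

```python
import numpy as np

def simulate_adaptive_pid_smc(m_true=4.5, m_assumed=3.0, dt=0.005, t_end=20.0,
                              lam=1.0, kp=2.0, k_s=1.0, eps=0.1,
                              gamma=0.5, sigma=0.1):
    """One-axis position regulation with a sliding variable s = e_dot + lam*e and
    simple adaptation of the integral/derivative gains driven by |s| (with leakage
    for boundedness). Illustrative sketch only, not the SPHERES flight controller."""
    x, v = 1.0, 0.0          # initial position error (m) and velocity (m/s)
    ki, kd = 0.0, 0.0        # adaptive integral and derivative gains
    integ = 0.0
    for _ in range(int(t_end / dt)):
        e, e_dot = x, v                                  # regulate the error to zero
        integ += e * dt
        s = e_dot + lam * e                              # sliding variable for this axis
        ki += gamma * (abs(s) - sigma * ki) * dt         # adaptation laws (sigma-modified)
        kd += gamma * (abs(s) - sigma * kd) * dt
        u = -m_assumed * (kp * e + ki * integ + kd * e_dot + k_s * np.tanh(s / eps))
        a = u / m_true                                   # the true plant has a different mass
        v += a * dt
        x += v * dt
    return x, s, ki, kd

print(simulate_adaptive_pid_smc())   # final error, sliding variable, adapted gains
```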
      • 06.0807 Distributed Localization and Control of Quadrotor UAVs Using Ultra-Wideband Sensors Jing Wang (Bradley University) Presentation: Jing Wang - Wednesday, March 6th, 10:35 AM - Dunraven
        Recent years have seen significant progress in the development of unmanned aerial vehicle (UAV) technology. In particular, the use of multiple UAVs has received a lot of attention in both industry and the military. For instance, a group of small UAVs could be used in a search and rescue mission covering a region too dangerous for humans to search. In general, each individual UAV in a group has its own sensors and actuators. To fully coordinate the motion of multiple UAVs for certain common tasks, the fundamental questions become how to deal with sensing/communication among individual UAVs and how to design simple yet efficient local control strategies for each UAV. There exist extensive distributed cooperative control algorithms for general linear dynamical systems. Some results may be applicable to the distributed control of multiple UAVs. However, few results are available in terms of experimental implementation of algorithms on real UAVs, especially in the presence of various uncertainties, including system dynamics, sensing, and communication uncertainties. In this paper, we propose a new distributed control algorithm for quadrotor UAVs based on the use of ultra-wideband sensors for localization. The main objective is to design practically implementable distributed localization and hierarchical control algorithms for coordinated formation flying of UAVs and to implement them on a number of nano Crazyflie quadrotors. The Crazyflie 2.0 quadrotor UAVs produced by Bitcraze are adopted for experimental implementation. The realistic communication environment is modeled using an outage probability function, and accordingly a new communication-outage-probability-dependent collective potential function is utilized in the derivation of the distributed navigation control algorithm. Upon convergence, it can be shown that all agents achieve formation flocking and maintain the desired inter-agent distances determined by the allowable maximum outage probability. The proposed distributed hierarchical strategy and its experimental validation provide a useful design guideline for applications of UAV swarms in real situations. The detailed control algorithms and experiment results will be reported in the final version of the paper.
      • 06.0808 Three-dimensional Impact Angle Guidance Law for Precision Guided Munition Daniel Lee () Presentation: Daniel Lee - Wednesday, March 6th, 11:00 AM - Dunraven
        This paper proposes three-dimensional guidance laws to enhance the impact effectiveness of precision guided munitions. This requires a vertical impact angle while the maneuverability and overall flight time of the munition are limited. To that end, nonsingular terminal sliding mode control theory is applied, using two switching surfaces with a power-rate reaching law. The proposed guidance law allows a projectile to follow a trajectory that achieves the desired terminal impact angle in finite time with allowable control inputs while handling the coupled dynamic behavior of the munition. Furthermore, it prevents the singularity problem of the conventional terminal sliding mode controller. To verify the performance of the proposed guidance law against a stationary target, it has been applied to a realistic ballistic model that includes drag and gravity effects. Simulation results demonstrate the performance, and the limits under various launch angle conditions are investigated.
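        The power-rate reaching law mentioned above can be illustrated in scalar form: for 0 < alpha < 1 it drives the sliding variable to zero in finite time, and the reaching time has a closed form. This is a scalar sketch only, not the full three-dimensional guidance law.

```python
import numpy as np

def power_rate_reaching(s0=2.0, k=1.5, alpha=0.6, dt=0.001, t_end=5.0):
    """Integrate the power-rate reaching law s_dot = -k * |s|**alpha * sign(s),
    which reaches s = 0 in finite time for 0 < alpha < 1, and compare the
    numerically observed reaching time with the closed-form value."""
    s, t = s0, 0.0
    while abs(s) > 1e-6 and t < t_end:
        s += -k * abs(s) ** alpha * np.sign(s) * dt
        t += dt
    t_theory = abs(s0) ** (1.0 - alpha) / (k * (1.0 - alpha))
    return t, t_theory

print(power_rate_reaching())   # (numerical reaching time, analytical reaching time)
```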
      • 06.0809 Application of Star Tracker for Angular Rate Estimation by Local Adaptive Thresholding & PIV Method Fahime Barzamini (K N Toosi University) Presentation: Fahime Barzamini - Wednesday, March 6th, 11:25 AM - Dunraven
        The main objective of this paper is to extend the application of star trackers, the most accurate attitude determination devices, to the estimation of spacecraft angular rates. In order to calculate spacecraft angular velocity from a sequence of images under dynamic conditions, two main issues are addressed. The first is image quality improvement through adaptive thresholding: a threshold level for filtering the image against additive noise is chosen so as to be most effective in improving the results of the second part, a Particle Image Velocimetry (PIV) technique using Delaunay triangulation to calculate the spacecraft angular velocity. When a star tracker is used under dynamic conditions in the presence of angular rates, image blurring becomes a major challenge in detecting star centroids. In this paper, a locally adaptive thresholding method is applied that removes the image background noise using the local mean and mean deviation, improving the star tracker centroiding accuracy even at high slew rates. This method uses the integral image (summed-area table) of the main image as a reference for calculating the local mean; it does not require the standard-deviation computations used in other local adaptive techniques and is independent of the window size, which makes its implementation fast. Simulation results indicate that the performance of this algorithm is significantly high for noisy images, particularly when the signal-to-noise ratio is severely low. In the next step, after the images have been sufficiently denoised, the PIV method is applied to estimate the angular velocity. This method is based on tracking similar star patterns containing three or more star centroids in consecutive images without actually having to recognize the stars individually, which is computationally laborious; noise removal is therefore an important factor affecting the accuracy of satellite angular velocity estimation. The main contribution of this paper is the application of a local adaptive thresholding algorithm to star tracker images for use in these subsequent methods.
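        A minimal sketch of integral-image-based local-mean thresholding in the spirit described, where the window size, bias factor, and the synthetic star field are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def local_mean_threshold(img, window=15, bias=3.0):
    """Local adaptive thresholding using an integral image (summed-area table):
    a pixel is kept as a star candidate if it exceeds the local mean by `bias`
    times a simple local mean absolute deviation."""
    img = img.astype(np.float64)
    h, w = img.shape
    r = window // 2
    padded = np.pad(img, r + 1, mode="edge")
    sat = padded.cumsum(axis=0).cumsum(axis=1)               # summed-area table

    def box_sum(table):
        # Sum of each (window x window) neighborhood via four SAT lookups
        return (table[window:window + h, window:window + w]
                - table[:h, window:window + w]
                - table[window:window + h, :w]
                + table[:h, :w])

    n = float(window * window)
    local_mean = box_sum(sat) / n
    dev = np.abs(padded - np.pad(local_mean, r + 1, mode="edge"))
    local_mad = box_sum(dev.cumsum(axis=0).cumsum(axis=1)) / n
    return img > local_mean + bias * local_mad

# Synthetic noisy star-field frame with a few bright centroids
rng = np.random.default_rng(4)
frame = rng.normal(100.0, 5.0, size=(128, 128))
for (y, x) in [(30, 40), (70, 90), (100, 20)]:
    frame[y - 1:y + 2, x - 1:x + 2] += 80.0
mask = local_mean_threshold(frame)
print("candidate star pixels:", int(mask.sum()))
```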
      • 06.0810 Overhead Detection, Identification, and Tracking of Multiple Surface-based Exploration Vehicles Wolfgang Fink (University of Arizona), Qasim Mahmood (University of Arizona) Presentation: Wolfgang Fink - Wednesday, March 6th, 11:50 AM - Dunraven
        As NASA and other space agencies venture out to explore planetary bodies of high interest (Mars, Titan, Europa, Enceladus, etc.), especially from an astrobiological point of view, i.e., the quest for extant/extinct life beyond Earth, planetary field geologists will have to be replaced and emulated by robotic spacecraft, at least for the foreseeable future. As such, these robotic explorers will have to be equipped with the observation, analysis, and reasoning capabilities of a field geologist. Moreover, to mimic the geologic approach of local to regional to global reconnaissance in an integrated, mutually informing fashion, these robotic explorers will likely have to operate as part of multi-tiered robotic mission architectures. Several precursors to such mission architectures have been proposed, such as the introduction of an overhead perspective either through a balloon, blimp, airship, or helicopter/rotorcraft. Using an overhead perspective provides many advantages for exploration and reconnaissance, as well as for guidance, navigation, and control (GNC). A real-world instantiation of an overhead perspective is the use of the HiRISE camera aboard Mars Reconnaissance Orbiter for GNC support of the Mars Exploration Rovers. In this context, this paper focuses in particular on the challenge of detection, identification, and tracking of multiple deployed ground-agents, such as rovers on Mars or lake landers on Titan. The devised framework comprises the use of distinct templates that are highly invariant to rotation, transformation, and scaling, and that are matched to similar markings on top of the respective deployed ground-agents through rotation, transformation, and scaling operations. This allows the spatial detection and identification of the respective ground-agents. The centroids of the detected templates are subsequently tracked simultaneously through the repeated use of this template-matching procedure. This detection, identification, and tracking framework enables the GNC of multiple, and thus expendable, ground-agents from one or more overhead perspectives, e.g., as part of multi-tiered exploration mission architectures to access high(er)-risk, but high(er) science payoff regions.
    • 06.09 Fusion Integration of Sensor Harvesting Peter Zulch (Air Force Research Laboratory) & Erik Blasch ()
      • 06.0901 Taking Advantage of Group Behavior When Tracking Multiple Threats in Cluttered Surveillance Data Peter Willett (University of Connecticut), Andrew Finelli (University of Connecticut), Yaakov Barshalom (University of Connecticut) Presentation: Peter Willett - Wednesday, March 6th, 04:30 PM - Cheyenne
        Nowadays, surveillance data is continuously being collected on groups of people who behave suspiciously. This data can come from numerous sources, and we assume that it is organized into a single stream for processing. This data, while hopefully containing the evidence of malicious plans forming, is mostly innocuous transactions. We assume that, in order to be detectable, a threat needs to evolve over time according to a plan laid out by a subject matter expert. Additionally, we assume that the culminating event of the threat (i.e. the attack) will occur at the end of the process. A threat that evolves too quickly is not feasible to detect in time for intervention, so such threats will not be considered. The process by which we can observe threats lends itself nicely to the use of a Hidden Markov Model (HMM). An HMM is a Markov process whose states are not directly observable, but observations are related to the state in some way (generally probabilistically). For our applications, the threat process will be described as a Markov process and the observations are filtered into a single stream (along with clutter) at certain points during the threat. As stated above, we assume that there are mostly clutter observations and that, when a target-emanated observation exists, it is randomly inserted into the same data stream. Observations are a combination of entities (people, places, and things) and transactions (mostly verbs). One example could be "John flies to Detroit". In this, "John" and "Detroit" are the entities and "flies" is the transaction. It has previously been shown that a single threat can be detected by assigning a probability of involvement to each entity observed such that observations with suspicious entities are taken more seriously by the filter. Furthermore, it has been shown that multiple threats can be tracked using a bank of Bernoulli filters (BF) from a single stream assuming entities act independently. In this work, we will show that grouping entities into "cliques" and assuming that entities within the same group are more likely to act together (a more realistic assumption than independent actors) can also successfully detect threats. Observed entities are assumed to be grouped (a priori), similar to a clustering algorithm, into small but significant collections called cliques. Along with the grouping, it is assumed that we know (again, a priori) the conditional probability of involvement for each entity given the clique involvement. Therefore, we can track the involvement of groups probabilistically using the bank of BFs. In this work, we will assume that each entity has membership in a single clique and that cliques act independently of one another. This is done so that an entity in a suspicious observation can be directly related to its clique. The rationale behind this modification is that suspicious actions of an entity should cause suspicion of related entities (i.e., those within its clique).
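        The HMM machinery referred to above can be illustrated with a small forward-algorithm sketch that tracks the posterior probability of the threat stages from a stream of innocuous and suspicious observations. All matrices and the three-stage model are illustrative assumptions, and the clique-conditioned Bernoulli filter bank is not reproduced here.

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Forward algorithm: posterior over hidden threat stages after each observation.
    pi: initial state probabilities, A: stage transition matrix,
    B[state, symbol]: observation likelihoods, obs: observation symbol sequence."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    post = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
        post.append(alpha)
    return np.array(post)

# Three illustrative threat stages (benign -> planning -> attack imminent), two observation classes
pi = np.array([0.95, 0.05, 0.0])
A = np.array([[0.97, 0.03, 0.00],
              [0.00, 0.95, 0.05],
              [0.00, 0.00, 1.00]])
B = np.array([[0.90, 0.10],     # benign: mostly innocuous transactions
              [0.60, 0.40],     # planning: more suspicious transactions
              [0.30, 0.70]])    # imminent: mostly suspicious transactions
obs = [0, 0, 1, 0, 1, 1, 1]     # 0 = innocuous, 1 = suspicious
posterior = hmm_forward(pi, A, B, obs)
print("P(attack imminent) over time:", np.round(posterior[:, 2], 3))
```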
      • 06.0902 Joint-Sparse Decentralized Heterogeneous Data Fusion for Target Estimation Ruixin Niu (Virginia Commonwealth University), Peter Zulch (Air Force Research Laboratory), Marcello Di Stasio (AFRL), Genshe Chen (Intelligent Fusion Technology, Inc), Dan Shen (Intelligent Fusion Technology, Inc), Jingyang Lu (International fusion technology), Zhonghai Wang (IFT) Presentation: Ruixin Niu - Wednesday, March 6th, 04:55 PM - Cheyenne
        Most traditional surveillance systems use decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal with information loss. In our recent work, we developed an approach, joint-sparse data-level fusion (JSDLF), to integrate heterogeneous sensor data for target discovery. In the proposed approach, the target state space is discretized and the heterogeneous data fusion problem is formulated as a joint sparse signal reconstruction problem. The problem was solved by finding the common support of the heterogeneous joint sparse signals using simultaneous orthogonal matching pursuit (SOMP). The JSDLF approach was applied to fuse signals from multiple distributed passive radio frequency (RF) sensors and from a video sensor. Numerical results showed the excellent performance of the JSDLF approach for situational awareness. In this paper, we continue our work on joint sparsity based data fusion. In our previous work, the joint sparse signal recovery approach was implemented in a centralized manner. Namely, all the raw sensor data are transmitted to a fusion center, where they are fused to detect and estimate the targets. The drawback of the centralized network is its high communication cost and its lack of robustness, since the global information is stored and processed at a single point, the fusion center. Communication cost is a crucial factor, due to limited communication bandwidth and limited battery power at sensors/platforms. In this paper, several decentralized JSDLF approaches have been developed that provide exactly the same estimation result at each sensor node as the centralized algorithm does. Further, two distributed database query algorithms, the Threshold Algorithm (TA) and the Three-Phase Uniform Threshold (TPUT) algorithm, have been combined with the SOMP algorithm to reduce communication cost. Numerical examples are provided to demonstrate that the proposed decentralized JSDLF approaches obtain excellent performance with accurate target position and velocity estimates to support situation awareness, while at the same time achieving dramatic communication savings.
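        A compact sketch of the SOMP step at the core of JSDLF, on a made-up discretized state grid with two jointly sparse "sensor" measurement sets; the decentralized TA/TPUT variants are not shown, and the dictionary and sizes are illustrative assumptions.

```python
import numpy as np

def somp(Phi, Y, sparsity):
    """Simultaneous Orthogonal Matching Pursuit: recover the common support of
    jointly sparse signals from measurements Y = Phi @ X (columns share support)."""
    residual = Y.copy()
    support = []
    for _ in range(sparsity):
        # Pick the dictionary atom most correlated with all residual columns jointly
        scores = np.linalg.norm(Phi.T @ residual, axis=1)
        for idx in support:
            scores[idx] = -np.inf
        support.append(int(np.argmax(scores)))
        # Least-squares fit on the current support, then update the residual
        X_s, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        residual = Y - Phi[:, support] @ X_s
    return sorted(support), X_s

# Illustrative grid of 200 candidate target-state cells, 3 true targets,
# and two heterogeneous "sensors" sharing the same sparse support
rng = np.random.default_rng(5)
Phi = rng.normal(size=(40, 200))
Phi /= np.linalg.norm(Phi, axis=0)
true_support = [17, 83, 150]
X = np.zeros((200, 2))
X[true_support, :] = rng.normal(2.0, 0.3, size=(3, 2))
Y = Phi @ X + 0.01 * rng.normal(size=(40, 2))
est_support, _ = somp(Phi, Y, sparsity=3)
print("true:", true_support, " estimated:", est_support)
```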
      • 06.0903 Joint Manifold Learning Based Distributed Sensor Fusion of Image and Radio-Frequency Data Dan Shen (Intelligent Fusion Technology, Inc), Jingyang Lu (International fusion technology), Peter Zulch (Air Force Research Laboratory), Marcello Di Stasio (AFRL), Genshe Chen (Intelligent Fusion Technology, Inc), Zhonghai Wang (IFT), Ruixin Niu (Virginia Commonwealth University) Presentation: Dan Shen - Wednesday, March 6th, 05:20 PM - Cheyenne
        In many site-monitoring scenarios using multi-sensor modalities, the data streams not only have a high dimensionality, but also belong to different phenomena. For example, a moving vehicle may have an emitter that transmits radio-frequency (RF) signals, an exhaust system that emits acoustic signals, and a visual appearance that can be observed; these may be collected by passive radars, acoustic sensors, and video cameras, respectively. These cases demonstrate that a target (the moving object) observed by three different modalities (data streams collected by acoustic sensors, passive radars, and cameras) could benefit from sensor fusion to increase tracking accuracy. This paper presents a joint manifold learning based distributed sensor fusion approach for image and radio frequency (RF) data. A typical scenario includes several objects (with RF emitters), which are observed by a network of platforms with Medium Wavelength Infrared (MWIR) cameras and/or RF Doppler sensors. Based on a joint manifold learning (JML) sensor fusion approach, we propose to design and implement a distributed heterogeneous data fusion approach for improved Detection, Classification, and Identification (DCI) of targets and entities in dynamic environments with constrained communications. We design and implement distributed JML using diffusion and consensus approaches. In our distributed mechanism, we first partition the JML matrices into submatrices for each platform. For every platform, these submatrices represent the mapping from the sensor data to its contribution to the final fused result. Each node processes the local measurements using submatrices and shares the results with a limited number of neighbors. A prototype is constructed that includes drones, onboard processing capabilities (Intel NUC), cameras, and radars to demonstrate the proposed distributed data fusion approach. To explore the system robustness, supportive results are achieved with simulated radio-frequency interference (RFI) and imperfect communication links.
      • 06.0904 Multi-scale Geometric Summaries for Similarity-based Sensor Fusion Christopher Tralie (Duke University), Paul Bendich (Geometric Data Analytics, Inc.), John Harer (Duke University) Presentation: Christopher Tralie - Wednesday, March 6th, 09:00 PM - Cheyenne
        In this work, we address fusion of heterogeneous sensor data using wavelet-based summaries of fused self-similarity information from each sensor. The technique we develop is quite general, does not require domain-specific knowledge or physical models, and requires no training. Nonetheless, it can perform surprisingly well at the general task of differentiating classes of time-ordered behavior sequences which are sensed by more than one modality. As a demonstration of our capabilities in the audio-to-video context, we focus on the differentiation of speech sequences. Data from two or more modalities are first represented using self-similarity matrices (SSMs) corresponding to time-ordered point clouds in feature spaces of each of these data sources; we note that these feature spaces can be of entirely different scale and dimensionality. A fused similarity template is then derived from the modality-specific SSMs using a technique called similarity network fusion (SNF). We investigate pipelines using SNF as both an upstream (feature-level) and a downstream (ranking-level) fusion technique. Multiscale geometric features of this template are then extracted using a recently developed technique called the scattering transform, and these features are then used to differentiate speech sequences. This method outperforms unsupervised techniques which operate directly on the raw data, and it also outperforms stovepiped methods which operate on SSMs separately derived from the distinct modalities. The benefits of this method become even more apparent as the simulated peak signal-to-noise ratio decreases.
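        The first stage of such a pipeline can be sketched as follows: build an SSM for each modality from its time-ordered feature point cloud, with a per-modality kernel scale so that feature spaces of different dimensionality and scale become comparable. A naive element-wise average stands in here for SNF, which is considerably more involved; all data are synthetic.

```python
import numpy as np

def self_similarity_matrix(features):
    """Self-similarity matrix (SSM) for a time-ordered point cloud: pairwise
    Euclidean distances converted to similarities with a Gaussian kernel whose
    scale is the median distance, so different feature spaces become comparable."""
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    sigma = np.median(d[d > 0])
    return np.exp(-(d / sigma) ** 2)

# Two synthetic modalities observing the same time-ordered behavior:
# different dimensionality and scale, but a shared underlying trajectory
rng = np.random.default_rng(6)
t = np.linspace(0, 4 * np.pi, 120)
shared = np.column_stack([np.sin(t), np.cos(2 * t)])
audio_like = 100.0 * (shared @ rng.normal(size=(2, 13))) + rng.normal(size=(120, 13))
video_like = 0.01 * (shared @ rng.normal(size=(2, 40))) + 0.001 * rng.normal(size=(120, 40))

ssm_a = self_similarity_matrix(audio_like)
ssm_v = self_similarity_matrix(video_like)
fused = 0.5 * (ssm_a + ssm_v)   # naive stand-in for similarity network fusion (SNF)
print("SSM shapes:", ssm_a.shape, ssm_v.shape, " fused mean:", round(float(fused.mean()), 3))
```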
      • 06.0905 Multimodal Fusion Using Deep Directional Unit Networks for Event Behavior Characterization Denis Garagic (BAE Systems), Fang Liu (BAE Systems, Inc.), Peter Zulch (Air Force Research Laboratory), Brad Rhodes () Presentation: Denis Garagic - Wednesday, March 6th, 09:25 PM - Cheyenne
        The increasing availability of many sensing modalities (imagery, radar, radio-frequency (RF) signals, and acoustic and seismic data) reporting on the same phenomena introduces new data exploitation opportunities. This also creates a need for fusing multiple modalities in order to take advantage of inter-modal dependencies and phenomenology, since it is rare that a single modality provides complete knowledge of the phenomena of interest. In turn, this raises challenges beyond those related to exploiting each modality separately. Traditional approaches, centered on a cascade of signal processing tasks to detect elements of interest (EOIs) within locations/regions of interest (ROIs), followed by temporal tracking and supervised classification of these EOIs over a sequence of observations, are not able to optimally exploit inter-modal characteristics (e.g., spatio-temporal features that co-vary across modalities) of EOI signatures. This paper presents an end-to-end spatiotemporal processing pipeline that uses a novel application of dynamic deep generative neural networks for fusing ‘raw’ and/or feature-level multi-modal and multi-sensor data. This pipeline exploits the learned joint features to perform detection, tracking, and classification of multiple EOI event signatures. Our deep generative learning framework is composed of Conditional Multimodal Deep Directional-unit Networks that extend deep generative network models to enable a general equivariance learning framework with vector-valued visible and hidden units called directional units (DUs). These DUs explicitly represent the sensing state (sensing / not sensing) for each modality and environmental context measurements. Direction within a DU indicates whether a feature (within the feature space) is present, and the magnitude measures how strongly that feature is present. In this manner, DUs concisely represent a space of features. Furthermore, we introduce a dynamic temporal component into the encoding of the visible and hidden layers. This component facilitates spatiotemporal multimodal learning tasks including multimodal fusion, cross-modality learning, and shared representation learning, as well as detection, tracking, and classification of multiple known and unknown EOI classes in an unsupervised and/or semi-supervised way. This approach overcomes the inadequacy of pre-defined features as a means for creating efficient, discriminating, low-dimensional representations from high-dimensional multi-modality sensor data collected under difficult, dynamic sensing conditions. This paper presents results that demonstrate that our approach enables accurate, real-time target detection, tracking, and recognition of known and unknown moving or stationary targets or events and their activities evolving over space and time.
      • 06.0906 ESCAPE Data Set for Multi-INT Fusion Erik Blasch (), Peter Zulch (Air Force Research Laboratory) Presentation: Erik Blasch - Wednesday, March 6th, 09:50 PM - Cheyenne
        The ESCAPE data collect brings together electro-optical (EO), infrared (IR), distributed radio-frequency (RF), acoustic, and seismic data toward developing advanced estimation methods for aerospace systems. The multi-modal data collection was developed to advance joint techniques in information fusion, machine learning, and signal processing. The paper details scenarios, data collects, and general research ideas associated with ground target tracking based on behavior characteristics. Research variations range from exploring the data over disparate moving emitting vehicles, various patterns, and differing noise profiles to small unmanned aerial system (SUAS) flight profiles. Future challenge problems and research areas are detailed to engage the community in advanced heterogeneous analytics, design, and data fusion. For example, mixed data collects require joint analytics that leverage common knowledge of the scene, geometric theory, and target movements for comparative analysis.
  • 7 Avionics and Electronics for Space Applications Harald Schone (Jet Propulsion Laboratory) & John Samson (Morehead State University / Aerospace Technologies Plus) & John Dickinson (Sandia National Laboratories)
    • 07.01 High Performance Computing, Data Processing, and Interconnects for Space Applications Joseph Marshall (BAE Systems) & Jamal Haque (Honeywell)
      • 07.0101 High Performance Computing for Precision Landing and Hazard Avoidance and Co-design Approach David Rutishauser (NASA - Johnson Space Center) Presentation: David Rutishauser - Sunday, March 3th, 04:30 PM - Madison
        The Safe and Precise Landing Integrated Capabilities Evolution (SPLICE) project continues NASA’s technology development for Precision Landing and Hazard Avoidance (PL&HA). The High-Performance Spaceflight Computing (HPSC) project manages a contract to build a multi-core processor that is intended to be NASA’s computing platform for future human and robotic spaceflight missions. This paper describes the flight computer for the PL&HA payload that will be used in SPLICE flight testing onboard suborbital rockets. This computer is being designed as a surrogate architecture for the HPSC chip, using a Xilinx Multi-Processor System on a Chip (MPSoC). Field testing with the surrogate architecture facilitates cross-agency experience with the HPSC and positions projects for future technology infusion opportunities. The MPSoC is hosted on a custom baseboard and interfaced to a second custom board that provides the sensor and vehicle interfaces. Early design trades for the SPLICE surrogate implementation are described. Preliminary performance testing on the surrogate platform using an optical navigation algorithm developed for the Orion vehicle is described, and shows a speedup proportional to the number of processing cores after minimal modifications to the original code. In general, the high-performance computing/high-performance embedded computing (HPC/HPEC) required to address computational challenges in a wide range of NASA missions has consistently faced implementation challenges due to the diversity of the disciplines required to develop a solution. Typically, algorithm designers with expertise in the physics of the problem and in numerical approaches to solving the relations that model the physics do not have expertise in processing architectures. There are strong dependencies between the overall performance of the system and the choices made in the modeling of the physics, the numerical approaches to solutions for the physical relations, and how these operations are mapped to computational resources in an architecture. To address this concern, a Model-Based Systems Engineering (MBSE) based concept for conducting multi-disciplinary co-design of the Guidance, Navigation, and Control (GN&C) algorithms, processing hardware configuration, and system software is introduced.
      • 07.0102 Emulation-based Performance Studies on the HPSC Space Processor Benjamin Schwaller (University of Pittsburgh) Presentation: Benjamin Schwaller - Sunday, March 3th, 04:55 PM - Madison
        With increasing computational demands in the defense, science, and commercial sectors, future space missions will require new high-performance computer architectures. Extensive research, benchmarking, and analysis of new and emerging architectures are required to identify and evaluate mappings of space apps onto them. In this research, we develop and employ hardware testbeds to emulate and predict performance of the High-Performance Spaceflight Computing (HPSC) processor, a device being developed by Boeing, sponsored by AFRL and NASA, for future space missions. Boeing was chosen because of its proposed “chiplet” design. Each chiplet will feature two quad-core ARM Cortex-A53 CPUs connected by an Advanced Microcontroller Bus Architecture (AMBA). These chiplets can be connected by different serial interfaces, which provides AFRL and NASA with a flexible platform to serve a variety of potential mission needs. Two hardware testbeds are used to emulate and conduct studies on HPSC. The first is an octa-core ARM Cortex-A53 device containing two quad-core processors connected by AMBA. This platform emulates a single HPSC chiplet. The second testbed consists of two quad-core ARM Cortex-A53 processors connected by Gigabit Ethernet (GbE). This platform provides insight into how two chiplets might perform an app together. Using kernel and Ethernet performance results from both platforms, we create a model to project the performance of the HPSC processor across a suite of space-related benchmarks. The benchmarking suite includes several linear-algebra kernels, space-navigation kernels, 1D and 2D Fast Fourier Transforms, and a synthetic-aperture radar (SAR) application. The suite is parallelized across multiple ARM cores using OpenMP and is optimized for the ARM platforms. We project that SAR, the most compute-intensive application in the suite, will scale well on a multi-chiplet platform. When using two connected chiplets, SAR is forecasted to have a speedup of 2.94 versus a single quad-core ARM processor. This gain comes with a resource utilization of 73.5%. Other kernels also project well on the HPSC platform. The model forecasts Kepler’s equation to have a speedup of 3.34 from a single quad-core to two connected chiplets. Smaller kernels, such as matrix addition with a predicted speedup of 0.61, suffer from parallelization overhead across multiple chiplets and sometimes across AMBA on a single chiplet. This slowdown is due to the communication overhead on Ethernet and AMBA contributing a major part of the total runtime. Additionally, this research uncovered a performance optimization for the 2D-FFT kernel within the popular FFTW library. This optimization led to an average speedup of 1.44 for larger FFT sizes. Overall, the work presented in this paper forecasts and evaluates the benchmarking performance of the HPSC processor for a variety of space-related kernels and reveals techniques for optimizing apps for this system.
      • 07.0103 Volatile Register Handling for FPGA Verification Based on SVAs Incorporated into UVM Environments Kai Borchers (German Aerospace Center - DLR), Sergio Montenegro (University Würzburg), Frank Dannemann (German Aerospace Center - DLR) Presentation: Kai Borchers - Sunday, March 3th, 05:20 PM - Madison
        FPGAs are frequently used within the space domain, and different methodologies are applied to verify the correct behavior of designs. By now, the Universal Verification Methodology (UVM) is the de facto standard for functional verification of Register Transfer Level (RTL) designs. A typical UVM environment consists of agents that drive the interfaces of the Device Under Test (DUT) and applies automated comparisons inside scoreboards to verify expected behavior. However, these automated self-checks require a model that can predict the behavior of the DUT. These basic self-checking mechanisms no longer work if models are incomplete and unable to predict the entire behavior. In this case it is possible to exclude particular checks inside UVM scoreboards. Instead, these excluded checks are covered in different and mostly nontrivial ways, for which white-box knowledge of the system is often required. Volatile register comparisons are an example of this kind of problem. This paper demonstrates how volatile registers can be integrated into scoreboard comparisons in parallel with predictable registers. This is done by saving expected volatile register values directly at the time they are accessed inside the DUT, rather than determining their expected values from DUT interface activities. For this, an interface is provided to the scoreboard that provides the white-box access and encapsulates structural information about the DUT. Additionally, the approach utilizes SystemVerilog Assertions (SVAs). SVAs are generally used to describe and check system properties during functional simulation or formal verification. However, they can also be used to interact with the UVM environment directly. This allows the scoreboard to take advantage of their strong signal-tracing capabilities, which are used to capture the values of volatile registers at the right time. The introduced approach is applied to an FPGA design that provides interface capabilities for a satellite on-board computer, where access is provided over the ECSS-standardized RMAP protocol.
      • 07.0104 11b/14b Encoding - a Fault Tolerant, DC-Balanced Line Code for AC-Coupled Channel Link Transceivers Jeffrey Boye (JHUAPL), Adam Mizes (Johns Hopkins University/Applied Physics Laboratory), Laurel Funk (JHU) Presentation: Jeffrey Boye - Sunday, March 3th, 09:00 PM - Madison
        This paper presents a fault tolerant, DC-balanced line encoding scheme for use with AC-coupled Channel Link transceivers which feature 7-bit frame alignment as opposed to a more typical 8-bit alignment. Channel Link is a popular high-speed interface for aerospace applications (such as image streaming via Camera Link), popularized by Cobham-Aeroflex's radiation tolerant UT54LVDS217/8 3-lane serializer/deserializer buffer pair which is capable of 1.575 Gbps of throughput. Use of Channel Link transceivers in AC-coupled link topologies imposes stringent requirements on the run length (maximum sequence of repetitive '1's or '0's) and running disparity (difference between total number of '1's and '0's) of the transmitted line encoded data. Existing line encoding schemes such as 8b/10b meet these requirements, however, none of them support a 7-bit aligned data frame that is imposed by Channel Link. We present a line encoding scheme that is an extension of 8b/10b encoding theory, including an adaptation of optional control (comma) codes, that supports the mapping of 11-bits of unencoded data into two 7-bit aligned encoded data frames in support of AC-coupled Channel Link interconnects. Functional verification of the encoding scheme is demonstrated through rigorous simulation and hardware prototyping featuring a UT54LVDS217 transmitter AC-coupled into an RTG4's built-in SERDES receiver.
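        The two line-code properties that matter for an AC-coupled link, run length and running disparity, can be checked with a few lines of code. The bit patterns below are illustrative stand-ins, not actual 11b/14b code words.

```python
def line_code_stats(bits):
    """Compute the maximum run length (longest string of identical bits) and the
    running disparity (count of ones minus zeros) over an encoded bit sequence."""
    max_run, run = 1, 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        max_run = max(max_run, run)
    disparity = sum(1 if b else -1 for b in bits)
    return max_run, disparity

# Illustrative 14-bit encoded frames (two 7-bit halves each), not real 11b/14b code words
frames = [[1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0],
          [0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1]]
stream = [b for frame in frames for b in frame]
print(line_code_stats(stream))   # (max run length, running disparity)
```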
      • 07.0105 Comparative Benchmarking Analysis of Next-Generation Space Processors Evan Gretok (NSF SHREC Center - University of Pittsburgh), Alan George (University of Pittsburgh) Presentation: Evan Gretok - Sunday, March 3th, 09:25 PM - Madison
        Researchers, corporations, and government entities are seeking to deploy increasingly compute-intensive workloads on satellites and other spacecraft. This growing need is driving the development of two new radiation-hardened, multi-core space processors, the BAE Systems RAD5545(TM) processor and the Boeing High-Performance Spaceflight Computing (HPSC) processor. The transition from single- to multi-core processor architectures hardened for space opens the door for significant increases in performance through parallelism. The ability to parallelize space apps, coupled with architectural improvements and fast clock speeds, equips new space platforms with the capability to perform much more compute-intensive tasks than previous radiation-hardened processors. The deployment of image-processing, compression, and machine-learning algorithms on these systems can allow for the rapid identification of actionable data as well as help overcome the limitations of downlink speed, congestion, and security. In comparing these two space processors currently in development, facsimiles of similar commercial architectures were selected for evaluation. The Freescale P5040 shares the PowerPC e5500 core of the RAD5545(TM). Similarly, the Hardkernel ODROID-C2 shares the ARM Cortex-A53 architecture of the HPSC. Several image-processing kernels and machine-learning apps were parallelized with OpenMP and benchmarked on these processing platforms to evaluate performance and power consumption. These kernels and apps included a color search, Sobel filter, Mandelbrot set generator, hyperspectral imaging target classifier, image thumbnailer, genetic algorithm, and particle swarm. Toward a comprehensive comparison of both platforms, measurements of execution time, speedup, parallel efficiency, power consumption, and performance per Watt were gathered for each app. Results from studies on these facsimiles were scaled down to forecasted frequencies of the radiation-hardened devices in development. This method for comparative benchmarking analysis allows the strengths and weaknesses of each option to be studied and predicted. In these studies, the RAD5545(TM) achieves the highest and most consistent parallel efficiency, up to 99%. The HPSC achieves lower execution times, averaging about half that of its counterpart, with much lower power consumption. The apps employed reached speedups of up to 3.9 on four cores. The frequency-scaling methods used are validated by comparing the set of scaled measures with data points from an underclocked facsimile. This comparison yielded an average of 96% accuracy between estimated and actual results. These performance outcomes help to establish the capabilities of both the RAD5545(TM) and the HPSC for on-board parallel processing of computationally demanding apps for future space-based experiments.
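        The comparative metrics used in such studies reduce to a few standard formulas; the sketch below computes speedup, parallel efficiency, and performance per Watt from hypothetical timing and power numbers, not measurements from the paper.

```python
def benchmark_metrics(t_serial, t_parallel, cores, power_watts):
    """Standard comparative metrics for parallel processors: speedup, parallel
    efficiency, and throughput per Watt (times in seconds, power in Watts)."""
    speedup = t_serial / t_parallel
    efficiency = speedup / cores
    perf_per_watt = (1.0 / t_parallel) / power_watts   # runs per second per Watt
    return speedup, efficiency, perf_per_watt

# Illustrative numbers only (hypothetical platforms, not results from the paper)
for name, ts, tp, n, p in [("platform A", 8.0, 2.05, 4, 12.0),
                           ("platform B", 8.0, 4.10, 4, 5.0)]:
    s, e, ppw = benchmark_metrics(ts, tp, n, p)
    print(f"{name}: speedup {s:.2f}, efficiency {e:.0%}, perf/W {ppw:.3f}")
```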
      • 07.0108 The Development of Standard Controller for Chinese Space Science Experiments Wenbo Dong (Chinese Academy of Sciences) Presentation: Wenbo Dong - Sunday, March 3th, 09:50 PM - Madison
        The orbital environment provides a unique opportunity for microgravity science experiments. Scientific satellites, on-orbit science laboratories, and space stations carry a number of experiment payloads in disciplines such as life science, materials science, fluid mechanics, combustion, and fundamental physics. Most of those scientific payloads need automatic operation and control. The automatic functions include the detection of various physical quantities, the driving of various movement mechanisms, automatic information transmission, and power management. The payload control system is generally complex and diverse; therefore, the development cycle can be very long, often postponing the launch of the scientific experiment and delaying important scientific discoveries. Drawing on decades of experience in payload development, we focus on a standard computer dedicated to payload control. Here a high-reliability computer conforming to the OpenVPX standard is developed. The computer includes a flexible, high-strength structure with an arbitrary number of slots into which circuit boards can be plugged, such as a central control board, power module board, A/D conversion board, interface board, and so on. As examples, we introduce several projects using our standard computers, including the controller of the science experiment data handling unit (SEHU) on Tianzhou-1, the controller of the evaporation condensation experiment on Tianzhou-1, the controller of the microgravity vibration active isolation system (MAIS) on Tianzhou-1, and the controllers of a boiling bubble experiment and a material transportation experiment on Practical Satellite 10, as well as some ongoing projects for the Chinese Space Laboratory. The control requirements of these experiments are listed, and the framework design of the controller and its electronic system is introduced. All of these controllers were developed rapidly and have been verified in the orbital environment or by ground test. This is good progress toward helping scientists take their ideas from concept to practice. In the future, we expect standard controllers to support more kinds of microgravity experiments and benefit space utilization.
      • 07.0110 High Performance Computing Applications in Space with DM Technology Aaron Zucherman (Morehead State University), Benjamin Malphrus (Morehead State University), John Samson (Morehead State University / Aerospace Technologies Plus) Presentation: Aaron Zucherman - Monday, March 4th, 04:30 PM - Lake/Canyon
        Dependable Multiprocessor (DM) technology was developed by Honeywell and Morehead State University (MSU) as a way of increasing the computing power available to space applications. A DM system is a scalable cluster of high performance commercial-off-the-shelf (COTS) processors with a high-speed interconnect operating under the control of a reliable system controller with application- and hardware-independent fault tolerant middleware. At its core, DM is an architecture and software framework that enables the latest COTS processing systems to operate in inhospitable environments by providing software-based radiation fault tolerance. A DM system can execute multiple missions sequentially or concurrently based on resource availability and offers easy-to-use, user-configurable fault tolerance options. The system is also capable of autonomous and adaptive fault tolerance in response to its environment, application criticality, and system mode, so as to maintain the required availability and computational correctness while optimizing resource utilization and system efficiency. A 5-year probability of incorrect computation of less than 0.005% in a LEO environment is achievable. A small, light-weight, low power, low cost DM technology implementation was demonstrated in the space environment on the recent DM7 ISS mission. The DM7 mission demonstrated high throughput, throughput density, availability, and computational correctness performance in a LEO radiation environment. DM7 successfully ran on-orbit checkouts and three experiment missions, including capturing and compressing camera images, achieving TRL-7 validation of DM. DM has the potential to become a pervasive, game-changing technology applicable to almost any application where radiation-hardened or fault-tolerant computing is required. Potential applications include: intelligent data mining and data compression, software defined radios, synthetic aperture radar, autonomous navigation, autonomous operation & control, advanced space-based networked communication, sophisticated earth observation, astrophysics, and ground-commanded programmable image compression, amongst others. Numerous applications have been identified and demonstrated on a variety of DM platforms. The demonstrated applications show the breadth, flexibility, scalability, ease of use, low overhead, and processing potential of DM technology. This paper will provide a brief overview of DM technology and a summary of the DM7/ISS flight experiment, but will focus on the wide range of applications that have already been demonstrated on DM platforms, including DM7.
      • 07.0112 Performance Analysis of Standalone and In-FPGA LEON3 Processors for Use in Deep Space Missions Dmitriy Bekker (Johns Hopkins Applied Physics Laboratory), Minh Quan Tran (JHUAPL) Presentation: Dmitriy Bekker - Monday, March 4th, 04:55 PM - Lake/Canyon
        When considering a new processor for a mission, one of the first questions that comes up is: “How does this processor compare with what we have used in the past?” Manufacturers use benchmarks to compare normalized performance from one product to another, but often this data is incomplete and is missing key parameters such as compiler version, compile options used, memory type, and many others. In this work, we present carefully documented benchmarking results for single-core and multi-core LEON3 processors. We evaluate both standalone (UT699, UT699E, UT700, GR712RC-multicore) and soft-core (single-core LEON3, quad-core LEON3) processors. Our single-core performance metrics are based on popular benchmarks (Dhrystone, Whetstone, CoreMark), on our own memory tests (utilizing standard memcpy and SPARC-optimized memcpy), and on a representative APL-developed deep-space application (Terrain Relative Navigation). We investigate the impact of different memory types (SRAM, SDRAM) as well as memory-targeted optimization options. In evaluating multi-core processors, we present speedup and overhead metrics from using OpenMP shared memory multiprocessing in RTEMS-5.0. We deploy community-recommended benchmarks suited for small embedded systems (EPCC, NPB3.3.1, and SPEC OMP2012) as well as an implementation of a simple 2D-FFT engine. We show that using OpenMP is a low-effort way to accelerate computationally intensive and parallelizable code sections when migrating to emerging RadHard multi-core processors. With increasing demand for high-performance FPGAs on space missions, the option to drop a soft-core processor into the system design should be on the table, provided there are available FPGA resources. We show that the soft-core LEON3 implementation has comparable frequency-normalized performance and is a good alternative (or supplement) to a dedicated standalone hard-core processor. By adding one or more soft-core processors into the FPGA design, the data systems architect can potentially eliminate additional hardware, save on power, and improve overall performance via optimal HW-to-SW data transfers. On-board processing, communications, and data management tasks (e.g. DMA) can benefit from a tightly coupled processor inside an FPGA. We present various architectural design considerations that impact the performance and resource utilization of LEON3 processors inside an RTG4 FPGA.
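        The frequency-normalized comparisons and OpenMP speedup/overhead metrics referred to above reduce to a few simple ratios; the sketch below shows that arithmetic on placeholder numbers (the scores, clock rates, and timings are not measured LEON3 results).
```python
# Sketch of the performance metrics used when comparing processors:
# frequency-normalized benchmark scores and OpenMP speedup/efficiency.
# All numbers below are placeholders, not measured LEON3 results.

def normalized_score(score, clock_mhz):
    """Benchmark score per MHz, e.g. DMIPS/MHz or CoreMark/MHz."""
    return score / clock_mhz

def speedup(serial_time_s, parallel_time_s):
    return serial_time_s / parallel_time_s

def parallel_efficiency(serial_time_s, parallel_time_s, cores):
    return speedup(serial_time_s, parallel_time_s) / cores

if __name__ == "__main__":
    # Hypothetical single-core scores at two different clock rates.
    print(normalized_score(score=140.0, clock_mhz=100.0))   # hard-core part
    print(normalized_score(score=70.0,  clock_mhz=50.0))    # soft-core in FPGA
    # Hypothetical 2D-FFT runtimes, serial vs. 4 OpenMP threads.
    print(speedup(2.0, 0.62), parallel_efficiency(2.0, 0.62, cores=4))
```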
      • 07.0113 Improving a Successful Space Electronics High Performance Fabric-Based Standard Joseph Marshall (BAE Systems), Patrick Collier (AFRL), Clifford Kimmery () Presentation: Joseph Marshall - Monday, March 4th, 05:20 PM - Lake/Canyon
        As this current decade dawned, space electronics faced an oncoming change in its architecture in order to provide more onboard processing and data bandwidth connectivity. Bus structures such as 1553 and CompactPCI had reached their capacities. Several private, public and institutional organizations began looking forward toward systems that would be connected by various interconnect fabrics instead of busses. In 2011, the Next Generation Space Interconnect Standards (NGSIS) group was formed to attempt to standardize the new interconnection technology and avoid the more expensive prior method of point solutions. The group began pooling information, examining use cases for future processing and choosing the standards necessary to advance this goal. One of these was SpaceVPX™, a space version of the popular OpenVPX standards. The group started work on SpaceVPX in early 2012 and in 2015, the SpaceVPX System standard ANSI/VITA 78.00 was released as the result. Since then, SpaceVPX has seen increased use and interest in the onboard space electronics arena. An errata list was published in 2016, picking up a small set of obvious mistakes in the greater than 400-page standard. The organization of the original 2015 standard document closely followed the OpenVPX standards while inserting several new sections defining features needed by a single-fault-tolerant space electronics solution. The 2015 document presented difficulties in finding all the material needed to understand and create SpaceVPX standard-compliant products. Because of the document organizational and clarity issues, the VITA 78 working group has been holding weekly telecons since 2016 toward the completion of a more user-friendly revision of the standard. Among its improvements are to update the SpaceVPX standard to correct errata and editorial errors, to reorganize the Space Utility Management sections to better match other OpenVPX standards and to fix inconsistencies within the standard. Additionally, mechanical drawings for 3U modules and for 1.6" pitch modules were added. A more overhead-efficient 3U SpaceUM module has also been defined. SpaceFibre has been added as an alternate data or control plane. Control plane access to the Direct Access Protocol Registers has been added for systems not using the utility plane fabric. RF and Optical options are being added to the module profiles building on changes in OpenVPX. Slot and module profiles are being added and simplified. The revision is expected to begin balloting toward standard approval in the fourth quarter of 2018. This paper reviews the history and capabilities of the released SpaceVPX standard and describes the updated sections, improvements and changes in process. Use cases illustrate potential advantages of the changes and range of potential new usage. Potential follow-on standards to this revision are also described.
      • 07.0114 SpaceFibre Interfaces and Architectures Steve Parkes (University of Dundee), Alberto Gonzalez Villafranca (STAR-Dundee Ltd) Presentation: Steve Parkes - Monday, March 4th, 09:00 PM - Lake/Canyon
        SpaceFibre is the next generation of SpaceWire network technology for spacecraft on-board data-handling. It runs over electrical or fibre-optic cables, operates at very high data rates, and provides in-built quality of service (QoS) and fault detection, isolation and recovery (FDIR) capabilities, providing high reliability and high availability. An instrument interface is straightforward to design with SpaceFibre. The primary data from the instrument can be allocated to one virtual channel, while configuration and control commands and housekeeping data can be allocated to a separate virtual channel. There is then no need for a separate control bus, improving reliability and reducing mass and power consumption. A single-lane interface can be used for moderate data rate instruments and this can be replaced by a multi-lane link for high data rate instruments, the number of lanes being matched to the instrument data rate. Additional lanes can be added, when the instrument data is critical, to provide hot or cold lane redundancy, allowing rapid recovery in the event of a lane failure. Nominal and redundant interfaces can be added to the instrument, with support for autonomous redundancy switching if required. SpaceFibre can be used to provide both the interface to a mass-memory unit and the high-data rate network for interconnecting memory modules. Virtual channels can be used to separate different classes of traffic within the memory system, for example, data to be stored, data to be retrieved and control and housekeeping information. The interface to the downlink telemetry unit can also benefit from SpaceFibre, with separate virtual channels being allocated to different classes of traffic. SpaceFibre can also be used in payload data processing systems, providing a very efficient backplane interconnect. SpaceVPX uses several physical planes (data plane, control plane and utility/management plane) to separate different classes of backplane traffic, to avoid one class interfering with another class. Using SpaceFibre, these different planes can be run over separate virtual channels on a single physical network. This improves reliability and reduces mass and power consumption. It also frees up pins on the backplane. The overall payload processing and instrument control and monitoring network is straightforward to design with SpaceFibre. For many requirements a single SpaceFibre routing switch is all that is required, or two when nominal and redundant networks are required. When a single routing switch does not provide enough interfaces, two or more routing switches can be cascaded to provide the required number of interfaces. This network can also distribute time information, pulse-per-second signals, event signals and network health status information. This paper provides an introduction to SpaceFibre, then describes how SpaceFibre can be used as instrument interfaces, as the interface and memory interconnection network in mass-memory units, as the interface to downlink telemetry units and as the backplane for payload processing units. The paper also describes an overall payload processing architecture based on SpaceFibre and explains how existing SpaceWire equipment can be readily integrated into a SpaceFibre network.
      • 07.0116 UVM Based Verification for HPSBC-FPGA of the Dream Chaser's Fault Tolerant Flight Computer Khurram Kazi (Draper) Presentation: Khurram Kazi - Monday, March 4th, 09:25 PM - Lake/Canyon
        Sierra Nevada Corporation’s flagship Dream Chaser spacecraft will use a Fault Tolerant Flight Computer (FTFC) for its missions to the International Space Station. One of the integral components of the FTFC is a Field Programmable Gate Array (FPGA) that assists with the control of mission critical functions of the flight. The complexity and rigor in verifying the functionality of the FPGA warranted the use of the Universal Verification Methodology (UVM). In this paper we provide details of the main features of the FPGA as well as the verification architecture and strategies. We explore how we are able to achieve 100% functional coverage of the design’s intended functionality and state transitions in order to thoroughly vet the capabilities of the system.
    • 07.02 Peripheral Electronics and Data Handling for Space Applications Mark Post (University of Strathclyde) & Patrick Phelan (Southwest Research Institute)
      • 07.0201 Design and Analysis of RTOS and Interrupt Based Data Handling System for Nanosatellites Akshit Akhoury (Manipal Institute of Technology), Arun Ravi (MIT), Krishna Birla (Manipal Institute of Technology), Shaleen Kalsi (MIT, Manipal), Subhojit Ghorai (Manipal Institute of Technology), Rohit Sarkar (Manipal University) Presentation: Akshit Akhoury - Monday, March 4th, 08:30 AM - Amphitheatre
        In this paper, we describe the design and working of the data handling system of a Nanosatellite which contains three interconnected microcontrollers with each microcontroller present on a different PCB. The three microcontrollers used are namely: STM32F207IG, MSP430F5438A, and STM32L431CC. A brief description of the evolution of the system architecture and the motivation behind the choice of the microcontrollers will be provided. Each of these microcontrollers handles and performs a set of tasks to ensure the smooth and proper functioning of the satellite. An in-depth explanation of the working of these tasks and the distribution of them among the three microcontrollers will be provided. The STM32F207IG microcontroller is present on the primary PCB and is responsible for interfacing with the different sensors, the thermal camera, running the attitude determination and control algorithms, running the power management algorithm for the satellite, and for the processing and transmission of payload data. The MSP430F5438A is present on the secondary PCB and performs the task of controlling all the mechanisms of the satellite which includes the antenna, door and tether deployment, the collection of data regarding the health of the satellite from the other microcontrollers, the transmission of beacon data, and receiving any uplinks from the ground station. The functioning of the reaction wheels present on the satellite is taken care of by STM32L431CC. The scheduling of tasks on the STM32F2 and the MSP430 is brought about through the use of a Real-Time Operating System (RTOS), Micrium, which allows the system architecture to be sensitive to the priorities and the time requirements of each task. An in-depth qualitative analysis of the application of the RTOS is presented along with a rigorous quantitative analysis through the use of Segger SystemView for STM32F2 and the Sampled Graph feature in IAR for MSP430. In contrast to the application of an RTOS on STM32F2 and MSP430, the STM32L4 is run and controlled purely through interrupts from STM32F2 and MSP430. The paper will give a description of the use of a partial OS based and partial interrupt based task switching model and will also list the advantages and limitations of such a model. The paper also describes the various stages that are involved in the onboard processing of images obtained from the thermal camera; these stages include image tiling, image compression, and data encoding algorithms (before transmission). The encoding algorithms help in reducing the loss of data during transmission and also help in error detection and correction upon payload data reception.
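        As an illustration of the tiling stage in the onboard image pipeline described above, the sketch below splits a thermal-camera frame into fixed-size tiles that can then be compressed and encoded independently; the frame and tile dimensions are arbitrary assumptions, not the flight parameters.
```python
# Sketch of the image-tiling stage of an onboard payload pipeline:
# split a frame into fixed-size tiles that can be compressed and
# encoded (e.g. with error-correcting codes) independently.
# Frame size and tile size are arbitrary assumptions.
import numpy as np

def tile_image(frame, tile_h, tile_w):
    """Yield (row, col, tile) for every full tile in the frame."""
    rows, cols = frame.shape
    for r in range(0, rows - tile_h + 1, tile_h):
        for c in range(0, cols - tile_w + 1, tile_w):
            yield r, c, frame[r:r + tile_h, c:c + tile_w]

if __name__ == "__main__":
    frame = np.random.randint(0, 4096, size=(60, 80), dtype=np.uint16)  # 12-bit thermal frame
    tiles = list(tile_image(frame, tile_h=20, tile_w=20))
    print(len(tiles), tiles[0][2].shape)   # 12 tiles of 20x20 pixels
```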
      • 07.0202 Radiation Hardened High Speed Digitizer Robert Merl (Los Alamos National Laboratory) Presentation: Robert Merl - Monday, March 4th, 08:55 AM - Amphitheatre
        Los Alamos National Laboratory has designed and demonstrated a space-flight quality digitizer board with a CompactPCI interface in the popular 6U form factor. The digitizer board is designed to meet the requirements of missions requiring true space-grade radiation tolerance in geosynchronous orbits. It has four 12-bit input channels, four 8-bit input channels, and can digitize input waveforms at speeds up to 2 gigasamples per second. It has a reconfigurable radiation tolerant FPGA for on-orbit waveform processing and a 20 Mbyte memory for waveform storage. This module can capture data from sensors in space and analyze that data on-orbit before sending the resulting information back to Earth. This capability enables high-speed waveforms to be collected in space and analyzed in real-time. Immediate applications include direct conversion software defined radio receivers, radars, transient waveform recorders, and other space-based high fidelity data-capture and analysis applications.
      • 07.0203 Implementation of a Generic Payload Interface Unit for Agnostic Space Vehicles Patrick Phelan (Southwest Research Institute), Michael Epperly (Southwest Research Institute) Presentation: Patrick Phelan - Monday, March 4th, 09:20 AM - Amphitheatre
        An enabling technology platform, the Payload Interface Unit (PIU) is a simple, straightforward secure payload processor design that can service a variety of payloads and operate across virtually any orbit. The PIU design uses low-power electronics, low-mass structure and packaging, economical components and simple architecture to provide a highly capable interface unit. The PIU design uses a redundant, cross-strappable hardware architecture that can easily be pared down to a non-redundant configuration to save mass, power, cost & schedule. The PIU is designed for ease of manufacture, integration, and test for minimum cost and lowest risk. The PIU architecture provides for the encryption and secure transfer of payload commands, telemetry, and mission data (together referred to as “data”) between secure payloads and any untrusted Host Spacecraft. This technology provides the capability to deny the Host Spacecraft unauthorized access to sensitive payload data and allows the payload to conduct its mission without the host operator’s knowledge. Integrity of the Information Assurance (IA) boundary between the payload and the Host Spacecraft is maintained by the PIU architecture and interface design. The PIU incorporates industry-standard digital interface(s) for both the payload and the Host Spacecraft and will be designed for standard spacecraft bus voltage. The PIU supports on-orbit operation in any Earth orbit, including Low Earth Orbit, Medium Earth Orbit, Highly Elliptical Orbit, and Geostationary Earth Orbit. The PIU architecture is capable of either embedding the payload data into the Host Spacecraft command and telemetry stream or using a dedicated communication link. This paper describes the implementation of the PIU, opportunities for further efficiency gains, and potential future capability growth.
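        The abstract does not specify the PIU's cryptographic suite; the sketch below only illustrates the general pattern of authenticated encryption of payload data before it crosses the IA boundary to an untrusted host, using AES-GCM from the Python cryptography package as a stand-in.
```python
# Illustrative sketch of authenticated encryption of payload data before it
# is handed to an untrusted host spacecraft bus. The use of AES-GCM and the
# framing below are assumptions; the PIU's actual crypto suite is not stated.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect(key: bytes, frame_id: bytes, payload: bytes) -> bytes:
    """Encrypt and authenticate one payload frame; bind it to its frame ID."""
    nonce = os.urandom(12)                      # unique per frame
    ct = AESGCM(key).encrypt(nonce, payload, frame_id)
    return nonce + ct                           # host sees only opaque bytes

def unprotect(key: bytes, frame_id: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, frame_id)  # raises if tampered with

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)
    blob = protect(key, b"TLM-0001", b"payload telemetry frame")
    print(unprotect(key, b"TLM-0001", blob))
```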
    • 07.03 Memory and Data Storage for Space Applications Matthew Marinella (Sandia National Laboratories) & Michael Epperly (Southwest Research Institute)
      • 07.0301 Bringing 3D COTS DRAM Memory Cubes to Space Anthony Agnesina (Georgia Institute of Technology), Jim Yamaguchi (Irvine Sensors Corporation), Christian Krutzik (Irvine Sensors Corporation), Jean Yang Scharlotta (Jet Propulsion Laboratory), Sung Kyu Lim (Georgia Institute of Technology) Presentation: Anthony Agnesina - Monday, March 4th, 09:50 PM - Lake/Canyon
        This paper details the architectural choices and implementation challenges faced in the building and validation of a space-qualified 3D DRAM memory system, in an effort to offer high memory capacity, increased bandwidth, fault tolerance and improved size-weight-and-power characteristics needed for harsh space mission environments. Our novel horizontal 3D stacking technology called “Loaf-Of-Bread” (LOB) is used to integrate multiple Commercial-Off-The-Shelf (COTS) DRAM memory dies into a cube structure (3D-M3). A custom Radiation-Hardened-By-Design (RHBD) controller sitting underneath the cube supplements the 3D-M3 in addressing COTS radiation weaknesses by including advanced SEU and SEFI mitigation features such as error detection and correction, scrubbing, device data rebuilding and die management. We developed a custom DDR physical layer (PHY) for 14 independent dies to connect the 3D-M3 to its controller. Validation and functional evaluation of the ASIC controller will be conducted prior to tape-out on a custom FPGA-based emulator platform integrating the 3D-stack. The selected test methodology ensures high-quality RTL and allows the cube structure to be subjected to radiation testing. The proposed design concept allows for flexibility in the choice of the DRAM die in case of technology roadmap changes or unsatisfactory radiation results.
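        The error detection and correction performed by the RHBD controller can be illustrated, in a greatly simplified form, by a single-error-correcting Hamming(7,4) sketch; this toy code works on 4-bit nibbles and is not the controller's actual EDAC scheme.
```python
# Minimal Hamming(7,4) single-error-correction sketch, in the spirit of the
# EDAC and scrubbing functions of a memory controller. This toy code operates
# on 4-bit nibbles; it is not the flight controller's actual scheme.

def encode(nibble):
    """Encode 4 data bits (as a list [d1,d2,d3,d4]) into a 7-bit codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]        # positions 1..7

def correct(word):
    """Return (corrected codeword, flipped position or 0 if clean)."""
    w = list(word)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]              # checks positions 1,3,5,7
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]              # checks positions 2,3,6,7
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]              # checks positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        w[pos - 1] ^= 1                         # flip the erroneous bit back
    return w, pos

if __name__ == "__main__":
    cw = encode([1, 0, 1, 1])
    hit = list(cw); hit[5] ^= 1                 # simulate a single-event upset
    fixed, pos = correct(hit)
    print(fixed == cw, pos)                     # True, 6
```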
    • 07.04 Avionics for Small Satellites, Nano-Satellites, and CubeSats James Lumpp (University of Kentucky) & John Dickinson (Sandia National Laboratories)
      • 07.0401 Give Me More: Increasing Output for the Cyclone Global Navigation Satellite System (CYGNSS) Mission Robert Klar (Southwest Research Institute), William Wells (), Jillian Redfern (Southwest Research Institute), Ronnie Killough (Southwest Research Institute) Presentation: Robert Klar - Thursday, March 7th, 08:30 AM - Dunraven
        In December 2016, the National Aeronautics and Space Administration (NASA) launched a constellation of eight spacecraft for the Cyclone Global Navigation Satellite System (CYGNSS) mission in Low-Earth Orbit (LEO) at an inclination of 35 degrees. The mission’s science goal is to understand the coupling between ocean surface properties, moist atmospheric thermodynamics, radiation, and convective dynamics in the inner core of Tropical Cyclones (TCs). CYGNSS uses an innovative technique, Global Navigation Satellite System-Reflectometry (GNSS-R), to derive surface wind speed by measuring the strength of the specular reflection of Global Positioning System (GPS) signals from the surface of the ocean. Despite limited onboard processing resources and relatively short ground station contacts, the CYGNSS data processing system has been effective – it has collected and successfully delivered to the science team hundreds of gigabytes of data in just eighteen months of operation. Since GNSS-R has proven useful, scientists are looking at other applications for observations over water and land. To make accurate measurements over land, it is of considerable interest to increase the data production rate for the Delay-Doppler Maps (DDMs), one of the chief onboard data products. This increase would significantly reduce the smearing effect that results from integrating over a longer time interval. This paper reexamines the flight segment and ground segment data processing for CYGNSS. It considers some of the limiting constraints and explores some changes that have improved performance or may potentially improve it. For the flight segment, we evaluate enhancements such as alternative formatting and additional data compression. For the ground segment, we look at improved planning and increased ground contacts.
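        One flight-segment enhancement mentioned above is additional compression of the Delay-Doppler Maps; the sketch below shows a generic lossless approach (delta encoding along one axis followed by a standard entropy coder), offered only as an illustration of the idea rather than the CYGNSS flight algorithm.
```python
# Generic lossless compression sketch for a Delay-Doppler Map (DDM):
# delta-encode along the delay axis, then apply a standard entropy coder.
# This illustrates the idea only; it is not the CYGNSS flight algorithm.
import zlib
import numpy as np

def compress_ddm(ddm: np.ndarray) -> bytes:
    work = ddm.astype(np.int32)
    deltas = np.diff(work, axis=0, prepend=0).astype(np.int32)  # row 0 kept as-is
    return zlib.compress(deltas.tobytes(), level=9)

def decompress_ddm(blob: bytes, shape) -> np.ndarray:
    deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
    return np.cumsum(deltas, axis=0).astype(np.uint16)           # assumes 16-bit counts

if __name__ == "__main__":
    ddm = np.random.poisson(30, size=(17, 11)).astype(np.uint16)  # toy 17x11 DDM
    blob = compress_ddm(ddm)
    assert np.array_equal(decompress_ddm(blob, ddm.shape), ddm)
    print(len(blob), "compressed bytes for", ddm.nbytes, "raw bytes")
```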
      • 07.0402 Solution to Data Congestion in Space Visweswaran Karunanithi (Technical University Delft/ Innovative Solutions In Space B.V) Presentation: Visweswaran Karunanithi - Thursday, March 7th, 08:55 AM - Dunraven
        A major paradigm shift is occurring in the type of missions carried out by present nano/microsatellites. Between 2003 and 2014, the number of nano-satellite missions grew gradually; most of the missions during this period were either simple scientific missions from universities or one-off technology demonstration missions by industry startups. With the successful technology demonstrations of nano/microsatellite missions, 2015 to 2017 saw a major change in this field. The number of nano-satellites launched in 2017 nearly tripled compared to 2016, mainly because industries had started to develop constellation missions following the success of the earlier one-off missions. It is predicted that from now to 2022, ~72% of the nano/microsatellites launched will be either Earth observation (EO)/Remote Sensing (RS) constellations or Communication satellite constellations. The major bottleneck for these missions of the future is the ability to down-link all the data generated onboard these satellites. Also, the present frequency spectrum allocations in S-band are not enough to meet the needs of future missions. So, there is a need to start investigating the use of Ka-band (20 to 30 GHz) frequency allocations to meet the communication technology demands of more complex missions lined up in the coming decade. Also, with over $50B invested in the mega-constellations that intend to provide internet access from LEO (Low Earth Orbit)/MEO (Medium Earth Orbit), there is a need to investigate if making use of this network could contribute to the solution for data congestion of nano/microsatellite missions. This work investigates the challenges of using Ka-band frequencies for nano/microsatellite gigabit-rate communications for three use-cases: 1) Direct satellite to ground links: A remote sensing application with a multi-spectral imaging payload is considered as a use-case to determine the requirements on the communication system; 2) Inter-satellite links: a swarm of nano-satellites forming a self-deploying sensor network for earth observation and radio astronomy is considered; and 3) LEO to GEO links and LEO to LEO links: Internet-of-things (IoT)/Machine-to-Machine (M2M) services are considered as examples. Since this technology is intended for nano/micro satellites, power efficiency, bandwidth efficiency, and radiation tolerance are very significant factors, and this work compares present semiconductor technologies to identify suitable candidates.
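        The feasibility of each use case ultimately comes down to a link budget; the sketch below walks through free-space path loss and received C/N0 for a generic Ka-band LEO downlink, with every antenna, power, and range figure chosen purely for illustration.
```python
# First-order Ka-band downlink link budget for a nanosatellite, as a sketch.
# Every numeric parameter below (EIRP, G/T, range, rate) is illustrative only.
import math

BOLTZMANN_DBW = -228.6        # 10*log10(k), in dBW/K/Hz

def fspl_db(freq_hz, range_m):
    """Free-space path loss in dB."""
    return 20 * math.log10(4 * math.pi * range_m * freq_hz / 299_792_458.0)

def cn0_dbhz(eirp_dbw, gt_dbk, freq_hz, range_m, misc_loss_db=3.0):
    """Carrier-to-noise-density ratio at the ground station."""
    return eirp_dbw + gt_dbk - fspl_db(freq_hz, range_m) - misc_loss_db - BOLTZMANN_DBW

if __name__ == "__main__":
    cn0 = cn0_dbhz(eirp_dbw=20.0,        # small Ka-band antenna + ~2 W PA (assumed)
                   gt_dbk=30.0,          # ground-station figure of merit (assumed)
                   freq_hz=26e9,         # Ka-band downlink
                   range_m=1.5e6)        # slant range near low elevation, LEO
    rate_dbhz = 10 * math.log10(100e6)   # 100 Mbit/s target
    print(f"C/N0 = {cn0:.1f} dB-Hz, Eb/N0 = {cn0 - rate_dbhz:.1f} dB")
```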
      • 07.0404 Towards an Integrated GPU Accelerated SoC as a Flight Computer for Small Satellites Caleb Adams (University of Georgia), Allen Spain (University of Georgia ), Jackson Parker (University of Georgia), Matthew Hevert (University of Georgia), David Cotten (University of Georgia), James Roach (University of Georgia) Presentation: Caleb Adams - Thursday, March 7th, 09:20 AM - Dunraven
        Many small satellites are designed to utilize cutting edge technology with the goal of rapidly advancing space based capabilities. As a result, many components take advantage of developments from the miniaturization of smartphone technology. Within the past 2 years, the UGA Small Satellite Research Laboratory has extended this concept into embedded GPUs for high-performance processing in LEO. Here we showcase advances in our research of high-performance space-based computation by integrating a traditional flight computer with existing miniaturized GPU/SoC systems. Such a system paves the way for many of NASA’s goals that require space based AI, neural networks, computer vision, and high performance computing. Our system fits a standard CubeSat PC104+ form factor, and implements many standard protocols such as I2C, SPI, UART, and RS422. The system also has several GPIO pins, 2 USB-C ports, a micro USB port for flashing, an Ethernet port, and a micro SD card slot for development. Additionally, the system is designed to be modular, so that GPU accelerated SoCs can be stacked to form a distributed system. For our primary computer, which handles I/O and initializes processes on the SoC, we choose to use the radiation-tolerant SmartFusion2 SoC with an ARM Cortex-M3 processor and an FPGA. In addition to this primary computer, we use the Nvidia Tegra X2/X2i as the GPU/SoC workhorse. The primary computer and the TX2i are designed to share memory space with peripherals mounted onto the board, so that no significant file transfer is required between the subsystems. Additionally, Nvidia's Pascal architecture enables GPU-CPU or GPU-GPU communication without PCIe, enabling dense interconnected networks for monitoring and computation. To address thermal concerns, we cap the TX2i’s power draw at 7.5 W, provide recommendations for thermal interface materials, and ensure that the primary computer only enables the GPU/SoC when parallel computation is specifically requested. Furthermore, radiation mitigation techniques are explored with ECC, software mitigation techniques, and aluminized Kapton sheets. In conclusion, this system is a step towards a miniaturized high-performance flight computer well suited for future computational demands.
      • 07.0405 Accurate Star Tracker Simulation with On-Orbit Data Verification Laila Kazemi (Ryerson University), John Enright (Ryerson University) Presentation: Laila Kazemi - Thursday, March 7th, 09:45 AM - Dunraven
        In this study we will develop high-fidelity simulation and testing procedures for star trackers to evaluate sensor performance in dynamic conditions. These high-fidelity simulations use star tracker calibration parameters, the mission trajectory, and a star catalog to predict sensor accuracy and the availability of an attitude solution. Synthetic star tracker images are time-efficient and cost-effective tools for algorithm development and for the further evolution of small-satellite star trackers. Existing simulations either do not consider the effects of spacecraft motion on star images or use idealized models. The problem arises when the results obtained with simulations differ from the results of hardware-in-the-loop tests due to model simplifications. Since a small-satellite sensor development budget might not allow for frequent hardware and orbital tests, accurate simulations are an important and essential part of design. This work will close the gaps between simulation, ground testing, and on-orbit operations. To achieve high-fidelity simulations, we consider factors such as star characteristics, lens effects like vignetting and distortion, spacecraft slew rate, and the shape of imaged stars under dynamic conditions. Different intensity models are assessed to simulate the detector intensity count that most closely represents the on-orbit collected data. We use the spacecraft’s trajectory and angular motion to model and simulate the star path on the image frame. We will develop ground testing strategies to verify our models and results. We use on-orbit data from ST-16 series star trackers as a benchmark to simulate a sequence of images representing the returned telemetry from the sensor at different angular velocities. Furthermore, we will develop and discuss lab and field testing set-ups and procedures to verify the simulation models. This work unifies and relates the simulation, lab testing, and on-orbit results such that they can be used for studying and optimizing the star tracker. Validating our simulations against separate on-orbit telemetry sets, we achieved measurement noise of the same order of magnitude in dynamic conditions.
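        One effect the simulation must capture is how spacecraft angular motion smears a star across the detector during an exposure; the sketch below approximates that path for a simple pinhole camera model, with the focal length, pixel pitch, slew rate, and exposure time assumed for illustration.
```python
# Sketch of the star-smear geometry under spacecraft slew: approximate the
# path a star traces on the focal plane during one exposure. Focal length,
# pixel pitch, slew rate and exposure time are illustrative assumptions.
import numpy as np

def star_smear_px(focal_len_mm, pixel_pitch_um, slew_rate_dps, exposure_s, n=50):
    """Return the star's x-position (in pixels) sampled across the exposure,
    for a small rotation about an axis perpendicular to the boresight."""
    t = np.linspace(0.0, exposure_s, n)
    angle_rad = np.deg2rad(slew_rate_dps) * t
    x_mm = focal_len_mm * np.tan(angle_rad)          # pinhole projection
    return x_mm * 1000.0 / pixel_pitch_um            # mm -> pixels

if __name__ == "__main__":
    path = star_smear_px(focal_len_mm=16.0, pixel_pitch_um=5.5,
                         slew_rate_dps=2.0, exposure_s=0.1)
    print(f"smear length ~ {path[-1] - path[0]:.1f} px")
```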
    • 07.05 Power Electronics for Space Applications Christopher Iannello (NASA - NESC ) & Peter Wilson (University of Bath)
      • 07.0501 Mathematical Programming Based Approach to Modular Electric Power System Design Allen Flath (University of Kentucky), Aaron Cramer (), James Lumpp (University of Kentucky) Presentation: Allen Flath - Friday, March 8th, 08:30 AM - Amphitheatre
        Satellite power systems can be understood as islanded DC microgrids supplied by specialized and coordinated solar cell arrays augmented by electrochemical battery systems to handle high-power loads and periods of eclipse. The periodic availability of power, the limited capacity of batteries, and the dependence of almost all mission service on power consumption create a unique situation in which temporal power and energy scarcity exist. Any satellite power system must be properly designed so the power generation and energy storage portions of the system have enough generation potential and storage capacity to reliably meet the load requirements of a given satellite mission. A multi-period model of a small-orbital satellite power system’s performance over a mission’s duration can be constructed. A modular power system architecture is used to characterize the system’s constraints. The periodic and generally predictable nature of a satellite’s mission environment provides a useful opportunity for these techniques. Using mathematical programming, an optimization problem can be posed such that the optimal power and energy ratings for the power system are determined for any load-schedule imposed by a given mission’s requirements. The optimal control path of the electrical power system over a mission’s duration is also determined when the mathematical program is solved. KySat-2, a CubeSat nanosatellite that was designed by the University of Kentucky and launched in 2013, will be presented as a reference configuration with a specific set of mission requirements, and the design solution determined by the new tool can be compared to the actual EPS implemented by the design team.
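        A heavily simplified version of the multi-period program can be posed with a generic LP solver; the sketch below sizes the minimum battery capacity for a periodic orbit load/generation schedule using scipy.optimize.linprog, with all power and energy numbers invented and no claim to match the KySat-2 design tool.
```python
# Heavily simplified multi-period EPS sizing posed as a linear program:
# choose the smallest battery capacity C (Wh) that keeps the stored energy
# within [0, C] over a periodic orbit schedule. Generation and load numbers
# are invented; this is a sketch, not the tool described in the paper.
import numpy as np
from scipy.optimize import linprog

gen  = np.array([12.0, 12.0, 12.0, 12.0, 0.0, 0.0])   # Wh generated per period (eclipse at end)
load = np.array([ 6.0,  6.0, 10.0,  6.0, 8.0, 8.0])   # Wh consumed per period
T = len(gen)

# Decision vector x = [C, E_0..E_{T-1}, s_0..s_{T-1}]  (s_t = shed excess energy)
n = 1 + 2 * T
c = np.zeros(n); c[0] = 1.0                            # minimize capacity C

# Periodic energy balance: E_{t+1} - E_t + s_t = gen_t - load_t
A_eq = np.zeros((T, n)); b_eq = gen - load
for t in range(T):
    A_eq[t, 1 + (t + 1) % T] += 1.0
    A_eq[t, 1 + t]           -= 1.0
    A_eq[t, 1 + T + t]        = 1.0

# State of charge may never exceed capacity: E_t - C <= 0
A_ub = np.zeros((T, n)); b_ub = np.zeros(T)
A_ub[:, 0] = -1.0
A_ub[np.arange(T), 1 + np.arange(T)] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n)
print(f"minimum battery capacity ~ {res.x[0]:.1f} Wh")   # ~16 Wh for this schedule
```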
    • 07.06 Electronics for Extreme Environments Mohammad Mojarradi (Jet Propulsion Laboratory)
      • 07.0601 Cold Survivable Distributed Motor Controller (CSDMC) Gary Bolotin (Jet Propulsion Laboratory, California Institute of Technology), Don Hunter (Jet Propulsion Laboratory), Douglas Sheldon (Jet Propulsion Laboratory) Presentation: Gary Bolotin - Thursday, March 7th, 04:55 PM - Jefferson
        This paper presents the results of NASA’s COLDTECH development entitled “Cold Survivable Distributed Motor Controller (CSDMC)”. This work addresses the need for low mass, power and volume motor control electronics and their associated cabling. Landed payload mass of ocean world missions typically requires a spacecraft launch mass of 7-10x the landed mass due to the required propellant to get the payload to the surface. Reduction of landed mass leads to cheaper, more frequent missions and/or increased science return. This work addresses this need by developing a distributed electronics architecture, which places control and power electronics near or at actuators and instruments. This effort will result in a 10X reduction in harness mass, enabling a significant increase in science payload which then enables more capable sample acquisition, delivery and analysis systems on these missions. Placing the control and power conversion electronics at or near the actuators or instruments is the cornerstone of our distributed architecture. To do this, we developed the technology necessary to distribute the electronics and place them on a shared interface and power bus. This enables a significant reduction in cable mass along with its associated complexity. This allows spacecraft designers to take advantage of volume at the extremities that would normally not be utilized. The challenge to meeting these goals is reducing the Size, Weight, and Power (SWaP) of the distributed electronics and adapting them to meet the requirements to survive the extreme temperature and radiation at the exposed extremities. We met these requirements by combining JPL's expertise in cold-capable electronics, packaging and power conversion together with state-of-the-art high-density interconnect technology. This combination will result in a unique high density technology that extends the life of landed missions and also allows the missions to do more science through the mass and volume that is made available. In this paper we intend to discuss the technologies and system design to achieve these goals in support of ocean world missions. These technologies include the development of our motor control modules, a point-of-load regulator and isolated converter modules along with the packaging technology necessary to allow our electronics to survive the extreme temperatures.
      • 07.0602 Thermally-resilient COTS CMOS Sensor Packaging Approach for Mars2020 Enhanced Engineering Cameras Colin Mckinney (Jet Propulsion Laboratory) Presentation: Colin Mckinney - Thursday, March 7th, 05:20 PM - Jefferson
        Electronics designed for NASA planetary missions, such as those to the Martian surface environment, require wide-temperature survivable electronics packaging designs to ensure high-reliability avionics and instrumentation. The planetary surface temperature range of -135C to +40C dictates that electronics packaging solutions provide resiliency to large thermal excursions to counteract mismatches in the coefficient of thermal expansion in the myriad materials found within spaceborne electronics. The Mars2020 Enhanced Engineering Cameras (EECAMs) are a collection of medium- and wide-angle cameras used across the Mars2020 Flight System. The EECAMs are a new development for the Mars2020 mission and provide significant improvements for engineering functional imaging and surface productivity use cases when compared to heritage MSL Engineering cameras. A total of nine EECAMs will be used across the Mars2020 Flight System for varying imaging use cases. Eight of these cameras are configured as four stereo pairs and are used to create stereoscopic image meshes used primarily for in-situ blind drive, auto-navigation, robotic arm workspace imaging, and rover localization operations. Additionally, the rover’s Sample Caching System (SCS) intends to use an EECAM to document sample tube operations. The Mars2020 Enhanced Engineering Cameras use a commercial off the shelf (COTS) CMOS image sensor. The CMOS image sensor is packaged in a 143-pin Ceramic Pin Grid Array (PGA), an electronics packaging design that proves challenging to survive in deep thermal cycling environments such as the Martian surface. Early in the EECAM development, breadboard camera electronics that used conventional thru-hole soldering techniques were subjected to limited thermal cycling to investigate survivability in Martian surface thermal environments from -135C to +70C. Functional testing following 1000 cycles showed successful operation of the COTS CMOS detector, however after 2000 cycles the detector was inoperable. Visual inspection of the part showed that the solder joints of the image sensor exhibited severe cracking in a substantial number of pins, and in some cases resulted in complete shearing of the pins from the ceramic package substrate. A novel approach to mounting the COTS CMOS image sensor to the flight EECAM electronics Printed Wiring Assembly (PWA) was used to ensure wide-temperature survivability of an otherwise terrestrial COTS detector packaging design. The thru-hole pins of the ceramic package were isolated from the PWA and a thermally-compliant small diameter wire is used to electrically connect the image sensor package to surface-mount pads on the PWA. Risk-reduction testing proved the EECAM COTS image sensor packaging approach to successfully survive 3015 thermal cycles of -135C to +70C. We will present the steps taken to derive the thermally-resilient electronics packaging design of the Mars2020 EECAM electronics. We will highlight analyses and empirical test results that led to a wide-temperature-survivable COTS component packaging design. Details of thermal cycle testing, in-process inspections, and final packaging design will be presented.
      • 07.0603 Modelling of Select Mixed-Signal Electronics for Cold Temperature Environments William Norton () Presentation: William Norton - Thursday, March 7th, 09:00 PM - Jefferson
        Future NASA missions will be subject to operating in extreme cold using mixed-signal electronic components. At present, models for mixed-signal components do not include significant temperature dependence, ADCs in particular. By developing comparatively generic models that are cold-capable, the ability to integrate mixed-signal parts into a larger system greatly increases. We have focused on Analog to Digital Converters (ADCs) because of their widespread, critical usage in avionic subsystems and lack of available cold macro models. This paper provides an overview of testing and modelling of a commercial off the shelf (COTS) ADC across cryogenic temperature for the purposes of trend identification, performance prediction and reliability scoping with possible post-correction capability. Testing of the device was made possible through development of an evaluation board that would service the circuitry requirements of multiple devices. Standard performance tests including sine wave FFT sweeps, best-fit sine histograms, and step responses provide information regarding part performance. To augment this information, we will perform additional tests including chirp input and changing clock duty cycle while implementing system identification of device behavior. These tests should permit a better understanding of the part and reveal degradation trends in a more descriptive way. We have not observed bit errors in testing thus far and we feel that these represent a serious degradation of device performance beyond that which a behavioral model can capture, especially in the absence of schematics or IBIS-type models. Because of this, effort focused on inclusion of various converter non-idealities and modeling of the front-end track and hold amplifier while assuming digital quantization to be ideal, a situation borne out so far. At the beginning of the project we constructed a simplistic model with only a priori information gleaned from existing literature, basic understanding of part internals, and datasheets. Since existing information regarding this ADC beyond its normal operating range is not available, there was considerable disagreement between predicted and measured results, and trend contributors were grossly misidentified. Once the model was calibrated according to measurements, there was much better agreement, including enhanced predictive capability regarding reference drift, static distortion, offset and gain error drift. Examination of the derived transfer function, drifting reference, and INL measurements present the possibility of post-correction for a given converter. The end result is a cold-capable behavioral model for an ADC that is user-tunable and allows one to examine the effects of different error sources on the device’s overall performance. Some of these error sources, such as jitter, aperture delay, and integral nonlinearity (INL) incorporate dynamic effects of the conversion process that are important in high-frequency communications applications. Others, such as offset error, gain error and reference drift, incorporate static effects that are of greater import in measurement applications. The development of this model will hopefully permit greater integration of commercial components in larger avionics systems, decreasing development time while increasing flexibility of designers when trying to meet mission objectives.
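        The structure of such a behavioral model, an ideal quantizer wrapped with offset, gain error, reference drift, and aperture jitter, can be sketched in a few lines; the error magnitudes below are placeholders for the temperature-dependent values the paper extracts from measurement.
```python
# Sketch of a behavioral ADC model: an ideal quantizer wrapped with a few of
# the non-idealities discussed above (offset, gain error, reference drift,
# aperture jitter). All error magnitudes are placeholders for the
# temperature-dependent values that would be extracted from measurement.
import numpy as np

def adc_model(v_in, t_sample, bits=12, v_ref=2.5, offset_v=1e-3,
              gain_err=5e-4, ref_drift=-2e-3, jitter_s=2e-12, rng=None):
    """Convert the analog signal v_in(t), sampled at times t_sample, to codes."""
    rng = rng if rng is not None else np.random.default_rng(0)
    v_ref_eff = v_ref * (1.0 + ref_drift)                         # drifting reference
    t_eff = t_sample + rng.normal(0.0, jitter_s, t_sample.shape)  # aperture jitter
    v = v_in(t_eff) * (1.0 + gain_err) + offset_v                 # static errors
    lsb = v_ref_eff / 2 ** bits
    return np.clip(np.round(v / lsb), 0, 2 ** bits - 1).astype(int)

if __name__ == "__main__":
    fs, f0 = 10e6, 97.0e3
    t = np.arange(4096) / fs
    sine = lambda tt: 1.25 + 1.2 * np.sin(2 * np.pi * f0 * tt)
    codes = adc_model(sine, t)
    print(codes.min(), codes.max())   # should span most of the 12-bit range
```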
      • 07.0604 GaN Photodetector Measurements of UV Emission from a Gaseous CH4/O2 Hybrid Rocket Igniter Plume Hannah Alpert (Stanford University), Ashley Karp (Jet Propulsion Laboratory), Jason Rabinovitch (Jet Propulsion Laboratory, California Institute of Technology), Elizabeth Jens (Jet Propulsion Laboratory) Presentation: Hannah Alpert - Thursday, March 7th, 09:25 PM - Jefferson
        Owing to its wide (3.4 eV) and direct-tunable band gap, GaN is an excellent material platform to make UV photodetectors. GaN is also stable in radiation-rich and high-temperature environments, which makes photodetectors fabricated using this material useful for in-situ flame detection and combustion monitoring. In this paper, we use a GaN photodetector to measure ultraviolet (UV) emissions from a hybrid rocket motor igniter plume. The GaN photodetector, built at the Stanford Nanofabrication Facility, has 5 µm wide regions of AlGaN/GaN two-dimensional electron gas (2DEG) electrodes spaced by intrinsic GaN channels. In most applications, the ideal photodetector would exhibit a high responsivity to maximize the signal, in addition to a low dark current to minimize quiescent power. A performance metric which simultaneously captures these two values is the normalized photocurrent-to-dark current ratio (NPDR), defined as the ratio of responsivity to dark current, with units of W^-1. The NPDR of our device is record-high with a value of 6 x 10^14 W^-1 and the UV-to-visible rejection ratio is 4 x 10^6. The high rejection ratio is essential as it eliminates cross-sensitivity of the detector to visible light. The spectral response can be modeled as a rectangular window with a peak responsivity of 7,800 A/W at 365 nm and a bandwidth of 16 nm. The photodetector was placed at three radial distances (3", 5.5", and 7") from the base of the igniter plume and the oxidizer-to-fuel ratio (O2/CH4) was varied to alter the size and strength of the plume. The current measured from the device was proportional to the intensity of the emission from the plume. The data demonstrates a clear trend of increasing current with increasing fuel concentration. Further, the current decreases with larger separation between the photodetector and the plume. A calibration curve constructed from the responsivity measurements taken over four orders of magnitude was used to convert the current into incident optical power. By treating the plume as a black body, and calculating a radiative configuration factor corresponding to the geometry of the plume and the detector, we calculated average plume temperatures at each of the three oxidizer-to-fuel ratios. The estimated plume temperatures were between 1000 and 1500 K, with the higher values corresponding to higher fuel concentration. Further, the temperature is roughly invariant for a fixed fuel concentration for the three tested distances. These data demonstrate the functionality of GaN as a material platform for use in harsh environment flame monitoring.
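        The chain from measured photocurrent to an average plume temperature (responsivity calibration, radiative configuration factor, black-body inversion) can be written out compactly; the sketch below uses the quoted 7,800 A/W peak responsivity and 16 nm bandwidth but invents the photocurrent, detector area, and view factor purely for illustration.
```python
# Sketch of the current -> optical power -> plume temperature chain described
# above. The 7,800 A/W responsivity and 16 nm bandwidth come from the abstract;
# the photocurrent, detector area and configuration factor are invented, and
# the plume is treated as an ideal black body.
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23          # Planck, light speed, Boltzmann

def band_exitance(T, lam=365e-9, dlam=16e-9):
    """Black-body spectral exitance integrated over a narrow band, W/m^2."""
    M = 2 * np.pi * H * C**2 / lam**5 / (np.exp(H * C / (lam * KB * T)) - 1.0)
    return M * dlam

def plume_temperature(photocurrent_a, responsivity_a_per_w, det_area_m2, view_factor):
    """Invert P_received = A_det * F * M_band(T) for the temperature T."""
    p_opt = photocurrent_a / responsivity_a_per_w
    f = lambda T: det_area_m2 * view_factor * band_exitance(T) - p_opt
    return brentq(f, 500.0, 4000.0)

if __name__ == "__main__":
    T = plume_temperature(photocurrent_a=2.5e-10,   # assumed measurement
                          responsivity_a_per_w=7800.0,
                          det_area_m2=1e-7,         # ~0.1 mm^2 active area (assumed)
                          view_factor=5e-3)         # assumed geometry
    print(f"estimated plume temperature ~ {T:.0f} K")
```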
      • 07.0607 A New Paradigm for Computing for Digital Electronics under Extreme Environments Naveen Kumar Macha (University of Missouri Kansas City), Bhavana Tejaswini Repalle (), Md Arif Iqbal (University of Missouri-Kansas City), Mostafizur Rahman (University of Missouri Kansas City) Presentation: Naveen Kumar Macha - Thursday, March 7th, 09:50 PM - Jefferson
        Digital CMOS based Integrated Circuits are susceptible to both permanent and transient errors under extreme environments such as space due to radiation. The prime causes of failures are the junction breakdowns and latchups in MOSFETs, which are expected to worsen with technology scaling. We propose a new computing fabric that not only provides a scalable alternative to traditional CMOS but also incorporates intrinsic features for radiation resilience. At the core of this fabric are metallic nano-lines organized in a compact manner such that whenever signal transitions take place in these lines, the sum of their crosstalk interference gets induced through virtual coupling capacitance in another metal nano-line that was floating (not connected to any signal/VDD/GND); the transitioning signals are inputs and the net induced charge is the output. The coupling strength between the input and output nano-lines, and the net charge induced, determine what logic is being computed. Since the computation relies on the interference between signals, metal lines are the primary components and very few devices are required. Requiring fewer devices implies lower susceptibility. A byproduct of using interference for computing, as opposed to depending on device switching, is that when a high-voltage spike is induced on interconnects due to radiation, the charge is shared among the coupling capacitances, preventing extra charge build-up at device nodes. In addition to these, the devices used in this fabric are ultra-thin-body SOI junction-less nanowire transistors, where a CMOS-like latchup mechanism is impossible since the substrate is not conductive. Our simulation results validate the concepts for radiation hardening and also indicate the potential for large improvements. For a full adder design, density benefits were over 5x. Under worst-case radiation scenarios, the simulations indicated superior resilience compared to CMOS.
    • 07.07 Fault Tolerance, Autonomy, and Evolvability in Spacecraft and Instrument Avionics Tom Hoffman (Jet Propulsion Laboratory) & Didier Keymeulen (Jet Propulsion Laboratory)
      • 07.0702 Dynamic Fault Tree Analysis for a Distributed Onboard Computer Kilian Hoeflinger (German Aerospace Center - DLR), Sascha Müller (German Aerospace Center - DLR), Ting Peng (), Moritz Ulmer (German Aerospace Center (DLR)), Daniel Lüdtke (German Aerospace Center - DLR), Andreas Gerndt (German Aerospace Center) Presentation: Kilian Hoeflinger - Monday, March 4th, 11:25 AM - Gallatin
        Future space missions will demand greater capability for processing sensor data on the onboard computers of satellites than current space technology can provide. Earth observation and robotics are important drivers for this trend. Limited downlink bandwidth, high resolution sensors and more rigid real-time control algorithms, dedicated to increasing satellite autonomy, drive the need for growing onboard computing performance. To overcome these challenges, new high-performance onboard computers are necessary, leading to an increased consideration of Commercial-Off-The-Shelf (COTS) components. The DLR project Scalable Onboard Computing for Space Avionics (ScOSA) targets these challenges with a complex onboard computer design, consisting of space-qualified and COTS computing devices, arranged as a heterogeneous SpaceWire-interconnected cluster in space. However, the utilization of COTS components in the harsh space environment imposes new challenges on the system. Radiation of various wavelengths has diverse degrading effects on a spacecraft in a celestial or transfer orbit. The remoteness of the embedded systems and the inherent impossibility of conducting physical maintenance demand a dependability-driven, fault-tolerance-centric development of the system. Therefore, Fault Detection, Isolation and Recovery (FDIR) mechanisms are important functionalities of systems like ScOSA. These enable the preservation of the demanded dependability levels for an embedded system in space. To ensure this dependability, the FDIR subsystem requires a detailed analysis of potential faults in the system. For this purpose, we employed Dynamic Fault Tree (DFT) analysis, a methodology which is used to model faults and their temporal propagation through the onboard computer. With this paper, we contribute a new building block for showing the applicability of DFT analysis and for closing the gap between theory and practical application of DFTs. Finally, the result of the analysis enables a tailoring of the overall ScOSA FDIR subsystem, locating special points of interest and the configuration for the detection, isolation and recovery functionalities.
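        The temporal semantics that distinguish dynamic fault trees from static ones can be illustrated with a tiny Monte-Carlo evaluation of a priority-AND (PAND) gate over basic-event failure times; the tree structure and failure rates below are illustrative and are not the ScOSA model.
```python
# Tiny Monte-Carlo evaluation of a dynamic fault tree, to illustrate the
# temporal semantics (a PAND gate fails only if its inputs fail in order).
# The structure and failure rates below are illustrative, not the ScOSA model.
import math
import random

INF = math.inf

def pand(*times):
    """Priority-AND: fails when the last input fails, but only if the
    inputs failed in left-to-right order; otherwise it never fails."""
    return times[-1] if all(a <= b for a, b in zip(times, times[1:])) else INF

def or_gate(*times):  return min(times)

def sample_failure_time(rate_per_hour):
    return random.expovariate(rate_per_hour)

def system_failure_time():
    primary = sample_failure_time(2e-5)    # primary processing node (assumed rate)
    spare   = sample_failure_time(2e-5)    # spare node (assumed rate)
    switch  = sample_failure_time(5e-6)    # interconnect / reconfiguration (assumed)
    # Losing the spare after the primary takes processing down; losing the
    # interconnect takes the system down regardless of order.
    return or_gate(pand(primary, spare), switch)

if __name__ == "__main__":
    random.seed(1)
    mission_h, n = 5 * 365 * 24, 100_000
    failures = sum(system_failure_time() <= mission_h for _ in range(n))
    print(f"P(failure within 5 years) ~ {failures / n:.3f}")
```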
      • 07.0703 Automating and Integrating HW/SW Co-Verification with Embedded MPSoC Instrument Avionics Pamela Zhang (California Institute of Technology), Didier Keymeulen (Jet Propulsion Laboratory) Presentation: Pamela Zhang - Monday, March 4th, 11:50 AM - Gallatin
        The emergent technology of System-on-Chip (SoC) and UltraScale+ multiprocessor system-on-chip (MPSoC) devices promises lighter, smaller, cheaper, and more capable and reliable space electronic systems that could help to unveil some of the most treasured secrets in our universe. This technology is an improvement over the technology that is currently used in space applications, which lags behind state-of-the-art commercial-off-the-shelf (COTS) equipment by several generations. SoC and UltraScale+ MPSoC technology integrates all computational power required by next-generation space exploration science instruments onto a single chip. Unfortunately, the traditional independent Ground Support Equipment (GSE) systems used to test this new, extremely complex environment through the acquisition, processing, and visualization of hyperspectral images can be prohibitive in terms of hardware, software, and development time. This paper describes the new automation capabilities of hardware/software co-verification tools (LiveCheckHSI) for the Xilinx Zynq-based control and data handling avionics system that have been developed at the Jet Propulsion Laboratory (JPL) for next generation imaging spectrometers (NGIS). The flight NGIS avionics acquires and compresses images in real-time, in addition to programming the spectrometer (frame rate, exposure time), focus step motor, and heaters and reporting telemetry. The first part of the paper describes the automation tools integrated into the remote LiveCheckHSI such as AutoSweep, Record and Script. Beyond the SoC, the more recent emergence of UltraScale+ MPSoC technology combining heterogeneous supercomputing capability with high performance FPGAs allows for the integration of the LiveCheckHSI verification tools into the deployment device itself. This capability permits on-chip verification for flight systems and extends embedded testing and verification tools beyond the current pre-implementation formal verification and post-implementation HardwareDebug/Chipscope with hardware in the loop. The second part of the paper presents this concept by describing how the Yocto build system and the Qt C++ Framework create an integrated on-chip LiveCheckHSI for hyperspectral image processing and visualization deployed on the UltraScale+ MPSoC. This paper presents successful leveraging of the quad-core 64-bit ARM processor of the UltraScale+ MPSoC to execute real-time data processing and visualization using the Qt LiveView application. The software created in this process also follows the standards necessary to allow expansion and deployment on other devices.
    • 07.08 Guidance, Navigation, and Control Technologies for Space Applications Giovanni Palmerini (Sapienza Universita' di Roma) & John Enright (Ryerson University)
      • 07.0802 MEMS-based Gyro-stellar Inertial Attitude Estimate for NSPO Micro-Sat Program Yeongwei Wu (), Wei Ting Wei (NSPO) Presentation: Yeongwei Wu - Tuesday, March 5th, 08:30 AM - Dunraven
        After successfully completing two space programs, FORMOSAT-5 (FS-5) and FORMOSAT-7 (FS-7) (FS-5 was launched on August 24, 2017, while FS-7 is waiting for launch in summer of 2018), the National Space Organization (NSPO) of the National Applied Research Laboratories (NARL) in Taiwan, Republic of China (ROC) recently initiated the Third Phase of its Space Program for the next ten years, developing a series of micro-satellites (less than 250 kg) to support future global space missions such as regional navigation/communication, tactical reconnaissance/imaging, and space weather/situation awareness. The near-term space mission calls for a constellation of three LEO satellites, each carrying a remote sensing device providing 1-meter image resolution in black and white and 2-meter image resolution in colour. In addition to performing the primary remote sensing/scientific missions, the developed micro-satellite will serve as a space qualification and demonstration testbed/platform for many NSPO future-built critical space components such as Fibre-Optic Gyros (FOGs), a mini star tracker, a reaction control subsystem, etc. Although the Attitude and Orbit Control Subsystem (AOCS) for the micro-sat will take heritage from the FS-5 bus, many design challenges for the AOCS hardware/software need to be addressed to satisfy the new mission requirements, such as advanced imaging capability under tight bus size, weight, and volume constraints. One of the major departures from the heritage design would be the use of a gyro-stellar (GS) inertial attitude estimate (IAE) instead of the original stellar IAE as the primary attitude solution for the AOCS. With this new design architecture we also investigate the potential incorporation of Microelectromechanical Systems (MEMS) gyro arrays in the IAE design. Recent advances in the construction of MEMS devices have made it possible to manufacture small and lightweight inertial sensors. However, because of their low accuracy, these devices have had limited application in tasks requiring high precision. In addition to the common methods focusing on the design and fabrication of the device itself, many other methods have been explored to enhance the MEMS device’s accuracy performance at the component level. Furthermore, the current research and development work on gyro performance improvement using MEMS arrays has mainly been applicable to single-axis MEMS arrays. For three-axis MEMS gyro arrays, the current methods rely on precise knowledge of the alignments among the various MEMS arrays. The purposes of this paper are: (1) to examine the AOCS subsystem-level performance (IAE estimate accuracy) using the three-axis MEMS gyro arrays; (2) to address the subsystem-level performance impact of array misalignments and component temperature-dependent errors; and (3) to investigate various data fusion methods to optimize the IAE performance. This paper will describe: the analytical method used to assess the GS IAE using the MEMS gyro arrays; the lab-tested MEMS gyro performance data (radiation-effect and temperature-effect data) from the InvenSense MPU-6000; the simulation results of various data fusion methods using a Matlab-based IAE simulation model; and our preliminary conclusions and recommendations for the micro-sat IAE design using MEMS gyro arrays.
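        The basic benefit of fusing an array of MEMS gyros, averaging uncorrelated rate noise down by roughly the square root of the array size once misalignments are compensated, can be shown in a short simulation; the noise level, misalignment angles, and array size below are assumptions, not MPU-6000 data.
```python
# Sketch of three-axis MEMS gyro array fusion: align each unit's measurement
# into the body frame and average, which reduces uncorrelated rate noise by
# roughly sqrt(N). Noise level, misalignments and array size are assumptions.
import numpy as np

def small_angle_dcm(rx, ry, rz):
    """First-order direction-cosine matrix for small misalignment angles (rad)."""
    return np.array([[1.0, -rz,  ry],
                     [ rz, 1.0, -rx],
                     [-ry,  rx, 1.0]])

def fuse(measurements, alignments):
    """Rotate each gyro's reading into the body frame and average."""
    return np.mean([C.T @ w for C, w in zip(alignments, measurements)], axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, sigma = 8, 0.05                                  # 8 gyros, 0.05 deg/s noise (assumed)
    true_rate = np.array([0.3, -0.1, 0.02])             # deg/s
    aligns = [small_angle_dcm(*rng.normal(0, 1e-3, 3)) for _ in range(n)]
    meas = [C @ true_rate + rng.normal(0, sigma, 3) for C in aligns]
    fused = fuse(meas, aligns)
    print("single-gyro error ~", np.linalg.norm(meas[0] - aligns[0] @ true_rate))
    print("fused error       ~", np.linalg.norm(fused - true_rate))
```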
      • 07.0803 Flight Performance Analysis of the CYGNSS microSatellites from On-orbit Telemetry Leena Singh (Lincoln Laboratory), Matthew Fritz (Charles Stark Draper Laboratory, Inc.) Presentation: Leena Singh - Tuesday, March 5th, 08:55 AM - Dunraven
        The Cyclone Global Navigation Satellite System (CYGNSS) microsatellites were designed to provide precise, reliable attitude determination and control (ADC) capability over long durations spanning all spacecraft flight phases including detumbling, Sun acquisition and hold, and Earth-frame fixed pointing during Normal operations. Normal modes of operation on these spacecraft include a Science mode and a Torque Equilibrium Attitude (TEA) mode, in both of which CYGNSS must hold a fixed attitude relative to the Earth-aligned, Local-Vertical-Local-Horizontal (LVLH) frame. To realise the desired ADCS capability, the spacecraft were equipped with a star tracker, 3-axis magnetometer, sun-sensors, reaction wheel triad assembly and magnetic torque rods. The solar current accumulated by a satellite's solar arrays was additionally used in lieu of sun-angle measurements to aid Sun-relative pointing in Sun-Hold mode. Previous publications had introduced the architecture, Concept of Operations, and hardware and software algorithms for CYGNSS' various ADCS modes. This paper will present the flight performance observed from on-orbit data collected over a year of on-orbit operations. In particular, this paper will present a root-cause analysis of an anomalous, unexpected, once-per-orbit pitch oscillation observed in all satellites during their Normal operation when they were required to hold a fixed LVLH-relative attitude. Telemetry downloaded during the early mission phases indicated that the attitude determination Kalman Filter, which included estimates of “disturbance” torque biases from unmodelled effects, showed (a) significantly higher-than-predicted magnitudes for these unmodelled disturbances and (b) marked correlation to the geomagnetic field measured by a satellite at its current ephemeris. Regression analysis of these torque residuals suggested the existence of a strong, nearly constant (for a given spacecraft) residual magnetic dipole, which, interacting with the instantaneous geomagnetic field, produces the disturbance torque effect. Introducing a similar disturbance dipole moment in our high-fidelity CYGNSS ADCS simulators perfectly reproduced, in magnitude and frequency, the sustained, once-per-revolution pitch oscillations demonstrated by CYGNSS on orbit. This spurious magnetic dipole is believed to trace to a wiring loop within the bus carrying the spacecraft's main power current. Countering the effect of this disturbance magnetic moment was the key challenge of the flight analysis activity. The paper will summarise the options the team identified for cancelling the disturbance torque effects. For CYGNSS recovery, this was performed by recomputing its control parameters, ensuring significantly higher low-frequency open-loop gain, sufficient to cancel the disturbance torques being estimated. (For reference, this disturbance torque had been identified as being over two orders of magnitude higher than the 3-sigma bounds normally allocated to on-orbit disturbances). Likewise, the estimator was tuned for a similarly higher ambient disturbance environment. The retuned ADCS parameters proved sufficient to enable the CYGNSS closed-loop pointing system to reject the disturbance torques, including those from onboard effects, and meet pointing requirements.
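        The regression step described above, fitting a constant residual dipole m to torque residuals tau ≈ m × B, is a linear least-squares problem because m × B = −[B]× m; a minimal sketch on synthetic data follows, with the dipole, field samples, and noise invented for illustration.
```python
# Sketch of estimating a constant residual magnetic dipole m from estimated
# disturbance-torque residuals tau ~ m x B: since m x B = -[B]_x m, stacking
# samples gives a linear least-squares problem. Field samples, dipole, and
# noise below are synthetic; this is not CYGNSS flight telemetry.
import numpy as np

def skew(v):
    return np.array([[0.0,  -v[2],  v[1]],
                     [v[2],   0.0, -v[0]],
                     [-v[1], v[0],   0.0]])

def estimate_dipole(b_samples, tau_samples):
    """Least-squares fit of m in tau = m x B over many samples."""
    A = np.vstack([-skew(b) for b in b_samples])      # tau = -[B]_x m
    y = np.concatenate(tau_samples)
    m, *_ = np.linalg.lstsq(A, y, rcond=None)
    return m

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    m_true = np.array([0.15, -0.05, 0.30])            # A*m^2 (assumed)
    B = rng.normal(0.0, 3e-5, size=(200, 3))          # ~30 uT field samples (synthetic)
    tau = [np.cross(m_true, b) + rng.normal(0, 1e-7, 3) for b in B]
    print("estimated dipole:", np.round(estimate_dipole(B, tau), 3))
```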
      • 07.0805 Fixed-time Attitude Control of Satellite Using Combined Magnetic and magneto-Coulombic Actuators Dipak Giri (), Subham Dey (Birla Institute Technology Mesra) Presentation: Dipak Giri - Tuesday, March 5th, 09:20 AM - Dunraven
        This paper presents a novel fixed-time attitude control for a satellite system actuated by hybrid actuators. The hybrid actuators considered in this paper are a combination of magnetic and magneto-Coulombic actuators which provide torques along the three body axes at every instant of time. The magneto-Coulombic torque is generated with the help of spherical charged shells which are placed along the different body axes. These charged shells interact with Earth's magnetic field to produce the Lorentz force, which in turn produces the required magneto-Coulombic torque for actuating the satellite. The magnetic torque, in contrast, is produced by the interaction of the current flowing through the magnetorquers with the Earth's magnetic field. However, either the magneto-Coulombic torque or the magnetic torque, if used independently for actuation, results in an under-actuation problem, because the magneto-Coulombic torque is constrained to a plane containing the local magnetic field and velocity vectors, and the magnetic torque lies in a plane perpendicular to the local magnetic field vector. This problem is addressed in this paper by combining magneto-Coulombic and magnetic actuators, which yields a fully actuated satellite system and thereby provides three-axis attitude control, i.e., full controllability at all times. The control formulation in this paper is derived from sliding mode control theory, a popular robust control approach. The sliding manifold considered in this paper is a non-singular terminal sliding manifold designed in such a way that it ensures finite-time convergence to the origin of the sliding surface. Finite-time stability of the satellite is proved using Lyapunov theory. The expression for the convergence time is derived using Lyapunov theory and is found to be independent of the initial conditions of the state variables when the satellite system dynamics is modeled in the state-space formulation. Because the convergence time is independent of the initial conditions of the state variables, the control algorithm is termed a fixed-time attitude control algorithm. Numerical simulations are used to validate the effectiveness of the fixed-time attitude control algorithm.
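        As a rough illustration of the sliding-variable structure such fixed-time designs typically use (the paper's actual surface, gains and exponents are not given in the abstract, so everything below is an assumed generic form):

          import numpy as np

          def sig(x, a):
              """Element-wise signed power: |x|^a * sign(x)."""
              return np.sign(x) * np.abs(x) ** a

          def fixed_time_sliding_variable(q_err, w_err, k1=1.0, k2=1.0, a1=0.6, a2=1.4):
              """Generic non-singular terminal sliding variable with two power terms.

              q_err : (3,) attitude error (e.g., vector part of the error quaternion)
              w_err : (3,) angular-velocity error [rad/s]
              Combining a sub-linear (a1 < 1) and a super-linear (a2 > 1) term is what
              typically yields a convergence-time bound independent of the initial state.
              """
              return w_err + k1 * sig(q_err, a1) + k2 * sig(q_err, a2)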
      • 07.0807 General Analysis and Optimal Solution for the Torque Capability of Control Moment Gyroscopes David Elliott (Cornell University), Mason Peck (Cornell University) Presentation: David Elliott - Tuesday, March 5th, 09:45 AM - Dunraven
        This paper presents metrics that represent the performance of control moment gyroscope (CMG) steering laws and provides an analytical solution for the gimbal angle set that maximizes these metrics. Specifically, the paper considers the torque capability of an array of CMGs, a common design criterion for steering laws in spacecraft momentum-control systems. The present study offers a new torque-capability metric, as well as a simplification of the metric that enables quick calculations and analysis for systems-engineering applications. The paper provides an analytical, closed-form solution that maximizes torque capability for arrays with parallel gimbal axes. The analytical solution defines an optimal gimbal-angle set for a given angular-momentum state that maximizes the torque capability of the CMG array. A combination of analytical and numerical methods is used to analyze the optimality of the solution. The optimal solution is proven analytically to be globally optimal, and numerical methods provide further support for this global optimality. The general analysis of performance metrics informs the design and evaluation of current and future steering laws for CMG arrays. The optimal solution and the analysis of performance metrics are applicable to any array with parallel gimbal axes and to other systems with similar kinematics, such as manipulators and continuum robotic systems. Therefore, the results presented here have many uses in the application of CMGs for spacecraft dynamics and control, as well as in the broader aerospace and robotics communities. The paper’s results can be used to increase the performance of CMG-actuated systems, such as spacecraft and rovers, without modifying hardware. The analytical optimal gimbal angle set is efficient to compute, and it can therefore be incorporated in real-time constraint-based steering laws, increasing the generality and performance of these steering laws.
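        As a simple numerical companion to the torque-capability discussion (the paper's actual metric and closed-form optimum are not reproduced here; the single-axis geometry and names below are illustrative assumptions), one common proxy for an array whose gimbal axes are all parallel is the smallest singular value of the CMG torque Jacobian:

          import numpy as np

          def cmg_jacobian(deltas, h=1.0):
              """Torque Jacobian for CMGs sharing a common (z) gimbal axis.

              Each CMG's angular momentum of magnitude h rotates in the x-y plane
              with gimbal angle delta; column i is dH_i/d(delta_i).
              """
              cols = [h * np.array([-np.sin(d), np.cos(d), 0.0]) for d in deltas]
              return np.column_stack(cols)

          def torque_capability_proxy(deltas, h=1.0):
              """Smallest singular value of the in-plane Jacobian: a simple proxy for
              the worst-direction torque available per unit gimbal rate."""
              A = cmg_jacobian(deltas, h)[:2, :]  # output torques lie in the gimbal-normal plane
              return np.linalg.svd(A, compute_uv=False).min()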
      • 07.0811 ESTCube-2 Attitude Determination and Control: Step towards Interplanetary CubeSats Ikechukwu Ofodile (University of Tartu), Hendrik Ehrpais (), Andris Slavinskis (Tartu Observatory/NASA Ames Research Center) Presentation: Ikechukwu Ofodile - Tuesday, March 5th, 10:10 AM - Dunraven
        Deployment of satellites to meet the requirements of various missions requires a reliable Attitude Determination and Control System (ADCS). In this paper, we present the robust design of the attitude determination and control system for the ESTCube-2 nanosatellite. The aim of the ADCS is to spin up the three-unit CubeSat to 360 deg/s about the X-axis (short axis) to deploy a 300 m tether used in a plasma brake experiment, and to provide accurate pointing for an Earth observation camera and a high-speed communication system. In addition to basic sensors and actuators commonly used on nanosatellites, the design includes a cold gas propulsion system and an inbuilt star tracker. To achieve the control requirements, a Lyapunov-based stability function and an optimal linear-quadratic regulator (LQR) controller are implemented. The attitude determination is handled by an Unscented Kalman Filter, which is deployed on a Cortex-M7 microcontroller. The ADCS will be tested and up-scaled for the ESTCube-3 mission, which is planned to be launched to lunar orbit.
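        For readers unfamiliar with the LQR step mentioned above, a minimal sketch of computing a continuous-time LQR gain for a linearized small-angle attitude model is shown below (the inertia matrix and weighting matrices are placeholders, not ESTCube-2 values):

          import numpy as np
          from scipy.linalg import solve_continuous_are

          # Linearized small-angle attitude model: state x = [theta (3), omega (3)]
          J = np.diag([0.03, 0.03, 0.005])              # assumed 3U-CubeSat-like inertia [kg m^2]
          A = np.block([[np.zeros((3, 3)), np.eye(3)],
                        [np.zeros((3, 3)), np.zeros((3, 3))]])
          B = np.vstack([np.zeros((3, 3)), np.linalg.inv(J)])

          Q = np.diag([10, 10, 10, 1, 1, 1])            # weights on pointing error vs. body rate
          R = 1e3 * np.eye(3)                           # penalty on control torque

          P = solve_continuous_are(A, B, Q, R)
          K = np.linalg.solve(R, B.T @ P)               # state-feedback law u = -K @ x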
      • 07.0813 Elimination of Parasitic Magnetic Interference over Magnetometer Data Onboard Nano-Satellites Paras Shah (Manipal Institute of Technology ), Vedant Dubey (Manipal University ) Presentation: Paras Shah - Tuesday, March 5th, 10:35 AM - Dunraven
        This paper explains and compares the different methods that could be used to eliminate the effect of a leakage magnetic field on the magnetometer sensor onboard a 2U-class nanosatellite. Many operations conducted in the Attitude Determination and Control System [ADCS] use values from the magnetometer, making it crucial to the control mechanism. The paper focuses on the magnetometer used onboard, specifically an Anisotropic Magneto-Resistive [AMR] magnetometer. The magnetometer experiences interference from various magnetically active components in the satellite, most of which can likely be eliminated by implementing a mathematical model for the calculated offsets. However, interference from specific sources, such as the permanent magnet of the reaction wheel's Brushless Direct Current [BLDC] motor used in the ADCS, is not constant. The paper shows, with a set of graphs, that the interference from the rotating permanent magnet is roughly sinusoidal. A detailed explanation is given of the methods adopted to quantify this time-varying interference, together with a mathematical method to compensate for the leakage magnetic field. A description of how a magnetorquer-based attitude control system could interfere with the magnetometer's data is also provided. The paper also includes a mechanical approach which could be used to reduce the flux leakage from the permanent magnet, serving as a viable alternative. The paper mainly focuses on an experimental analysis, applied over a theoretical mathematical model, which is validated by incorporating a filter over the magnetometer readings to determine accurate values of Earth's magnetic field in orbit. The paper describes how the coefficients required for the filter can be deduced experimentally, with the required test setup. For the filter to work, real-time onboard determination of the orientation of the permanent magnet in the BLDC motor is required for further calculations. A system using magnetorquers as one of its actuators requires an accurate supply of the live Earth's magnetic field from the onboard controller, along with the flux disturbances from the BLDC motor, for the effective conduct of its operation. On application of the filter described above, the magnetometer should be able to recover the Earth's magnetic field accurately enough. Furthermore, a critical analysis of the method is provided, along with a thorough analysis of failure modes that would have a considerable impact on the magnetic field readings. Finally, this paper converges on a method to eliminate the interference faced by the magnetometer.
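        A minimal sketch of the kind of rotor-synchronous compensation described above, assuming the BLDC rotor angle theta is known and the coefficients are fit on the ground in a static field (the function names and the sinusoid-plus-offset model are illustrative assumptions, not the paper's filter):

          import numpy as np

          def fit_rotor_interference(theta, B_meas):
              """Fit per-axis interference a*cos(theta) + b*sin(theta) + c from
              magnetometer samples B_meas (N, 3) taken at known rotor angles theta (N,)."""
              H = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
              coeffs, *_ = np.linalg.lstsq(H, B_meas, rcond=None)  # rows: a, b, c (each length 3)
              return coeffs

          def compensate(theta, B_meas, coeffs):
              """Subtract the rotor-synchronous component, keeping the constant ambient field."""
              a, b, _ = coeffs
              return B_meas - (np.outer(np.cos(theta), a) + np.outer(np.sin(theta), b))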
      • 07.0815 Adaptive Attitude Control Using Neural Network Observer Disturbance Compensation Technique Junhai Huo (Zhejiang University), Tao Meng () Presentation: Junhai Huo - Tuesday, March 5th, 11:00 AM - Dunraven
        In this paper, the problem of spacecraft attitude control using an adaptive neural network disturbance compensation technique is investigated. The suggested disturbance observer is developed based on a Radial Basis Function (RBF) neural network. First, the RBF neural network algorithm and the spacecraft dynamic model are given. Then, the RBF neural network observer is developed to estimate the external disturbance moment. Using the estimated information, an adaptive neural network disturbance compensation controller is designed. Meanwhile, the stability of the closed-loop attitude control system is analyzed using a Lyapunov approach. The simulation results show that, compared with a traditional PD controller, the developed control scheme reduces the effect of external disturbances and achieves better control performance.
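        A bare-bones sketch of an RBF-network disturbance estimator with an online weight update follows (the paper's network size, adaptation law, controller and stability proof are not reproduced; all parameters are placeholders):

          import numpy as np

          class RBFDisturbanceObserver:
              """Minimal RBF network mapping an observer input x (e.g., attitude and rate
              errors) to an estimated disturbance torque, with weights adapted online."""

              def __init__(self, centers, width=1.0, n_out=3, gamma=0.05):
                  self.c = centers                      # (n_hidden, n_in) RBF centers
                  self.b = width                        # common Gaussian width
                  self.W = np.zeros((centers.shape[0], n_out))
                  self.gamma = gamma                    # adaptation gain

              def phi(self, x):
                  d2 = np.sum((self.c - x) ** 2, axis=1)
                  return np.exp(-d2 / (2.0 * self.b ** 2))

              def estimate(self, x):
                  return self.W.T @ self.phi(x)         # disturbance estimate, shape (n_out,)

              def adapt(self, x, residual, dt):
                  # Simple gradient-style update driven by an observer residual
                  self.W += dt * self.gamma * np.outer(self.phi(x), residual)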
      • 07.0816 Attitude Tracking LPV Control for Spacecraft with Hybrid Actuators Bingyao Lei (Beihang University), Peng Shi (Beihang University), Hao Zhang (Chinese Academy of Sciences), Yushan Zhao (Beihang University) Presentation: Bingyao Lei - Tuesday, March 5th, 11:25 AM - Dunraven
        Attitude tracking control is a pivotal technique in several space missions, such as rendezvous and docking, space exploration, debris removal, and formation flight. Due to the nonlinearity and coupling of spacecraft attitude dynamics and kinematics, the high-precision control objective can hardly be achieved by conventional linearization methods, especially when more and more complicated conditions must be considered. In this paper, we focus on the condition in which actuator misalignment occurs. The common external disturbance problem and gain constraints are also taken into consideration. To systematically investigate this problem, we divide the research into three sections. The first section deduces the mathematical model of the control system. Unlike existing control schemes that directly introduce the nonlinear model, we take the Modified Rodrigues Parameters (MRPs) as the scheduling parameters and derive a quasi linear parameter varying (quasi-LPV) model based on the spacecraft relative attitude dynamics and kinematics. Furthermore, actuators composed of four pyramid-mounted single gimbal control moment gyros (SGCMGs) and three orthogonal reaction wheels are employed to achieve high-precision control with rapid response. In the second section, an H-infinity guaranteed cost control method based on LPV techniques is proposed to deal with the uncertainties caused by mounting error. To reduce conservativeness, a parameter-dependent Lyapunov function is introduced. The closed-loop system is asymptotically stable by the Lyapunov stability theorem. The introduction of a quadratic performance index helps guarantee control gain boundedness and robustness to uncertainty. Analogously, external disturbance attenuation is achieved by introducing an H-infinity performance index. With the bounded real lemma and the Schur complement lemma, the controller synthesis is converted into a performance optimization problem based on linear matrix inequalities (LMIs), which can be solved with the LMITOOL in MATLAB. The third section deals with steering law design. Considering the use of hybrid actuators, the smoothness of the torque output is investigated. To meet this requirement, we introduce a weighting function to distribute the torque command. To steer the SGCMGs, we employ a method proposed in the reference. By adjusting its constant parameters, the singularity-avoidance ability of the actuators can be regulated. To verify the proposed control scheme, we apply it to a realistic spacecraft attitude control problem. Because the scheduling parameter ranges vary with the attitude maneuver capability, we validate five different ranges corresponding to five rotational Euler angles. The results suggest that increasing the scheduling parameter ranges raises the performance indices. In the time-domain simulation, the results indicate that the control scheme provides high-precision tracking performance. According to the simulation results, the control objective can also be achieved when the scheduling parameter ranges are further expanded. Future work includes the application of the proposed technique to attitude coordination control.
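        The quasi-LPV step rests on the Modified Rodrigues Parameter kinematics; a small sketch of the kinematics matrix that supplies the scheduling dependence is shown below (the full LPV model, LMI synthesis and steering law are beyond an abstract-level sketch):

          import numpy as np

          def mrp_kinematics_matrix(sigma):
              """G(sigma) in sigma_dot = 0.25 * G(sigma) @ omega, with the MRPs sigma
              acting as the (quasi-LPV) scheduling variables."""
              s = np.asarray(sigma, dtype=float)
              S = np.array([[0.0, -s[2], s[1]],
                            [s[2], 0.0, -s[0]],
                            [-s[1], s[0], 0.0]])
              return (1.0 - s @ s) * np.eye(3) + 2.0 * S + 2.0 * np.outer(s, s)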
    • 07.09 Emerging Technologies for Space Applications Michael Mclelland (Southwest Research Institute) & William Jackson (Sierra Nevada Corp.)
      • 07.0901 High Performance Transmitters for Small Satellites for Data Transmission and Remote Sensing Naresh Deo () Presentation: Naresh Deo - Friday, March 8th, 08:55 AM - Amphitheatre
        Strong interest in the deployment of large constellations of small satellites, including CubeSats, has created the need for a new class of microwave transmitters for the transmission of data and/or radar signals from these satellites. The performance characteristics and operational features of these transmitters must comply with the constraints and requirements of the satellite’s payload. Most of these satellites are expected to generate massive amounts of data which need to be downlinked to their earth stations due to limited storage and processing power on board. Remote-sensing and earth-observation satellites among these SmallSats need low- to medium-power transmitters for their sensors, such as radars and interferometers. Transmitters for SmallSats must be compact, conformal to specific form factors, lightweight, highly efficient, capable of handling high data rates or waveforms, reliable in the space environment, modular in design, and affordable. Driven by these and many other operational factors, we have developed designs for solid-state transmitters (integrated modulators/upconverters and power amplifiers) for future SmallSats. In this paper we present a design concept that aims to achieve most of the desirable characteristics of the transmitter modules by exploiting many newly emerging technologies and manufacturing methods. Among these are: Gallium Nitride-based (GaN) solid-state power amplifiers; novel surface-mounted circuit technologies; novel materials and components; 3-D printing (additive manufacturing); miniaturized modular power supplies; and standardized electrical and mechanical interfaces. This design combines newly developed, highly efficient GaN Microwave Monolithic Integrated Circuits (MMICs) with compact and efficient power-combining techniques and application-specific upconverters and receivers. Furthermore, these transmitters can be mass-produced to reduce cost and meet market demands. A versatile upconverter has been designed to work with most transmitter frequency bands. Power amplifiers at some of the specific allocated downlink frequency bands have been designed for direct use with the universal upconverter presented here. This paper describes solid-state power amplifiers at the following frequency bands: K (17.2 to 21.5 GHz), Ka (25.5-27.5 GHz), deep-space Ka band (31-32 GHz), Q (38-39 GHz), V-band (58 GHz) and E/V/W band (71-76 GHz) for communications, and Ka band (35.75 GHz) and W-band (94 GHz) for radars and remote-sensing applications. Compactness and modularity are achieved by novel use of innovative components, materials and circuit-integration techniques. Additive manufacturing and other innovative methods further enhance the desirable attributes of the transmitters. Details of the design and performance of several such transmitters will be provided in the presentation.
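        As a back-of-the-envelope companion to the power-combining discussion (the numbers are illustrative assumptions, not measured results from these transmitter modules):

          import math

          def combined_output_dbm(p_mmic_dbm, n_devices, combiner_loss_db):
              """Output of n identical amplifiers combined coherently, less combiner insertion loss."""
              return p_mmic_dbm + 10.0 * math.log10(n_devices) - combiner_loss_db

          # Example: four 37 dBm (5 W) GaN MMICs with 0.8 dB combining loss -> about 42.2 dBm (~16.6 W)
          print(combined_output_dbm(37.0, 4, 0.8))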
      • 07.0902 On-board Wireless Communications for Spacecraft Test and Operations Norman Lay (Jet Propulsion Laboratory) Presentation: Norman Lay - Friday, March 8th, 09:20 AM - Amphitheatre
        This paper discusses recent activities at JPL that are focused on the development of wireless communications for data path connectivity between instruments and subsystems within the confines of a single spacecraft. Anticipated benefits of intra-spacecraft wireless links include reduction of cable mass, improved flexibility in spacecraft design or modifications and increased efficiencies during integration and test. The talk will describe the framework for this effort, plans for retiring key risks and progress to date. Three of the primary risks addressed under this effort are communications link reliability, scalability and electromagnetic compatibility. In this paper, we will discuss analysis and test methods used to investigate and mitigate each of these areas. In addition, we will describe a number of use cases for both operational and test applications that are under current investigation and development.
      • 07.0903 Introduction to Space Dogfighting -- What Matters in Space Engagements Edward Hanlon (United States Navy) Presentation: Edward Hanlon - Friday, March 8th, 09:45 AM - Amphitheatre
        While space is recognized as a “warfighting domain,” the principles of space combat remain underdeveloped. In particular, despite co-orbital anti-satellite systems offering a politically viable offensive space weapon, evasive maneuvering in space remains a nascent discipline. This paper explores potential tactical procedures that increase the difficulty of hostile rendezvous and therefore deter and prevent aggression. Co-orbital threats have several advantages over other anti-satellite systems: a co-orbital attack need not generate harmful orbital debris, is difficult to attribute, and is scalable to provide varying levels of damage. Co-orbital vehicles can be used for show-of-force operations to deter aggression and to hold other spacecraft at risk. Given these advantages, the co-orbital threat appears to be a likely attack method for space combat. Both current and future satellite systems must be equipped to defend against these threats. As many high-value spacecraft possess some form of maneuvering capability, evasive maneuvers are a viable option. A Simulink model was created to simulate co-orbital interactions and allow two spacecraft to ``dogfight.'' The results of these interactions were used to develop tactics to prevent rendezvous. These simulations revealed several “rules of thumb” that can help satellite operators protect their spacecraft. The first is the lack of correlation between relative position and ease of attack. This is markedly different from terrestrial warfare. Threats must be categorized based on the difficulty of rendezvous instead of range, which provides a more accurate estimate of threat likelihood and affords the defending spacecraft the opportunity to react appropriately. The second rule of thumb is that the best evasive direction depends on the expected intercept time. When thrust is applied in space, there are typically two effects: an immediate velocity change in the direction of the thrust and a secondary velocity change that results from the effect of the first on the shape of the orbit. For long time spans, in-track thrust proves to be the most efficient evasion technique; however, for engagements lasting less than 15% of the orbital period, radial thrust is more effective at preventing rendezvous. The ultimate goal of each engagement is survival. A large part of survival for spacecraft is fuel conservation; the defensive objective is to maintain separation from the pursuer while minimizing the defender's fuel consumption and maximizing the aggressor's consumption. A proposed four-step tactic accounts for these relationships and results in the defender using 30% to 50% less fuel than the aggressor while still evading. This paper outlines the development of these tactical procedures, developed as the result of recent research conducted at the Naval Postgraduate School, and advocates for further research and simulation on high-fidelity models. Additionally, it provides a parametric analysis of spacecraft engagement performance based on thruster power, sensor update frequency, and maneuver frequency to aid in the design of future systems.
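        The radial-versus-in-track comparison can be illustrated with the standard Clohessy-Wiltshire relative-motion solution (a sketch only; the paper's Simulink model, tactics and the 15% threshold come from its own simulations, not from this snippet):

          import numpy as np

          def cw_position(r0, v0, n, t):
              """In-plane Clohessy-Wiltshire solution (x radial, y along-track) for relative
              motion about a circular reference orbit with mean motion n [rad/s]."""
              s, c = np.sin(n * t), np.cos(n * t)
              x0, y0 = r0
              vx0, vy0 = v0
              x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
              y = 6 * (s - n * t) * x0 + y0 + (2 / n) * (c - 1) * vx0 + (1 / n) * (4 * s - 3 * n * t) * vy0
              return np.array([x, y])

          # Compare a 0.1 m/s radial vs. in-track evasive impulse over 10% of a ~95-minute orbit
          n = 2 * np.pi / 5700.0
          t = 0.10 * 5700.0
          radial  = cw_position((0.0, 0.0), (0.1, 0.0), n, t)
          intrack = cw_position((0.0, 0.0), (0.0, 0.1), n, t)
          print(np.linalg.norm(radial), np.linalg.norm(intrack))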
      • 07.0905 Streamlining High Altitude Ballooning Missions: From Payload, to Launch, to Flight Hunter Hall (NASA Jet Propulsion Lab), Rohan Daruwala (University of Wisconsin), Trey Fortmuller (UC Berkeley), Ethan Prober (University of Michigan), Samar Mathur (University of Houston), Kathryn Kwiecinski (University of Minnesota), Makena Fetzer (NASA Jet Propulsion Lab), Ariel Kohanim (JPL), Benjamin Donitz (University of Michigan), William Bensky (University of Southern California), Adrian Stoica (Jet Propulsion Laboratory) Presentation: Hunter Hall - Friday, March 8th, 10:10 AM - Amphitheatre
        Building upon the success of the United States National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory's (JPL) Innovation to Flight (i2F) 2017 Zephyrus Missions, the i2F team has continued making high-altitude balloon (HAB) flights more affordable, reusable, and easy to perform by users of all HAB experience levels. With the creation of the automatic HAB launcher, Talos, and the beginnings of an autonomous return-to-home (RTH) payload, the i2F team has taken what was once a 15-20 person job of launching a 5.44 kg (12 lb) payload on a 3000 g latex HAB and turned it into a single-person task. Furthermore, the time from arriving at the launch location to getting a Zephyrus payload airborne via Talos is still under one hour; the majority of that time is consumed by the inflation of the balloon. The i2F team has additionally overhauled the Zephyrus avionics and communications systems, which are now able to perform two-way communications with the ground station and live video streaming while supporting several experiments at once, all within a 1.2U CubeSat form factor. The team has additionally designed and built two custom directional antenna tracking units for use at JPL and in the Mojave during flights. With this system i2F can uplink and downlink data at a range of 160+ kilometers (~100 miles). For the 2018 campaign, the updated Zephyrus flight vehicle, the Talos automatic HAB launcher, and the long-range two-way telemetry system were designed, constructed, and tested within a span of 10 weeks and on a budget of less than $10,000.
    • 07.10 COTS Utilization for Reliable Space Applications Harald Schone (Jet Propulsion Laboratory) & Douglas Carssow (Naval Research Laboratory)
      • 07.1001 Analysis and Comparison of Calibration Techniques for COTS Sensors Onboard a Nanosatellite Shivika Singh (Manipal Institute of Technology ), Akshit Akhoury (Manipal Institute of Technology), Arun Ravi (MIT), Paras Shah (Manipal Institute of Technology ), Sushmita Gosavi (MANIPAL INSTITUTE OF TECHNOLOGY), Sahil Joshi (), Disha Gundecha (), Nishant Gavhane (Manipal Institute of technology) Presentation: Shivika Singh - Friday, March 8th, 10:35 AM - Amphitheatre
        This paper explains and compares the different methods that could be used to characterize and calibrate COTS sensors onboard a 2U-class nanosatellite. Sensor characterization and calibration form an integral part of the development of the Attitude Determination and Control System for a satellite, as any major error will lead to failure of attitude control and, consequently, payload failure. The paper focuses on the two major sensors onboard: the Anisotropic Magnetoresistance (AMR) magnetometer and a Micro-Electro-Mechanical Systems (MEMS) gyroscope. The paper explains, in brief, the design and working of the Wireless Sensor Network created to fetch and store raw values from the sensors. A detailed explanation is given of a mathematical model built on the various internal sources of error and external stimulants which affect the output of these sensors. The internal sources of error associated with the sensors include sensor offset, drift, intrinsic noise, cross-axis sensitivity, non-orthogonality and scale factor errors. The external stimulants include temperature and external random noise in the case of the gyroscope, and hard and soft iron effects and external noise in the case of the magnetometer. The paper explains how various mathematical tools, such as Allan variance plots, the Moore-Penrose inverse and linear regression, were used to build these mathematical error models. In contrast to mathematical modeling, an offboard neural network uses back-propagation to define a non-linear relationship between the raw sensor values and the actual values. The raw values obtained from the sensor are given as input to the neural network and, through back-propagation, the neural network is trained over numerous sets of sensor readings to achieve the maximum possible accuracy. A detailed account is given of the design and working of these networks for the calibration of the sensors and of the preference for an offboard-trained neural network over an onboard one. The calibrated values obtained from the application of the mathematical error model and the neural network are presented through a series of graphs. Further critical analyses of the plots were done to determine the best method to calibrate each sensor. This systematic calibration aids in improving the attitude estimation and control design of the satellite, further leading to enhanced control of payload actions and enabling low-cost COTS sensors to be used in aerospace applications.
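        As one concrete example of the least-squares/pseudo-inverse machinery mentioned above (a sketch under the simplifying assumption of a hard-iron offset only; the paper's full model also treats soft iron, scale factor, non-orthogonality and cross-axis terms):

          import numpy as np

          def hard_iron_offset(m):
              """Estimate a hard-iron offset b from raw magnetometer samples m (N, 3) by
              fitting the sphere |m - b| = R with the Moore-Penrose pseudo-inverse.

              Linearization: 2 m.b + (R^2 - |b|^2) = |m|^2, which is linear in [b, d]."""
              A = np.column_stack([2.0 * m, np.ones(len(m))])
              y = np.sum(m * m, axis=1)
              p = np.linalg.pinv(A) @ y
              b, d = p[:3], p[3]
              R = np.sqrt(d + b @ b)
              return b, R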
      • 07.1002 FLASHRAD: A Reliable 3D Rad Hard Flash Memory Cube Utilizing COTS for Space Applications Da Eun Shim (Georgia Tech), Amanvir Sidana (Samsung Austin Semiconductor), Jim Yamaguchi (Irvine Sensors Corporation), Christian Krutzik (Irvine Sensors Corporation), Sung Kyu Lim (Georgia Institute of Technology) Presentation: Da Eun Shim - Friday, March 8th, 11:00 AM - Amphitheatre
        Given the rapid growth in the number and scope of space missions, improving the computing capabilities of onboard spacecraft and memory systems is vital for future space missions. Currently, space missions are limited by memory capacity, as there is not enough onboard memory to store the copious amounts of data obtained in a single space mission. Onboard memory systems must also be capable of providing the necessary operational robustness and fault tolerance based on system and mission requirements. Some mission requirements may include data-intensive operations such as terrain navigation, hazard detection and avoidance, autonomous planning, and onboard science data processing. Thus, space missions require onboard memory that has high bandwidth, high capacity, and high reliability to securely store recorded data when exposed to space radiation. To meet the aforementioned necessities, various designs of memory cubes that make use of horizontal integration of memory dies have been proposed. However, these methods require high design effort and a long lead time. Alternatively, a loaf-of-bread (LOB) design using vertical integration of DDR3 SDRAM dies has recently been proposed. Straightforward access to each individual die from the memory controller is possible in the LOB design due to the vertical integration of the dies, which allows Commercial-Off-The-Shelf (COTS) dies to be used, reducing cost and lead time for designers. In this paper, we propose a method to effectively increase data storage for onboard memory while reducing the cost and effort that goes into design, by presenting a 3D memory cube design utilizing 24 COTS NAND flash dies in a LOB configuration. The design includes various features that increase the available data storage while considering hazards specific to space environments, such as errors from single event effects (SEE) or single event functional interrupt (SEFI) events. Currently, the preliminary RTL code is ready with support for NAND flash commands, error-correcting codes (ECC), and scrubbing. Features such as wear leveling, bad block management, data scrambling and a Serial RapidIO (SRIO) interface to further mitigate errors due to radiation effects in the space environment will be incorporated in the future. The functionality of the memory controller has been verified via simulation of the RTL code. Further validation and testing using an FPGA board are also underway to verify the design at this stage. Therefore, the proposed design addresses the need for increased memory storage while also allowing COTS dies to be used. This paves the way for reduced design effort as well as the incorporation of state-of-the-art memory dies in space missions.
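        To make the ECC-and-scrubbing idea concrete, a toy single-error-correcting Hamming(7,4) example follows (the FLASHRAD controller's actual code and scrub policy are not specified in the abstract; this is purely illustrative):

          import numpy as np

          # Systematic Hamming(7,4): 4 data bits, 3 parity bits, corrects any single bit flip
          G = np.array([[1, 0, 0, 0, 1, 1, 0],
                        [0, 1, 0, 0, 1, 0, 1],
                        [0, 0, 1, 0, 0, 1, 1],
                        [0, 0, 0, 1, 1, 1, 1]])
          H = np.array([[1, 1, 0, 1, 1, 0, 0],
                        [1, 0, 1, 1, 0, 1, 0],
                        [0, 1, 1, 1, 0, 0, 1]])

          def encode(data4):
              return (np.asarray(data4) @ G) % 2

          def scrub(word7):
              """Recompute the syndrome; if nonzero, flip the indicated bit (a 'scrub' pass)."""
              w = np.asarray(word7).copy()
              s = (H @ w) % 2
              if s.any():
                  pos = int(np.where((H.T == s).all(axis=1))[0][0])  # column of H matching the syndrome
                  w[pos] ^= 1
              return w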
      • 07.1003 Non-Radiation Tolerant COTS Power Converters in Low Earth Orbit Timothy Babich (Naval Research Laboratory ) Presentation: Timothy Babich - Friday, March 8th, 11:25 AM - Amphitheatre
        Commercial DC-DC power converters and filters were successfully used in the power supply of a Space Test Program (STP) mission on the International Space Station (ISS). The mission lifetime was designated for one year and the electronics have been operational in space since February of 2017. Resource limitations, particularly cost and time constraints, prompted the decision to use commercial, non-space-qualified components. The particular parts used were Vicor V110A28M400BS DC-DC power converters and FIAM110MS1 filters. This document summarizes the risks and challenges involved with using the commercial parts as well as the measures taken to mitigate these risks. The mission requirement was to convert a 120V bus to 28V DC power for STP-H5 science experiments. Baseline designs planned to utilize a secondary bus on the ISS to provide the 28V power. However, performance limitations of the secondary bus were later identified and prompted concern that the scientific objectives of the mission could be hindered. The best solution was to add separate 120V-to-28V DC-DC power converters to an electronics box already in development, thus eliminating dependence on the existing 28V bus. Conventional flight-qualified power supplies could not be obtained within the constraints of the mission. Several vendors were considered, but none could produce the power converters as quickly as needed. A custom-designed power supply was considered as well, but resources, packaging challenges, and lead times for individual EEE parts made this approach unviable. Several other organizations were also contacted to consider using parts that had been purchased as flight spares for other programs. It was determined that flight-qualified power converters could not be obtained for this mission and commercial parts became the only viable option. A radiation study was conducted based on the orbit, mission life, and thickness of the electronics box that would house the power converters and corresponding filters. A relatively small total dose was predicted and the decision was made to move forward with commercial parts. As the design moved forward, other studies and tests were conducted to mitigate risk as much as possible. Disclaimer: The Vicor Corporation does not support using their products in space or vacuum applications. Vicor products are designed and intended for aerospace applications within earth’s atmosphere up to altitudes of approximately 70,000 feet.
      • 07.1004 Evaluating Commercial Processors for Spaceflight with the Heterogeneous On-Orbit Processing Engine Tyler Lovelly (Air Force Research Laboratory), Jesse Mee (AFRL/RVSE), James Lyke (Space Vehicles Directorate), Andrew Pineda (Air Force Research Laboratory), Ken Bole (AFRL Space Vehicles), Robert Pugh (Think Strategically, LLC) Presentation: Tyler Lovelly - Friday, March 8th, 11:50 AM - Amphitheatre
        Legacy spacecraft were designed with the understanding that substantial levels of on-orbit processing were not feasible due to a lack of high-performance computing solutions that could survive in a harsh radiation environment and satisfy requirements for low size, weight, and power. Consequently, the majority of the data generated by spacecraft sensors is typically transmitted to ground stations for processing. Because data transmission rates are not keeping pace with the high data rates being generated by increasingly complex sensor technologies, future space system concepts can be enhanced by increasing on-orbit processing capabilities to reduce the amount of data that must be transmitted. Although massive investments have been made by the commercial integrated circuits industry to advance the state-of-the-art in computing technologies, radiation-hardened processor technology typically lags the commercial state-of-the-art by several technology generations. While current efforts to upgrade the state-of-the-art in radiation-hardened processors offer substantial improvement over their predecessors, they still lack the computational performance desired for many future space system concepts and lag significantly behind commercial products in terms of performance and power efficiency with much higher unit costs in the tens or hundreds of thousands of dollars. This presents challenges to low/moderate budget programs, particularly within the small-satellite community. Due to changes in the spectrum of risk tolerance, increasing numbers of programs are considering and using small satellites to perform their missions, leading to high interest in leveraging commercial electronics. This paper presents a novel concept to study the potential for a variety of state-of-the-art commercial processors, managed by a radiation-hardened supervisory processor, to operate reliably in a harsh radiation environment and to off-load the compute-intensive processing tasks from less capable satellites. The Heterogeneous On-Orbit Processing Engine is introduced as an on-orbit computing resource composed of a variety of state-of-the-art commercial architectures such as multi-core central processing units, graphics-processing units, and field-programmable gate arrays, where each architecture can be called upon depending on its advantages and limitations for the applications being supported. Additionally, the operation of each commercial processor can be continuously monitored, controlled, and examined to determine to what extent these terrestrial technologies can survive in the harsh radiation environment of space. These concepts and technologies will first be developed and analyzed as a flat-sat in a laboratory environment, then as a low Earth orbit spaceflight experiment supporting one or more networked satellites, and finally with planned follow-on experiments in other orbits.
  • 8 Spacecraft & Launch Vehicle Systems & Technologies Bret Drake (The Aerospace Corporation) & Robert Gershman (JPL)
    • 08.01 Human Exploration Beyond Low Earth Orbit Kevin Post (The Boeing Company) & Bret Drake (The Aerospace Corporation) & John Guidi (NASA)
      • 08.01 3.01 Keynote Presentation: - - Cheyenne
      • 08.0101 Summary of Gateway Power and Propulsion Element Studies David Irimies (NASA Glenn Research Center), David Manzella (NASA - Glenn Research Center), Timothy Ferlin () Presentation: David Irimies - Monday, March 4th, 08:55 AM - Jefferson
        NASA’s Power and Propulsion Element (PPE) is based on a joint industry/NASA demonstration of an advanced solar electric propulsion powered spacecraft to meet commercial and NASA objectives. The PPE can establish the initial presence in cislunar space for the Gateway through initial operations and the subsequent deployment of additional partner-provided elements for the cislunar platform. Five commercial vendors were selected to conduct PPE studies which addressed key drivers for PPE development and support for the Gateway concept formulation. The study vendors focused on performance trades and assessment of their strategic capabilities, leveraging their existing and planned capabilities for PPE development. The industry studies examined differences between prior Solar Electric Propulsion (SEP) mission concepts, expected industry capabilities, and potential needs supporting NASA’s Gateway concept. These studies provided data on commercial capabilities relevant to NASA’s exploration needs and reduced risk for a new, powerful, and efficient SEP-based PPE spacecraft.
      • 08.0102 The NASA SLS Exploration Upper Stage Development & Mission Opportunities Ben Donahue (Boeing Company) Presentation: Ben Donahue - Monday, March 4th, 09:20 AM - Jefferson
        The new NASA Exploration Upper Stage (EUS) will evolve the Space Launch System (SLS) to a significantly higher performance level than the current SLS Block 1 configuration. The large throw mass of the Block 1B provides a game-changing capability for the exploration of other worlds. By enabling larger margins in the design of exploration platforms and the ability to send multiple copies of atmospheric and surface probes, higher-resolution spatial and temporal data can be collected in a single mission. Mission risk can be reduced by increasing the redundancy of each individual system and of the architecture through the use of multiple copies of the same systems. The EUS will provide the SLS the capability of achieving greater human exploration, operations and science objectives for 2020-2040 era Beyond Earth Orbit (BEO) missions, including crewed cislunar missions in the mid-2020s, crewed lunar surface missions in the late 2020s and crewed Mars missions in the mid-2030s. The uprated SLS Block 1B configuration, with the new EUS, will launch, over the next 30 years, a variety of human spaceflight, science and astronomy missions, including emplacement missions for the Deep Space Gateway (DSG) platform. The DSG will support human spaceflight as an operations node and base for lunar surface missions. In this report the EUS and the Block 1B are described along with a variety of missions that take advantage of this larger, higher-performing stage. Missions that will utilize the EUS include the DSG emplacement and sustainment campaigns, with follow-on lunar surface exploration (robotic and crewed) featuring surface volatile prospecting and extraction. These missions would lead to later, mid-2030s crew missions to Mars and its moon Phobos. On a parallel track, the EUS would also push the frontiers of exploration to the outer planets, sending larger, more scientifically robust probes to destinations such as Europa, Enceladus, Uranus, Neptune/Triton and Pluto on trajectories with significantly shorter trip times than are currently possible with the present fleet of launch vehicles. The SLS EUS would also allow for a “single launch” Mars Sample Return (MSR) mission architecture that would simplify current MSR plans, which require three launchers and nine years. The SLS EUS system would also facilitate the launch of a very large monolithic-optic space telescope that could provide 300 times the resolution of the Hubble Space Telescope. Congress has funded a second Mobile Launch Platform (MLP). This new SLS architecture maintains two MLPs, establishing a standalone cargo launch capability and effectively deconflicting the SLS Block 1 and Block 1B launch schedules. This independent cargo launch capability enables launch of the Europa Clipper as early as 2022 and provides the infrastructure necessary to support ongoing science missions. A second MLP ensures that Block 1 cargo and Orion test missions are decoupled from the next-generation SLS Block 1B crewed launches. The information in this paper comes from analysis performed by the Boeing Company under its internal research and development (IRAD) tasks.
      • 08.0103 Lander and Cislunar Gateway Architecture Concepts for Lunar Exploration Xavier Simón (Boeing Company), Travis Moseman (Boeing Company), Matthew Duggan (The Boeing Company) Presentation: Xavier Simón - Monday, March 4th, 09:45 AM - Jefferson
        As NASA turns its near-term exploration focus back to the lunar surface, the Boeing team has investigated concepts for lunar surface systems to enable both scientific discovery and technology demonstration. In parallel, Boeing has studied methods to enhance surface exploration capabilities using the orbiting cislunar Gateway spacecraft. One objective during such a campaign is not to lose sight of the ultimate goal of Mars extensibility, including the flexibility to incorporate enabling technologies as they become available during the lunar exploration campaign, toward a safe and capable human exploration architecture, while at the same time maintaining a reasonable schedule and technical risk posture for near-term exploration goals. This paper describes a progressive series of increasingly capable lunar exploration missions, as well as the Gateway systems and operations concepts that support them. The series of missions begins with small and medium-size robotic landers with a multi-purpose role: providing a platform for scientific investigations and lunar surface scouting, and demonstrating technological and mission capabilities to reduce future mission risks. The Gateway supports these robotic missions in multiple ways: as a communication link, a telerobotics control platform, a lunar surface sample return hub, and a scientific lab for in-situ analysis of samples not destined for return to Earth in the Orion spacecraft. The series culminates in a human landing mission that supports a crew of four on a two-week (one lunar day) surface sortie, but even that mission is further extensible to longer-duration missions with the addition of separately delivered surface assets such as a pair of mobile habitats as described in the Global Exploration Roadmap. The human lander is optimized to take advantage of the cislunar Gateway architecture; the lander elements are delivered using the SLS cargo launch capability and the SLS Orion/cargo co-manifest capability. The human lander architecture capitalizes on a minimum-mass ascent module concept, which is supported by short descent/ascent phase durations and by allocating all habitation functions to the descent module. Recurring costs associated with repeated human landings at various lunar surface sites are reduced by making that minimum-mass ascent module reusable and refuelable at the Gateway. The paper also looks at specific lander technologies and discusses the appropriate opportunities to reuse heritage technologies and when to infuse next-generation technology demonstrations. It also explores potential trade studies and describes results of analysis and refinement of the 2017 Boeing human lander, including analysis of the Gateway in a Near Rectilinear Halo Orbit (NRHO) and lunar global access. The paper will show that use of commercial capabilities, the Space Launch System, Orion and the Gateway can significantly enhance a series of progressively complex lunar surface exploration missions, including human landings, and describe how the technologies, systems, and operations can be extensible to the future Mars exploration campaign.
      • 08.0104 NASA’s Exploration Mission Strategy Marshall Smith (NASA HQ) Presentation: Marshall Smith - Monday, March 4th, 10:10 AM - Jefferson
        NASA is embarking on the next step in human exploration by building systems that will allow humans to live and work in deep space to develop the skills necessary to take human presence into the solar system. The initial systems in development are the Space Launch System (SLS) vehicle, necessary to launch payloads to cislunar destinations and beyond; the Orion crew vehicle, which will serve as the spacecraft capable of transporting humans to these deep space locations; and the Gateway, a short-term habitation system to be built and operated in cislunar space. Supporting these systems are the ground and flight operations capabilities provided by the Exploration Ground Systems (EGS) at Cape Canaveral, Florida and the Flight Operations team in Houston, Texas. These systems will place the first toehold in deep space, where we will begin to learn how to live and work in these harsh environments. These systems will also enable the scientific and commercial development of deep space by supporting lunar landers, lunar and Mars sample return missions, as well as lunar habitation and surface operations. Together, NASA and its international partners will develop, deliver and operate these systems to permanently extend human presence in the lunar vicinity and into deep space. This paper will present an overview of the exploration mission strategy, the initial Orion and SLS test flights and the buildup plans for the Gateway in cislunar space. It will discuss the two flight test missions, Exploration Mission-1 (EM-1) and EM-2, in detail, giving an overview of the uncrewed EM-1 mission and the crewed EM-2 mission and describing the flight phases for each mission, from launch through spacecraft recovery operations. Looking beyond the initial test flights, this paper will also present the concept of operations for missions beyond EM-2, which will serve to assemble the Gateway and then use it for human and robotic exploration. It will also detail the maturation of SLS and Orion capabilities that will support the planned Gateway exploration missions.
      • 08.0105 NASA’s Space Launch System: Enabling a New Generation of Lunar Exploration Stephen Creech (NASA - Marshall Space Flight Center) Presentation: Stephen Creech - Monday, March 4th, 10:35 AM - Jefferson
        After two decades of operational experience in low-Earth orbit (LEO), NASA has turned its focus once again to deep space exploration. The Agency is building the Space Launch System (SLS) to take astronauts and cargo to the moon and send robotic spacecraft deep into the solar system. Offering unmatched performance, departure energy and payload capacity, SLS is designed to evolve into progressively more powerful configurations, enabling a new generation of human exploration of the moon in preparation for missions to Mars. The first build of the Block 1 vehicle is nearly complete for Exploration Mission-1 (EM-1), the first integrated flight of SLS and the Orion crew vehicle. EM-1 will send Orion to a distant retrograde lunar orbit in order to test and verify new systems, and 13 6U-class CubeSats will be released into deep space. As now planned, the first three missions on the SLS manifest utilize the Block 1 vehicle in crew and cargo configurations. Generating more than 8 million pounds of thrust at liftoff, Block 1 will lift at least 26 metric tons (t) to trans-lunar injection (TLI). Following EM-1, the two additional Block 1 missions are expected to be the first crewed mission, Exploration Mission-2 (EM-2), and a mission to launch a spacecraft to fly by Jupiter’s moon Europa, which is expected to utilize the cargo configuration. The greater departure energy that SLS provides will enable the Europa Clipper mission to take a direct trajectory to the Jovian system, arriving at the icy ocean world years sooner than would be possible if the mission were launched on an Evolved Expendable Launch Vehicle (EELV). A more powerful evolved vehicle, Block 1B, will provide the capability to lift 34-40 t to TLI, depending on crew or cargo configuration. The crew configuration will offer as much payload volume as the space shuttle to co-manifested payloads in a Universal Stage Adapter (USA). The cargo variant will potentially accommodate 8.4 m-diameter fairings in 62.7-foot (19.1 m) or 90-foot (27.4 m) lengths; other fairing sizes are being considered. As with the first mission, adding smallsat secondary payloads to ride along with primary and co-manifested payloads is a promising option. Leveraging a flight-proven, well-understood propulsion system, SLS’s flexible architecture, unmatched performance and expansive payload accommodations will open up exciting new mission possibilities in deep space. Habitat modules for NASA’s new Gateway lunar outpost, the next generation of robotic spacecraft, deep-space telescopes, probes to interstellar space and the return of astronauts to the moon are all possible with the super heavy-lift capabilities of SLS. The paper will discuss lessons learned in the manufacture of the first Block 1 build; capabilities of the new system; opportunities for primary, co-manifested and secondary payloads; and will look ahead to progressively more powerful vehicles and the capabilities they offer for a generation of exploration and discovery.
      • 08.0106 An Introduction to the Concept of a Deep Space Science Vessel Robert Howard (NASA Johnson Space Center) Presentation: Robert Howard - Monday, March 4th, 11:00 AM - Jefferson
        This study introduces the idea of a transit spacecraft whose primary crew mission is found within the transit as opposed to the destination. Prior to this study, Mars missions have generally been described in terms of a transit phase and a surface phase, with the idea that the crew is being transported from Earth to their destination. The transport is a necessary evil to get the crew from point to point; the real mission occurs when the crew is on the surface. But a Mars mission may span as many as 1100 days, with only a 500-day surface stay. The human need for meaningful work suggests this in-space period should also be a valid human exploration mission. In other words, it should be an interplanetary voyage that contains a surface excursion. Missions to other solar system destinations involve similar or greater transits, thereby establishing a reason to consider the idea of a Deep Space Science Vessel (DSSV). This study introduces the concept of a large, multi-purpose, multi-disciplinary science spacecraft intended for human space flight operations in the inner solar system at distances from 0.39 AU to 2.8 AU from the sun (Mercury to Ceres). The DSSV can operate independently and/or rendezvous with pre-deployed landers to deploy/recover crew or cargo for surface excursions or crew rotations. The DSSV is designed to support a wide range of life science, physical science, and engineering technology research, carry in-space excursion vehicles for excursions to asteroids, small moons, or other spacecraft, and support a crew size of forty-eight (48) personnel. The DSSV is designed for onboard repair with major overhauls and/or replacement of modules at pre-determined schedules. The DSSV contains six pressurized elements: two large modules derived from the SLS propellant tank production line and four node modules derived from the ISS nodes. The DSSV can host up to eighteen additional docked elements, including external airlocks, logistics modules, small excursion vehicles, and temporary visiting vehicles such as landers or capsules. As a mobile science outpost, the DSSV includes significant external payload capability, including observatories, remote sensing, and space environment science payloads. The DSSV pressurized elements are mated to a Power-Thermal-Propulsion (PTP) Module that provides nuclear fission power, heat rejection, and hybrid propulsion. This study will define the DSSV onboard crew functions and describe target performance levels for each function. It will then discuss the habitable volume of the spacecraft, providing a rough vehicle layout and high-level workstation descriptions. The primary vehicle subsystems will be described, including performance targets for each. Attached and typical visiting vehicles will be described, including their functions, heritage, and examples of mission-specific configurations. Mission concepts will be described for key missions possible within the vehicle’s operating range. Sample crew composition for the DSSV will be discussed, followed by implications of various acquisition models and further research opportunities. Finally, follow-on spacecraft will be described, providing context for human development of the inner solar system.
      • 08.0107 NASA’s Gateway: An Update on Progress and Plans for Extending Human Presence to Cislunar Space Jason Crusan (NASA - Headquarters), Marshall Smith (NASA HQ), Nicole Herrmann (Valador), Erin Mahoney (Stellar Solutions, Inc. ), Kandyce Goodliff (NASA) Presentation: Jason Crusan - Monday, March 4th, 11:25 AM - Jefferson
        As reflected in NASA's Exploration Campaign, the next step in human spaceflight is the development and deployment of a deep space Gateway -- a cislunar outpost to advance America’s human return to the surface of the Moon, and drive exploration and science activity in deep space. Together with the Space Launch System (SLS) and Orion, the Gateway is central to advancing and sustaining human space exploration goals, and is the unifying single stepping off point in our architecture for human cislunar operations, lunar surface access and missions to Mars. NASA will lead this next step and will serve as the integrator of the spaceflight capabilities and contributions of U.S. commercial partners and international partners to develop the Gateway. Through partnerships both domestic and international, the Gateway team will bring innovation and new approaches to the advancement of U.S. Government, industry and global spaceflight goals and objectives. The Gateway will be developed in a manner that will allow future capabilities and collaborations with U.S. Government, private sector companies, and international partners. The current Gateway concept distributes necessary functions across Gateway, including: power and propulsion (and communication), habitation/utilization, logistics resupply, airlock, and robotics. The functional goal is to develop an effective habitation/utilization capability comprised of pressurized volume(s) with integrated habitation systems and components, docking ports, environmental control and life support systems (ECLSS), avionics and control systems, radiation mitigation and monitoring, fire safety systems, autonomous capabilities, utilization, and crew health capabilities, including exercise equipment. Studies of the architecture trade space and potential Gateway configurations have revealed a baseline concept that can satisfy the aforementioned functions as well as achieve U.S. and international partner objectives. Through analysis, several Gateway configurations were identified that could meet these functions and objectives to varying degrees. This paper will provide an update on the Gateway configuration including U.S. and international element providers, status of acquisition activities, progress of the Power and Propulsion Element development; and updates on plans for Gateway utilization activity planning.
      • 08.0108 SLS, the Gateway, and a Lunar Outpost in the Early 2030s on the Way to Mars Terry Haws (Northrop Grumman Corporation), Mike Fuller (Orbital ATK) Presentation: Terry Haws - Monday, March 4th, 11:50 AM - Jefferson
        NASA is currently working towards the goal of landing humans on the surface of Mars and returning them safely, leading to the eventual establishment of a permanent human-tended outpost on the surface of Mars. It is pursuing this goal using an approach that has been dubbed the “Evolvable Mars Campaign” (EMC). A cornerstone of the Journey to Mars is the phased approach for exploration. The initial portion of the Journey to Mars has already begun. These missions are being carried out in low Earth orbit (LEO) at the International Space Station (ISS) and are researching the technologies and solutions that will be needed for deeper space missions. The next phase of the EMC begins in 2020 with the launch of the first Space Launch System (SLS) and Orion flight to the lunar vicinity. Future missions will continue to build up habitable infrastructure in cislunar space – the Lunar Orbital Platform (Gateway) – and test the embedded systems for reliability in that environment. Crewed missions to the Gateway will exercise those systems under realistic conditions that cannot be simulated either on Earth or in LEO, in preparation for future missions to Mars. After the Gateway has been established, the following missions further prove out the efficacy of the spacecraft systems in progressively longer-duration stays. Recently there has been a renewed call to return astronauts to the surface of the moon, both in the U.S. and from international partners, as an integral part of the Mars campaign. Human missions to the lunar surface would utilize the cislunar habitat as a staging point. Telerobotic exploration of the moon will provide experience in those operations for future Mars missions. Following the lunar missions, there would be further validation of deep space hardware, culminating in a “Mars Shakedown Cruise” beyond the Earth’s sphere of influence. Following the validation of deep space hardware, both on the lunar surface and in space, the initial human forays into the Martian system begin. The initial mission will be a human orbital mission with attendant systems to allow for the direct human exploration of the Martian moon Phobos, as well as the telerobotic exploration of the Martian surface, utilizing either prepositioned robotic assets or assets that are brought along with the crew. Future missions to the Martian system will involve the buildup of a permanent infrastructure on the Martian surface to enable long-term human exploration of the planet. This paper focuses on the crewed lunar missions – how they are enabled by the previous missions and how they enable the efforts to move on to Mars. This paper will present a reasonable lunar campaign, including timelines and components, as well as objectives and opportunities. The information in this paper comes from analysis performed by Northrop Grumman Innovation Systems under its internal efforts.
      • 08.0109 NASA’s Lunar Lander Strategy Greg Chavers (NASA / MSFC), Erin Mahoney (Stellar Solutions, Inc. ), Shanessa Jackson (Stellar Solutions) Presentation: Greg Chavers - Monday, March 4th, 04:30 PM - Jefferson
        In response to Space Policy Directive-1, NASA is working to establish U.S. preeminence on and around the Moon, starting with delivery of payloads to the surface and assembly of the Gateway in lunar orbit. Through a request for Commercial Lunar Payload Services (CLPS), NASA plans to purchase delivery services from multiple U.S. industry providers to place small but progressively larger payloads on the lunar surface. These deliveries could begin as soon as 2019 and will also be used to validate technologies needed for larger, human-class landers. Those larger landers (with extensibility to reusable, human-rated landers) will be developed through partnerships under contracts awarded through the Flexible Lunar Explorer (FLEX) Landers Broad Agency Announcement. This paper will discuss NASA’s strategy for establishing access to the lunar surface through two parallel paths -- one path for reusable, human-rated landers and one path for robotic, cargo landers -- resulting in increased activity on the lunar surface.
      • 08.0110 Opportunities and Challenges of a Common Habitat for Transit and Surface Operations Robert Howard (NASA Johnson Space Center) Presentation: Robert Howard - Monday, March 4th, 04:55 PM - Jefferson
        Since the dawn of human space flight there have been visions for human space flight programs that send humans to destinations beyond Earth for extended periods of time. In addition to LEO space stations, there has been documented interest in lunar surface outposts, deep space transit vehicles, Mars surface outposts, Phobos or Deimos outposts, and even a Venus skyship. NASA is currently committed to the lunar surface, Mars surface, and deep space transport. Unfortunately, instabilities in US space policy have caused NASA to change focus repeatedly among these three architectures since the 1970s. Further, it can be shown that developing lunar, Martian, and transit habitats in series will require inordinately long periods of time, resulting in exorbitant program expenses. A mitigation to these challenges would be to develop a common habitat for transit operations as well as surface operations in both lunar and Martian environments. The Skylab II concept is one means by which a common habitat can be developed. Derived from the SLS core stage’s liquid oxygen tank, the habitat’s pressure vessel is manufactured on the same production line as the SLS. As a common habitat, it provides functionality required for both microgravity transit and surface operations. It is currently an open trade as to whether it is more effective to configure the habitat interior with a horizontal or vertical orientation. The horizontal configuration divides the interior into three decks running perpendicular to the circular cross section. The vertical configuration divides the interior into four decks running parallel to the circular cross section, with two of those decks occupying the upper and lower domes. This paper will describe recent design work completed for both the horizontal and vertical configurations. Several unknowns and challenges remain to be resolved regarding whether subsystems can be designed such that a single design can operate in microgravity as well as lunar and Martian gravity. Additionally, given the large size of the habitat, several challenges must be overcome to integrate it with lunar or Martian landers, and if the habitat is to be offloaded and repositioned, a mechanism must exist to make this possible. A means to dock pressurized rovers and resupply logistics modules must also be developed. Finally, crew radiation exposure must be made acceptable for all three environments. This paper describes the open questions and current options related to these unknowns and challenges. The paper also considers the sensitivity of the habitat to variations in crew size. While Mars studies in the 1980s and 90s often considered crew sizes of six, the Constellation program and subsequent NASA work have focused on a crew size of four. This paper will discuss potential under-utilization that may result from a four-person crew and mitigations provided by an eight-person crew. It will also discuss options to accommodate this crew size in a common habitat and challenges the increased crew size will impose.
    • 08.02 Human Exploration Systems Technology Development Stephen Gaddis (NASA - Langley Research Center) & Jonette Stecklein (NASA - Johnson Space Center) & Andrew Petro (NASA - Headquarters)
      • 08.02 8.02 Keynote Presentation: - - Madison
      • 08.0204 Recent Advancements in Modeling and Simulation of Entry Systems at NASA Michael Barnhardt (NASA), Michael Wright (NASA Ames Research Center) Presentation: Michael Barnhardt - Wednesday, March 6th, 08:55 AM - Madison
        This paper describes recent development of modeling and simulation technologies for entry systems in support of NASA’s exploration missions. Technology development is organized and prioritized using a system-level perspective, resulting in four broad technical areas of investment: (1) Thermal protection material modeling, (2) Shock layer kinetics and radiation, (3) Computational and experimental aerosciences, and (4) Guidance, navigation, and control. The paper will highlight key contributions from each of these areas, their impacts from a spacecraft and mission design perspective, and discuss planned future investment. Aspects of each technical area are only briefly summarized here. Investments in thermal protection material modeling are geared toward predictive models capable of handling complex structures and with an eye toward optimizing design performance and quantifying system reliability. New computational tools have been developed to characterize material properties and behavior at the microstructural level, and new experimental techniques (molecular beam scattering, micro-computed tomography, among others) have been developed to measure material kinetics, morphology, and other parameters needed to inform and validate detailed simulations. Advancements have also been made in macrostructural simulation capability to enable 3-D system-scale calculations of material response with complex topological features, including differential recession of tile gaps. Research and development in the area of shock layer kinetics has focused on air and CO2-based atmospheres. Capacity and capability of the NASA Ames Electric Arc Shock Tube (EAST) have been expanded in recent years and analysis of resulting data has led to several improvements in kinetic models, while simultaneously reducing uncertainties associated with radiative heat transfer predictions for spacecraft. First-principles calculations of fundamental kinetic, thermodynamic, and transport data, along with state-specific models for non-equilibrium flow regimes, have also yielded new insights and have the potential to vastly improve model fidelity. Aerosciences is a very broad area of interest in entry systems, yet a number of important challenges are being addressed: Coupled fluid-structure simulations of parachute inflation and dynamics; Experimental and computational studies of vehicle dynamics; Multi-phase flow with dust particles to simulate augmentation of aerothermal environments at Mars during dust storms; Studies of roughness-induced heating augmentation relevant to tiled and woven thermal protection systems; and Advanced numerical methods to automate and optimize computational analyses for desired accuracy versus cost. Guidance and control in the context of entry systems has focused on development of methods for multi-axis control (i.e. pitch and yaw, rather than bank angle alone) of spacecraft during entry and descent. With precision landing requirements driven by Mars human exploration goals, recent efforts have yielded 6-DOF models for flap, shape change, and CG-movement control devices for inflatable decelerators, and propulsive descent of both inflatable and rigid ellipsled-like architectures. Results for both configurations have demonstrated the ability to land within the 50-meter precision requirement demanded by Mars human exploration missions, while also reducing propellant requirements by enabling more efficient control through entry and descent. 
Ongoing work is developing mechanical specifications for the systems and establishing engineering feasibility.
      • 08.0205 System Integration Comparison between Inflatable and Metallic Spacecraft Structures Gerard Valle (NASA/JSC), Douglas Litteken (), Thomas Jones (NASA - Langley Research Center), John Zipay (NASA - Johnson Space Center), Eric Christiansen (NASA Johnson Space Center) Presentation: Gerard Valle - Wednesday, March 6th, 09:20 AM - Madison
        Inflatable spacecraft structures are an alternative to traditional pressurized metallic structures that provide significant launch volume savings. A flexible primary structure, however, has a number of design and construction details that must be considered when moving from a metallic architecture to one based on softgoods. It is not only necessary to compare the structural mass and volume differences, but also to examine the overall system integration changes that are required to implement a large-scale inflatable spacecraft. This paper will provide a comparison between inflatable and traditional metallic spacecraft by reviewing the integration of sub-systems in each vehicle and identifying the key differences. A brief introduction on inflatable spacecraft modules will be provided that reviews the history of inflatables from initial concepts and ground testing, through the Bigelow Expandable Activity Module (BEAM) currently berthed on the ISS. The paper will briefly discuss the basic design details of an inflatable spacecraft structure including the inner liner, structural restraint layer, micrometeoroid and orbital debris protection layers, passive thermal protection layers, and atomic oxygen protection layer required for low Earth orbit (LEO). Ground integration and prelaunch considerations will be discussed along with a typical launch-to-deployment and setup scenario for an inflatable spacecraft. The design requirements and constraints from pre-integrated central core, folding and packaging, integrating launch support structure, loading and venting during ascent, launch vehicle on-orbit docking/berthing, extraction from launch vehicle, docking/berthing to space station or service module, deployment and checkout, on-orbit utilization, and end-of-life disposal shall also be examined. The comparison between traditional metallic spacecraft structures and inflatable spacecraft structures is the primary focus of the paper. Mass and volume comparisons will be provided for several spacecraft architectures capable of being launched on a variety of different launch vehicles. Benefits and risks shall be discussed in detail. A comparison of Environmental Control and Life Support Systems (ECLSS), power, avionics, additional secondary structure, and additional requirements associated with an inflatable module versus a metallic module launched from the same launch vehicle will be included. Habitable environment factors for crew members will also be discussed with a focus on potential radiation protection approaches, thermal requirements and performance, electromagnetic requirements, and micrometeoroid protection, which is a mass driver for habitats with large surface areas. A comparison between these requirements will be provided. Crew accommodation requirements including those for crew quarters, health care, exercise, galley, and stowage shall also be included and discussed. Finally, the paper will conclude with a discussion of current and potential future applications of inflatable spacecraft, from a full-scale inflatable module on ISS or self-supported in Low Earth Orbit (LEO), to a Gateway element in lunar or cis-lunar orbit, to deep space transportation habitats, and Lunar and Mars surface applications.
      • 08.0208 Self-Assembling Space Habitats: TESSERAE Technology and Mission Architecture for Zero-g Construction Ariel Ekblaw (MIT) Presentation: Ariel Ekblaw - Wednesday, March 6th, 09:45 AM - Madison
        Designs for tessellated, modular, and re-configurable space structures hold great promise for the evolving commercial space station market in LEO (Low Earth Orbit), for supporting lunar Gateway designs, and for facilitating the first manned Mars missions. We propose an extensible self-assembly paradigm for in-orbit space habitat construction, discuss mission architectures uniquely facilitated by this approach to habitat design, and present progress and results from a proof-of-concept prototype. This paper details our holistic habitat design and deployment planning around TESSERAE (Tessellated Electromagnet Space Structures for the Exploration of Reconfigurable, Adaptive Environments). This technology demonstration mission explores several parameters for a self-assembling system (mechanical assembly and actuation processes, ubiquitous sensing and feedback control, materials selection, etc.) and includes a multi-year research effort to engineer and deploy test structures. The first prototype was successfully tested on a zero-gravity flight in November 2017 and is now scheduled for a sub-orbital, enclosed launch in the fall of 2018 (in the progression towards scaled-down, internal ISS deployment and subsequent external orbit tests). Rather than relying on large, pre-fabricated volumes, a self-assembly approach divides the final structure into many constituent parts that join together autonomously under certain rules and constraints. To decide on the pattern of deconstruction, or definition of each constituent part, we use a mesh representation of the outer surface shell. This mesh representation breaks down the shell of the structure into regular geometric shapes. Our chosen mechanism for self-assembly relies on electromagnetic jointing between the geometric tiles, where each tile is augmented with embedded logic and autonomous tracking of which neighbor tile to bond to next. This approach relies on a pattern of magnet polarity along the bonding edges, on-demand polarity control, a mesh network sensor and communication architecture between tiles, and finely-tuned dihedral bonding angles (i.e., the slope between tile edges where they meet) that define the target surface topology (e.g., a buckyball, cylinder, torus, et cetera). A supervisory assembly algorithm mediates the use of the electromagnets for power budget conservation. Each TESSERAE unit is self-assembled from 32 tiles to form a buckminsterfullerene, or geodesic dome. These tiles pack flat for launch, greatly reducing the payload fairing volume required in comparison to the final expanded volume. We extend this self-assembly protocol to a constellation of TESSERAE buckyball units and discuss the resultant structure in a Mars mission context: MOSAIC (Mars Orbiting Self-Assembling Interlocking Chambers). This paper takes our proof-of-concept prototype (as described in our 2018 AIAA SciTech paper and IASS 2018 architectural analysis) and presents technical design considerations and mission architectures for use of the TESSERAE technology as a habitat for human exploration development. Notes: Author list: Ariel Ekblaw (as listed) and Joe Paradiso (PI, Responsive Environments research lab @ MIT). While initially submitted under 8.02 (Human Exploration Systems Technology Development), this paper will also address 8.06 (Mechanical Systems, Design, and Technologies) and 8.09 (Autonomy for Aerospace Applications).
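        As a reading aid only, the sketch below illustrates the kind of supervisory assembly logic the abstract describes: bond one edge at a time under a power budget, using on-demand polarity control. All names and data structures are assumptions for illustration, not the TESSERAE flight software.
        ```python
        # Illustrative sketch only (assumed callback names, not the flight code):
        # a supervisory loop that bonds one tile edge at a time so that only one
        # set of edge electromagnets draws power at once.

        def supervise_assembly(target_bonds, energize_edge, release_edge, is_latched):
            """target_bonds: list of (tile_id, edge_id, neighbor_id) from the target mesh.
            energize_edge / release_edge: callbacks that set edge polarity and switch
            the edge electromagnets on or off.
            is_latched: callback reporting whether the joint has mechanically captured."""
            for tile_id, edge_id, neighbor_id in target_bonds:
                energize_edge(tile_id, edge_id, neighbor_id)   # attract the prescribed neighbor
                while not is_latched(tile_id, edge_id):        # wait for stochastic capture
                    pass                                       # (real code would time out and retry)
                release_edge(tile_id, edge_id)                 # save power once the bond is made
        ```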
      • 08.0210 Booster Obsolescence and Life Extension of SLS Boosters David Griffin (Northrop Grumman Corporation), Terry Haws (Northrop Grumman Corporation), Mike Fuller (Orbital ATK), Mark Tobias (Orbital ATK) Presentation: David Griffin - Wednesday, March 6th, 10:10 AM - Madison
        Human exploration beyond low Earth orbit (BEO) has been a long-term goal of the United States and the international community since the end of the Apollo program. Congress chartered NASA to support this goal by developing deep space missions that will lead to placing humans on Mars. The current administration has emphasized the BEO mission with Space Policy Directive #1 directing NASA to return to the Moon followed by crewed missions to Mars. To achieve this goal, NASA has been developing the Orion crew capsule and Space Launch System (SLS) as key elements in the architecture designed to advance human spaceflight from our current capability in low Earth orbit to eventually landing humans on Mars. This aggressive goal is complicated by the need to optimize the use of precious public financial resources during a period when budgets are challenged. Defining and prioritizing the technology and hardware needed is crucial to achieving the goal of landing humans on Mars. SLS consists of a liquid oxygen/liquid hydrogen powered core with two strap-on solid rocket boosters designed and built by Northrop Grumman Innovation Systems (formerly Orbital ATK). The SLS boosters are based on the reusable/redesigned solid rocket motors (RSRM) designed and built for the Space Shuttle using many of the same technologies, including the heritage steel cases. Eight flight sets of steel cases were retained from the Shuttle Program for use on SLS launches. New cases will then be needed for the ninth and subsequent launches of SLS. Northrop Grumman Innovation Systems is currently studying Booster Obsolescence and Life Extension (BOLE) of the SLS boosters. The study is looking to replace obsolete materials and components, such as the steel cases and thrust vector control systems, and includes technologies to improve the life and performance of the boosters, including upgrades to the propellant and liner system. This paper will discuss ongoing BOLE efforts at Northrop Grumman Innovation Systems. It will cover the upgraded technologies that have been studied, as well as the resulting changes to performance, and will include analysis performed by Northrop Grumman Innovation Systems under its internal efforts and under contract to NASA.
      • 08.0211 Magnetic Shielding Technology Development in Context with the Space Policy Environment Kristine Ferrone (Aerospace Corporation) Presentation: Kristine Ferrone - Wednesday, March 6th, 10:35 AM - Madison
        The space radiation environment outside the protection of the Earth's magnetosphere is particularly harsh and difficult to shield against. The cumulative effective dose to astronauts on a typical Mars mission would likely exceed permissible limits for carcinogenesis without innovative strategies for radiation shielding. There are many potential options for advanced shielding and risk mitigation, but magnetic shielding using superconductors offers several distinct advantages, including using the vacuum of space to maintain the superconductor's critical temperature, low sustained power requirements, and low mass compared to passive shielding materials. Despite these advantages, the development of magnetic shielding technology has remained primarily in conceptual stages since the introduction of the idea in 1961. Upon review of studies on the application of active magnetic shielding for space radiation protection, we noticed a non-uniform distribution of publications on the topic over time. Here we compare the number of published magnetic shielding studies over the past 50+ years with the contemporaneous space policy environment and find a clear link between space policy trends and publications on magnetic shielding. We conclude that sustained, policy-independent support and funding are needed to complete the experimental and engineering studies of exploration-enabling technologies, particularly magnetic shielding using superconductors.
      • 08.0212 Artificial Gravity in Mars Orbit for Crew Acclimation Justin Rowe (Jacobs Engineering) Presentation: Justin Rowe - Wednesday, March 6th, 11:00 AM - Madison
        NASA’s current baseline plan for a crewed Mars mission anticipates a transit time of up to 300 days in microgravity and 3-14 days on the Martian surface for gravity acclimation before the crew can safely perform their first Extra-Vehicular Activity (EVA). While there are multiple options for how initial surface operations will be performed, all current designs involve acclimation on the surface and the impacts on the mission schedule, required supplies, and crew lander systems are significant. This paper proposes an alternative option utilizing artificial gravity, which offers benefits in terms of mission scope, mass savings, crew health, and long-term strategic vision. By moving the acclimation requirement to the orbiting habitat and using existing systems rather than adding systems to the lander that are redundant with surface architecture, the crew lander can be scaled to a much smaller, simpler, and lighter design. Rather than the lander being designed to support crew for days, it would be mere hours. While ambitious, the concept of pre-acclimation in orbit can be not only safe and feasible, but done with fairly minimal changes to the planned architecture and overall mass requirements. The data used draws on decades of established research and demonstrates how this capability can be not only used for pre-acclimation but also to support crew during early orbital-only missions, surface abort contingency scenarios, return-to-orbit abort scenarios, and as an early proof of capability into larger and more ambitious artificial gravity designs needed for extended exploration missions in the future.
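        For orientation, the spin rate a rotating habitat needs to provide Mars-equivalent gravity follows directly from a = ω²r. The sketch below is a back-of-envelope check with an assumed 50 m rotation radius, not a value taken from the paper.
        ```python
        # Back-of-envelope check (assumed radius, not from the paper): spin rate
        # for Mars-equivalent artificial gravity, using a = omega^2 * r.
        import math

        g_mars = 3.71          # m/s^2, Mars surface gravity
        radius = 50.0          # m, assumed rotation radius (tether or truss arm)

        omega = math.sqrt(g_mars / radius)        # rad/s
        rpm = omega * 60.0 / (2.0 * math.pi)
        print(f"{rpm:.2f} rpm for {g_mars} m/s^2 at r = {radius} m")   # ~2.6 rpm
        ```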
    • 08.03 Advanced Launch Vehicle Systems and Technologies Melissa Sampson (Ball Aerospace) & Jon Holladay (NASA)
      • 08.0301 Theoretical Design Aspects concerning Launch System Utilizing Electromagnetism for Missile Launch Chhavi Chhavi (Georgia Institute of Technology), Vikram Ramanan (IIT Madras) Presentation: Chhavi Chhavi - Wednesday, March 6th, 04:30 PM - Madison
        The present work is aimed at theoretically designing and optimizing an electromagnetic propulsion system to propel missiles to high velocities before secondary propulsion systems such as ramjets/scramjets can begin operation. The design incorporates the placement of a missile on a pod, with the missile-pod assembly being accelerated through a series of coils, akin to a coil gun. The whole of the propelled material is assumed to be ferromagnetic, and the constraints on the design pertain to the final velocity, which is comparable to the cruise velocity of a model intercontinental ballistic missile. To reduce the effects of aerodynamic drag, as well as to improve the inductive coupling, the acceleration of the missile is designed to take place in an evacuated annulus. The parameters optimized include the radius of the evacuated annulus, the magnitude of the current in the primary, the number of turns per coil, and the total number of coils. These parameters are estimated by solving geometrical relations and the laws of electromagnetism, combined with the laws of motion and Joule heating. Finally, a comparison is made with conventional chemical propulsion methods in terms of reducing the carbon footprint.
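        The sizing problem the abstract describes is fundamentally an energy budget: kinetic energy at handover to the secondary propulsion, divided by an electromagnetic conversion efficiency, with the remainder appearing as Joule and other losses. The numbers below are our own illustrative assumptions, not the authors' design values.
        ```python
        # Rough energy-budget sketch (assumptions ours, not the authors' model).
        mass = 1500.0        # kg, assumed missile + pod mass
        v_final = 1000.0     # m/s, assumed handover velocity to ramjet/scramjet operation
        efficiency = 0.30    # assumed electromagnetic-to-kinetic conversion efficiency

        kinetic_energy = 0.5 * mass * v_final**2          # J
        electrical_energy = kinetic_energy / efficiency   # J drawn from the coil supply
        joule_and_other_losses = electrical_energy - kinetic_energy

        print(f"KE = {kinetic_energy/1e6:.1f} MJ, "
              f"input = {electrical_energy/1e6:.1f} MJ, "
              f"losses = {joule_and_other_losses/1e6:.1f} MJ")
        ```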
      • 08.0303 SLS with Kick Stages for Outer Planet Science Missions Terry Haws (Northrop Grumman Corporation), Mike Fuller (Orbital ATK) Presentation: Terry Haws - Wednesday, March 6th, 04:55 PM - Madison
        NASA continues to explore the solar system, pushing the boundaries of our knowledge of the planets around us. This is true of the outer planets as well as those closer to Earth. Successful missions to the outer planets include Voyager 1 and 2, Galileo, Cassini-Huygens, and New Horizons. NASA is continuing to look at future outer planet science missions, including missions to Europa and the Uranus system. Additionally, NASA’s Outer Planets Assessment Group is interested in Ocean World missions, especially Europa, Titan, and Triton. Missions to the outer planets are challenging due to the distances and time involved, and the energy required to depart from Earth and journey away from the Sun. In order to generate the energy required, these missions often require a number of planetary flybys, adding complexity and mission time. Additionally, the energy required for these missions reduces the size of the spacecraft. Since science missions to high-energy destinations like the outer planets will continue, we need to find ways to throw larger spacecraft with shorter mission times. NASA is also currently working towards the goal of landing humans on the surface of Mars. The Space Launch System (SLS) and Orion are the first key pieces of that plan. Along with being the cornerstone of a Mars exploration campaign, SLS is also well suited to launching science missions to the outer planets. SLS Block 1 (with the interim cryogenic propulsion stage [ICPS]) can deliver about 1,500 kg directly to Jupiter (no other planetary flybys required). SLS Block 1B with the Exploration Upper Stage (EUS) improves that value to about 6,500 kg. Neither configuration of SLS has sufficient capability to deliver payload directly to Uranus. The payload SLS delivers for the outer planet missions can be improved by including a solid propellant motor as a kick stage as part of the payload. The kick stage augments the energy used to deliver the payload, thereby either decreasing mission time or increasing the size of the payload or both. SLS also has the advantage of an 8.4-meter diameter fairing, allowing sufficient volume to include a kick stage with the payload. Solid propellant motors have several advantages as in-space kick stages. Since the propellants are solid, there is no loss of propellant to boil-off, as with cryogenic stages. Northrop Grumman produces a number of flight-proven solid motors, making development risk low. They can be safely stored and require no fueling on the day of launch. Using existing, flight-proven motors, SLS Block 1 can deliver more than 1,000 kg directly to Uranus. SLS Block 1B can increase the payload to nearly 4,000 kg. This paper will analyze the performance of SLS Block 1 and Block 1B for high-energy outer planet missions. It will then analyze the increased performance from adding existing solid motor kick stages on top of SLS. The information in this paper will come from analysis performed by Northrop Grumman Innovation Systems under its internal efforts.
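        The benefit of a solid kick stage on top of SLS can be pictured with the standard Tsiolkovsky rocket equation: the extra delta-v it gives the separated payload. The motor and mass numbers in the sketch below are generic assumptions for illustration, not the specific flight-proven motors or payloads analyzed in the paper.
        ```python
        # Quick Tsiolkovsky sketch of the delta-v a solid kick stage adds.
        import math

        isp = 290.0            # s, assumed vacuum Isp of a solid kick motor
        g0 = 9.80665           # m/s^2, standard gravity
        payload = 1500.0       # kg, assumed spacecraft mass after kick-stage burnout
        prop = 4000.0          # kg, assumed kick-stage propellant load
        inert = 500.0          # kg, assumed kick-stage inert mass

        m0 = payload + inert + prop      # mass before the burn
        mf = payload + inert             # mass after the burn
        delta_v = isp * g0 * math.log(m0 / mf)
        print(f"kick stage adds ~{delta_v:.0f} m/s")   # ~3.1 km/s with these assumptions
        ```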
      • 08.0305 A Spatial Perspective on the Metallized Combustion Aspect of Rockets Aditya Virkar (SRM Institute of Science & Technology), Chitresh Prasad (SRM Institute of Science & Technology), Arvind Ramesh (SRM Institute of Science & Technology), Vinayak Malhotra (SRM University), Mohammed Nizami (GATE PATHSHALA EDUCATIONAL SERVICES LLP.), Karan Dholkaria (SRM Institute of Science & Technology) Presentation: Aditya Virkar - Wednesday, March 6th, 05:20 PM - Madison
        A solid propellant rocket is a rocket that makes use of a combination of a solid fuel and a solid oxidizer. Most conventional solid motor rockets consist of the main engine, along with multiple boosters, that assist in providing additional thrust to the rocket, which is especially crucial during the initial stage of the space-bound vehicle. Solid propellants were widely utilised in the earliest forms of rocket propulsion but were subsequently eclipsed by liquid and hybrid propellants due to their better performance characteristics. The addition of burn catalysts, such as iron oxide, can significantly enhance the performance of a solid propellant. A comprehensive understanding of the combustion behaviour of such catalyst-infused propellants is crucial in the design and development of better and more efficient solid rocket motors. This scientific investigation strives to emulate the working of a solid rocket using sparklers and energized candles, with a centrally-placed energized candle acting as the main engine, surrounded by sparklers substituting for boosters. It is a small-scale replication of the combustion process that takes place during actual rocket propulsion, undertaken to understand the energy conversions that occur between the boosters and the main engine. The energized candle is mainly made of paraffin wax, with tiny magnesium filings embedded in its wick, and the sparkler consists of a combination of different materials. The magnesium in the energized candle and the iron-aluminium combination in the sparkler act as catalysts and enhance the burn rate of both materials. It has already been found that the overall increment in the regression rate of the energized candle is 115% more than that of a normal candle. The primary performance parameter observed in this work is the regression rate of the energized candle. This is done by exploring various experimental parameters: interspace distance, number of sources, and orientation and configuration of sources (both linear and non-linear). This study is carried out to analyse the flame spread rate variations of the energized candles, resembling the solid rocket propellant used in first-stage rocket propulsion, as it has a remarkable effect on the specific impulse figures of rocket engines, which in turn has a deciding impact on time-of-flight. Another objective of this research venture is to determine the effectiveness of the key controlling parameters explored. This investigation also emulates the exhaust gas interactions of the solid rocket through concurrent ignition of the energized candle and sparklers, and their behaviour is analyzed. Hitherto, numerous efforts have been made to improve the overall efficiency of rockets through a focus on enhancing either thermal efficiency or propulsive efficiency. The main motivation of this study is to enhance rocket performance and to improve overall efficiency through better design and optimization techniques. The experiments carried out represent the simplification of a heterogeneous phenomenon. The knowledge gained would be very useful for future space propulsion applications pertaining to combustion and propulsive behaviour.
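        The primary performance parameter here, the linear regression rate, is simply consumed length over burn time. The sketch below shows the bookkeeping; the input values are placeholders chosen only so that the computed increment mirrors the ~115% figure quoted above, not measured data from the study.
        ```python
        # Minimal sketch of the regression-rate bookkeeping (placeholder values).
        burned_length_mm = 40.0       # assumed consumed length of the energized candle
        burn_time_s = 300.0           # assumed burn duration
        baseline_rate = 0.062         # mm/s, assumed regression rate of a normal candle

        rate = burned_length_mm / burn_time_s                 # mm/s
        increment_pct = 100.0 * (rate - baseline_rate) / baseline_rate
        print(f"regression rate = {rate:.3f} mm/s, +{increment_pct:.0f}% vs. baseline")
        ```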
    • 08.04 Human Factors & Performance Kevin Duda (The Charles Stark Draper Laboratory, Inc.) & Jessica Marquez (NASA Ames Research Center)
      • 08.04 8.04 Keynote Presentation: - - Gallatin
      • 08.0401 Does Workload and Sensory Modality Predict Pilots’ Localization Accuracy? Christopher Brill (U.S. Air Force Research Laboratory), Anthony Gibson (Air Force Research Laboratory), Ben Lawson (U.S. Army Aeromedical Research Laboratory), Angus Rupert () Presentation: Christopher Brill - Thursday, March 7th, 09:20 AM - Gallatin
        Abstract— The current paper expands on previous research that has examined the effectiveness of using multisensory cues to sustain pilot situation awareness (SA) and prevent spatial disorientation (SD) when navigating aerospace environments. Specifically, we investigated whether perceived workload, measured via the NASA Task Load Index (NASA-TLX; Hart & Staveland, 1998), correlates with objective, behavioral performance data. This addresses an important gap in the literature, as prior research investigating cue localization has focused mainly on the effects of cue modality type (e.g., auditory, tactile) on localization accuracy (Brill, Lawson, & Rupert, 2015; Brill, Rupert, & Lawson, 2015). We found that pilots’ ability to monitor current localization performance levels differed across three cue modalities (i.e., auditory, tactile, audiotactile cues). Pilots also showed differential relationships between objective performance and workload across these modalities. The experimental design included a cue modality within-subject variable in which all participants (N = 35) were exposed to the three cue modalities. Subjective workload was assessed for each modality directly after the participant completed the respective block and modality blocks were counterbalanced across participants. We investigated the relationship between workload scores and objective performance across cue modalities using three criteria: (a) percent correct for azimuth cues, (b) percent correct for elevation cues, and (c) percent correct overall. Pilots’ responses were coded as correct for the azimuth and elevation criteria if the responses fell within an absolute value of 15˚ of the azimuth or elevation stimuli locations, respectively. Responses were coded as correct overall if both azimuth and elevation responses were correct. A series of generalized linear mixed effects regression models showed significant interactions with cue modality for mental demand (p < .01), perceived performance (p < .05), effort (p < .05), and overall workload for all three criteria measures (p < .01). Perceived workload consistently showed a negative relationship with localization accuracy for auditory cues. Stated simply, pilots who reported that auditory cues yielded the highest workload also demonstrated the lowest performance. Tactile cues, in contrast, corresponded to consistent, positive relationships with workload criteria (i.e., higher reported workload related to increased performance). Overall, pilots rated localizing tactile cues as less demanding compared to auditory cues and predicted their performance levels more accurately. The relationship between workload and performance was unpredictable across multiple criteria for audiotactile cues. Thus, pilots who experienced increased workload did not necessarily demonstrate decreased performance compared to pilots who experienced lower levels of workload. Implications for these findings include both better understanding of pilots’ ability to monitor performance and whether these relationships remain constant across cue modalities. Pilots displayed better awareness of their own performance when provided tactile and audiotactile cues. This increased insight into performance should facilitate the adoption of effective strategies for maintaining situation awareness. These findings should be considered when developing multisensory cues for displaying flight-relevant information (e.g., spatial orientation, enemy target locations, and navigational cues) in future aerospace systems. 
(Full references available upon request)
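        The scoring rule described above (a response is correct on a criterion if it falls within 15 degrees, absolute, of the cued location) can be written down directly. The column layout and example trials below are illustrative assumptions; the study's own analysis then fits generalized linear mixed-effects models on top of scores like these.
        ```python
        # Sketch of the +/-15 degree correctness coding for azimuth, elevation, and overall.
        # Azimuth wrap-around (e.g. 350 vs 5 degrees) is ignored here for brevity.
        def score_trial(resp_az, resp_el, cue_az, cue_el, tol_deg=15.0):
            az_ok = abs(resp_az - cue_az) <= tol_deg
            el_ok = abs(resp_el - cue_el) <= tol_deg
            return az_ok, el_ok, (az_ok and el_ok)     # azimuth, elevation, overall

        trials = [(-10.0, 5.0, 0.0, 0.0), (30.0, 0.0, 0.0, 10.0)]   # (resp_az, resp_el, cue_az, cue_el)
        scores = [score_trial(*t) for t in trials]
        pct_correct_overall = 100.0 * sum(s[2] for s in scores) / len(scores)
        print(pct_correct_overall)   # 50.0 with these example trials
        ```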
      • 08.0402 Enabling Communication between Astronauts and Ground Teams for Space Exploration Missions Jessica Marquez (NASA Ames Research Center), Steven Hillenius (NASA), Jimin Zheng (NASA - Ames Research Center), Ivonne Deliz (ASRC/NASA Ames Research Center), Bob Kanefsky (), Jack Gale (NASA - Ames Research Center) Presentation: Jessica Marquez - Thursday, March 7th, 09:45 AM - Gallatin
        Historically, human spaceflight operations have only had to contend with at most a couple of seconds of communication latency between astronauts and ground support teams. A complex network of satellites in low Earth orbit (LEO) currently provides an almost constant high bandwidth connection between the crew onboard the International Space Station (ISS) and the Mission Control Center (MCC). Future exploration missions aim to venture further away from Earth. Beyond LEO, communication restrictions and latencies increase proportionally to the destination’s distance from Earth. This poses new communication challenges for which novel technological solutions are required to maintain team performance across latency and intermittent connectivity. This paper describes a multimedia chat interface, Playbook’s Mission Log, which has been used in various Earth analog missions where communication latency has been simulated. Among the diverse set of analogs that have used the Mission Log are Pavilion Lake Research Program (PLRP), NASA’s Extreme Environment Mission Operations (NEEMO), Human Exploration Research Analog (HERA), and Biologic Analog Science Associated with Lava Terrains (BASALT). In these analogs, the Mission Log has provided a means by which MCC and crew have continued to successfully communicate, even across various latencies. Beyond exchanging text messages, this software tool also supports multimedia data exchange, including images, video, and files - a capability that has shown to be of increased importance in operational settings with communication latency. Over the course of six years, we have observed analog mission crews organically use Playbook’s Mission Log for various mission communications. Today, the Mission Log is depended upon during periods of high latency. In order to better support these missions, we have incrementally designed and implemented novel features that support the unique obstacles present when communicating across various latencies. Finally, we enumerate the remaining design challenges and capabilities required to further enable and improve upon seamless communication between astronauts and ground teams for long duration space exploration missions.
      • 08.0403 Comparison of Photogrammetric and Laser Hand Scans to Manual Measurements for EVA Glove Fabrication Bonnie Dunbar (Texas A&M University), Patrick Chapates (Texas A&M University) Presentation: Bonnie Dunbar - Thursday, March 7th, 10:10 AM - Gallatin
        Spacesuits are critical to human survival and exploration outside of the Earth’s protective environment. A number of environmental variables must be considered in design, which vary among the working locations: LEO, Lunar Surface, or Mars Surface. Common to all the environments is the importance of a well-fitting suit, including gloves, in order to effectively and safely conduct EVA operations. During the Mercury, Gemini, Apollo, and Skylab programs, astronauts wore customized pressure suits and gloves. During the Shuttle era, astronauts were allocated to one of five general suit sizes: XS, S, M, L, and XL, which were eventually reduced to two or three sizes. Shuttle EVA gloves evolved from re-flown standard sizes to the customized gloves used now. Our objective is to perform research that will again lead to customized suits and gloves designed to improve performance and to ensure safety. In spite of the customization of current EVA gloves, astronauts on the International Space Station (ISS) are experiencing strength degradation of over 50% when compared to ungloved strength evaluations, and many are experiencing finger injuries. The lack of significant performance improvement, even with customization, is confounding. The cause of these injuries is still unknown. Considerable effort continues to be devoted to improving glove designs: to improve dexterity, to enhance mobility, to satisfy thermal/micro-meteoroid constraints, and to provide for grasp retention and force requirements while minimizing fatigue. Although many industries (e.g. aeronautics, automobile, and apparel) are moving towards Digital Human Modelling (DHM) in order to design and fabricate with Finite Element Analyses (FEA), customization of current gloves still begins with a manual measurement of each crew member’s hands in accordance with NASA Human Factors standards. As many as 21 measurements are made manually with measurement tapes and calipers. A molded cast is also made for the manufacture of the gloves. Manufacture of each glove is labor intensive, with hand fabrication and stitching of the multiple layers. As far as the authors can determine, no dynamic digital analysis has been made of different hand configurations to ensure good glove fit from finger extension to tool grasp. The TAMU Aerospace Human Systems Laboratory recently acquired a 3dMD 3D Motion Capture system customized to capture 20 seconds of moving hand images (10 frames/second) in order to develop stitched digital images which can be converted to FEA models. A Vitus laser scanning system has also been added. The objective of the paper is to report the results of comparing manual hand measurements with those produced through digital imaging from both systems. An FEA model of an imaged hand has also been created and converted to a 3D-printed hand. Comparison of the printed hands to both the manual measurements and the original digital scans will also be discussed.
      • 08.0408 Human-Machine Interactions in Apollo and Lessons Learned for Living off the Land on Mars George Lordos (Massachusetts Institute of Technology) Presentation: George Lordos - Thursday, March 7th, 10:35 AM - Gallatin
        Human-machine interactions underpinned the resilience of Project Apollo to unplanned disturbances and were a critical factor in its success. We briefly describe 19 Apollo-era case studies in human-machine interactions involving unplanned disturbances from the vantage points of four stages of the system lifecycle: conceive, design, implement and operate. Using the System Theoretic Process Analysis (STPA) method on a representative selection of these cases, we explored how emergent system behavior in Apollo was constrained at all times within boundaries consistent with mission success, and abstracted and generalized a key lesson learnt from each of the four stages. These were: (1) human and machine shall each do what they are best at; (2) clarity in the placement of the human-machine boundary enables realization of synergies from their interaction; (3) concurrent optimization and testing of functions of humans and machines contributes to successful validation of complex system implementation; and (4) the costly development of trust between humans and machines underpins operational resilience in the face of internal and external disturbances. We then adapted and applied these lessons to the conceptual design process for future human-robot teams tasked with in situ resource utilization (ISRU) on the Moon or Mars. We find that these Apollo learnings were transferable to Moon and Mars ISRU human-robot interactions because the objective of ‘living off the land’ on other worlds will be as new, as unproven, as mission-critical and as difficult as safely landing humans on the Moon and returning them to Earth was in 1969. We also show that these learnings from Apollo are especially impactful if applied at the earliest stages of a complex system lifecycle, i.e. during the ideation of concept fragments for the future human exploration of Mars. The methods presented here may be applied to systematically extract and transfer best practices in human-machine interaction between superficially dissimilar mission architectures.
    • 08.05 Space Human Physiology and Countermeasures Ana Diaz Artiles (Texas A&M University) & Andrew Abercromby (NASA Johnson Space Center)
      • 08.05 8.05 Keynote Presentation: - - Gallatin
      • 08.0501 Human Physiology and Countermeasures for Spaceflight from the Perspective of an ISS Astronaut Steve Swanson (Boise State University) Presentation: Steve Swanson - Thursday, March 7th, 11:25 AM - Gallatin
        NASA has developed countermeasures to reduce bone loss and muscle atrophy and to help keep astronauts in good cardiovascular condition while in the microgravity environment. While these countermeasures, such as the Advanced Resistive Exercise Device (ARED), the treadmill (T2), and the exercise bike (CEVIS), do help tremendously, there is still room for improvement. This paper will discuss physiological changes experienced by the author, not only from microgravity but also possibly from diet, and, correspondingly, how well the countermeasures worked for him and the issues he had with them, such as a weak core, leg muscle contractures, and back pain.
    • 08.06 Mechanical Systems, Design and Technologies Lisa May (Murphian Consulting LLC) & Alexander Eremenko (Jet Propulsion Laboratory)
      • 08.0601 Mechanical Design and Configuration of Penetrations for the Europa Clipper Avionics Vault Structure Nicholas Keyawa (Jet Propulsion Laboratory), Ali Bahraman (JPL), William Hatch (Jet Propulsion Laboratory), Katherine Dang (Jet Propulsion Laboratory), Lou Giersch (NASA Jet Propulsion Lab) Presentation: Nicholas Keyawa - Sunday, March 3th, 04:30 PM - Gallatin
        The main purpose of the Avionics Vault is to shield radiation sensitive electronics for the Europa Clipper Spacecraft. The vault is a box structure made out of aluminum panels. The panels are roughly 10 mm thick in order to shield the electronics from the orbital total ionizing radiation around Jupiter. The vault requires an electromagnetic interference (EMI) shielding effectiveness (SE) of at least 70 dB in order to mitigate EMI with the spacecraft radar receiver. Overall, the vault accommodates four main types of penetrations: receptacle connectors, pass-through cables, fluid lines, and vent holes. More than 150 cables penetrate the vault panels to connect to electronic boxes inside. Fluid pipes enter and exit the vault to transfer heat to the rest of the spacecraft. Vent holes provide a path for air to escape from the vault during launch. Several novel penetrations designs were created to meet EMI and radiation shielding requirements. Receptacle connectors interface to the vault panels using 1.3 mm thick Ta10W plates. Pass-through cables penetrate the vault using aluminum clamshells after being wrapped with Teflon cushion tape, Kapton tape, and copper tape. Vent hole penetrations consist of a copper mesh for EMI shielding and an aluminum radiation shield bracket to direct air out of the vault during launch. Fluid lines terminate at the vault wall using mechanical fittings that resemble a nut and bolt interface. In addition, most mechanical seams and penetrations utilize EMI gaskets to ensure proper EMI shielding. To reduce risk and confirm that the vault penetration designs were appropriate for EMI shielding, an EMI chamber at the Jet Propulsion Laboratory (JPL) was used to test a mock-up vault panel with multiple variations of all four main types of vault penetrations. This EMI SE test also incorporated different methods for bundling pass-through cables, and a comparison of flange mounted connectors versus jam nut connectors. A low noise preamplifier and a Rohde & Schwarz spectrum analyzer measured E-field levels transmitting through the mock-up vault panel. The results showed a shielding effectiveness of 77 dB for the mock-up vault panel, which exceeds the 70 dB target for Europa Clipper. Both the flange mounted connectors and jam nut connectors exhibited similar EMI SE results at the measured frequencies, and all variations of vault penetrations showed favorable EMI SE levels. Since the flight panels will be much larger and include many more penetrations, there will be testing of the flight vault to confirm its EMI SE is compliant with environmental requirements.
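        Shielding effectiveness is conventionally reported as SE = 20·log10(E_reference / E_shielded), so with field levels already in dB the SE is simply their difference. The field values below are assumed for illustration and are not the test data reported above.
        ```python
        # Shielding-effectiveness bookkeeping in the usual 20*log10 form (assumed levels).
        import math

        e_ref = 2.0e-1        # V/m, assumed field with no panel in place
        e_shielded = 2.8e-5   # V/m, assumed field measured behind the mock-up panel

        se_db = 20.0 * math.log10(e_ref / e_shielded)
        print(f"SE = {se_db:.1f} dB")    # ~77 dB with these assumed levels
        ```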
      • 08.0602 Solar Radiation Disturbance Torque Reduction for the Parker Solar Probe Observatory Juan Ruiz (Johns Hopkins University/Applied Physics Laboratory), Daniel Kelly (Johns Hopkins University/Applied Physics Laboratory), David Napolillo (Johns Hopkins University/Applied Physics Laboratory) Presentation: Juan Ruiz - Sunday, March 3th, 04:55 PM - Gallatin
        This paper examines the methodology used for reducing solar pressure disturbance torques for the Parker Solar Probe (PSP) Observatory by minimizing the offset between the spacecraft’s Center of Gravity (Cg) and Center of Pressure (Cp). The forces due to solar radiation pressure encountered by the PSP spacecraft, particularly at the 9.86-solar-radii (Rs) closest approach point in the orbit, are of sufficient magnitude to produce significant disturbance torques. Inside of 0.25 AU, the Observatory is required to keep its Thermal Protection System (TPS) pointed precisely towards the Sun in order to ensure its survivability. It was crucial to reduce disturbance torques encountered during this phase of flight to a low enough level that the guidance and control system of the spacecraft would be able to control attitude without requiring excessive or untimely propellant usage. We present the process used for proactively packaging a balanced spacecraft and analytically determining the spacecraft’s mass properties throughout the entirety of the mission, including associated Cg and inertia tensor changes due to propellant usage and movement of deployable hardware and mechanisms. We also present the process used for deriving the spacecraft’s Cp based on the geometry and optical properties of the hardware exposed to the full solar environment, as well as its shift due to degradation of those properties throughout the life of the mission. Using both of those data sets, we present the approach used to install ballast masses on the observatory in order to minimize the offset, as well as data collected during spacecraft mass properties testing concluded towards the end of PSP’s assembly, test, and launch operations campaign. Finally, we present test-correlated center of gravity and center of pressure data, and examine expected effects for the duration of the Parker Solar Probe mission.
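        An order-of-magnitude sketch of why the Cp-Cg offset matters: the solar-pressure force on the sunward area times the residual offset gives the disturbance torque the attitude control system must absorb. The area, reflectivity, and offset below are our assumptions, not PSP analysis values.
        ```python
        # Flat-plate, order-of-magnitude estimate of solar-pressure disturbance torque.
        C = 299_792_458.0          # m/s, speed of light
        FLUX_1AU = 1361.0          # W/m^2, solar constant
        r_au = 9.86 * 6.957e8 / 1.496e11     # 9.86 solar radii expressed in AU (~0.046 AU)

        flux = FLUX_1AU / r_au**2            # ~6.5e5 W/m^2 near closest approach
        area = 4.5                           # m^2, assumed sun-facing (TPS) area
        reflectivity = 0.3                   # assumed effective reflectivity
        offset = 0.01                        # m, assumed residual Cp-Cg offset

        force = flux / C * area * (1.0 + reflectivity)   # N
        torque = force * offset                          # N*m
        print(f"force ~ {force*1e3:.1f} mN, disturbance torque ~ {torque*1e3:.2f} mN*m")
        ```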
      • 08.0606 Geometry and Joint Systems for Lattice-Based Reconfigurable Space Structures Megan Ochalek (Massachusetts Institute of Technology), Kenneth Cheung (NASA - Ames Research Center), Greenfield Trinh (NASA - Ames Research Center), Olivia Formoso (NASA - Ames Research Center), Benjamin Jenett (), Christine Gregg (NASA Ames Research Center) Presentation: Megan Ochalek - Sunday, March 3th, 05:20 PM - Gallatin
        We describe analytical methods for the design of the discrete elements of ultralight lattice structures. This modular, building block strategy allows for relatively simple element manufacturing, as well as relatively simple robotic assembly of low mass density structures on orbit, with potential for disassembly and reassembly into highly varying and large structures. This method also results in a structure that is easily navigable by relatively small mobile robots. The geometry of the cell can allow for high packing efficiency to minimize wasted payload volume while maximizing structural performance and constructability. We describe the effect of geometry choices on the final system mechanical properties, manufacturability of the components, and automated robotic constructability of a final system. Geometry choices considered include building block complexity, symmetry of the unit cell, and effects of vertex, edge, and face connectivity of the unit cell. Mechanical properties considered include strength scaling, modulus scaling, and structural performance of the joint, including proof load, shear load, mass, and loading area; as well as validation and verification opportunities. Manufacturability metrics include cost and time, manufacturing method (COTS versus custom), and tolerances required. Automated constructibility metrics include local effects of loads imparted to the structure by the robot and assembly complexity, encompassing the ability of the robot to clamp and number of placement motions needed for assembly.
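        For orientation, the strength and modulus scaling mentioned above is commonly expressed in the Gibson-Ashby form for cellular solids; the exponents below are the textbook stretch- versus bending-dominated values, shown only as context, not the specific results derived in this paper.
        ```latex
        % Gibson--Ashby scaling for cellular/lattice materials (textbook form).
        \frac{E^{*}}{E_s} \propto \left(\frac{\rho^{*}}{\rho_s}\right)^{n}, \qquad
        \frac{\sigma^{*}}{\sigma_s} \propto \left(\frac{\rho^{*}}{\rho_s}\right)^{m},
        \qquad
        \begin{cases}
        n \approx 1,\; m \approx 1 & \text{stretch-dominated lattices,}\\
        n \approx 2,\; m \approx 1.5 & \text{bending-dominated lattices.}
        \end{cases}
        ```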
      • 08.0608 Membrane Deployment Technology Development at DLR for Solar Sails and Large-Scale Photovoltaics Tom Sproewitz (German Aerospace Center), Patric Seefeldt (German Aerospace Center - DLR), Jan Thimo Grundmann (DLR German Aerospace Center), Peter Spietz (DLR German Aerospace), Rico Jahnke (), Eugen Mikulz (), Thomas Renger (German Aerospace Center - DLR), Siebo Reershemius (German Aerospace Center - DLR), Kaname Sasaki (German Aerospace Center - DLR), Maciej Sznajder (German Aerospace Center - DLR), Norbert Toth (German Aerospace Center - DLR) Presentation: Tom Sproewitz - Sunday, March 3th, 09:00 PM - Gallatin
        Following the highly successful flight of the first interplanetary solar sail, JAXA’s IKAROS, with missions in the pipeline such as NASA’s NEA Scout nanospacecraft solar sail and JAXA’s Solar Power Sail solar-electric propelled mission to a Jupiter Trojan asteroid, and against the background of the ever-increasing power demand of GEO satellites, now including all-electric spacecraft, there is renewed interest in large lightweight structures in space. Among these, deployable membrane or ‘gossamer’ structures can provide very large functional area units for innovative space applications which can be stowed into the limited volumes of launch vehicle fairings as well as secondary payload launch slots, depending on the scale of the mission. Large area structures such as solar sails or high-power photovoltaic generators require a technology that allows controlled and safe deployment. Before employing such technology for a dedicated science or commercial mission, it is necessary to demonstrate its reliability, i.e., TRL 6 or higher. A reliable technology that enables controlled deployment was developed in the Gossamer-1 solar sail deployment demonstrator project of the German Aerospace Center, DLR, including verification of its functionality with various laboratory tests to qualify the hardware for a first demonstration in low Earth orbit. We provide an overview of the Gossamer-1 hardware development and qualification campaign. The design is based on a crossed boom configuration with triangular sail segments. Employing engineering models, all aspects of the deployment were tested under ambient environment. Several components were also subjected to environmental qualification testing. An innovative stowing and deployment strategy for a controlled deployment and the required mechanisms are described. The tests conducted provide insight into the deployment process and allow a mechanical characterization of this process, in particular the measurement of the deployment forces. The stowing and deployment strategy was verified by tests with an engineering qualification model of one out of four Gossamer-1 deployment units. According to a test-as-you-fly approach, the tests included vibration tests, venting, thermal-vacuum tests and ambient deployment. In these tests the deployment strategy proved to be suitable for a controlled deployment of gossamer spacecraft, and deployment on system level was demonstrated to be robust and controllable. The Gossamer-1 solar sail membranes were also equipped with small thin-film photovoltaic arrays to supply the core spacecraft. In our follow-on project GoSolAr, the focus is now entirely on deployment systems for huge thin-film photovoltaic arrays. Based on the Gossamer-1 experience, deployment technology and qualification strategies, new technologies for the integration of thin-film photovoltaics are being developed and qualified for a first in-orbit technology demonstration within five years. The main objective is the further development of deployment technology for a 25 m² gossamer solar power generator and a flexible photovoltaic membrane. GoSolAr enables a wider range of deployment concepts beyond solar sail optimized methods. It uses the S²TEP bus system developed at the Institute of Space Systems as part of the DLR satellite roadmap.
      • 08.0609 HP3 Instrument Support System Structure Development for the NASA/JPL Mars Mission InSight Tom Sproewitz (German Aerospace Center), Siebo Reershemius (German Aerospace Center - DLR), Kaname Sasaki (German Aerospace Center - DLR), Marco Scharringhausen (German Aerospace Center - DLR) Presentation: Tom Sproewitz - Sunday, March 3th, 09:25 PM - Gallatin
        On May 5, 2018, NASA JPL launched its mission to Mars called “InSight”. The main objective of this mission is to gain more knowledge about the evolution of terrestrial planets and to more precisely determine properties of the core, mantle and crust of Mars. Among the scientific instruments onboard the lander is HP3 (Heat Flow and Physical Properties Package), which was developed by the German Aerospace Center (DLR). It will be operated on the Martian surface to measure the heat flow through the Martian outer crust. It uses a hammering mechanism which will pull a tether approx. 5 m into the soil. The hammering device is equipped with foil heaters on the outer hull and the tether is equipped with temperature elements. Both are needed for the determination of the thermal conductivity of the surrounding regolith and the measurement of the temperature gradients in the ground. A separate system is needed to perform these activities on the surface: the “HP3 Support System”. Its main task is to ensure a stable, nearly perpendicular position of the hammering mechanism relative to the soil on the Martian surface before initial penetration. It furthermore houses the instruments for length measurement and serves as the electrical connection to the lander. The paper will give an overview of the development and the qualification of the structure of the Support System. It will focus on the mechanical design and the analysis of the structural dynamics, but in particular on the testing, which includes standard environmental testing as well as numerous development tests that are very mission-specific. The mechanical design of the Support System is mainly driven by a unique set of requirements derived from the working environment on Mars, the deployment from the lander deck and the mechanically separated operation on the surface. The instrument design will be explained to show which design elements were implemented to ensure proper functionality. Various development tests had to be performed during the Support System structure development. Besides the standard qualification tests, special tests were developed to show compliance of the instrument design with the requirements. Such tests include: separation tests from the lander deck in a cold environment under various tilt angles; tether deployment tests under various temperatures, foldings and routings; and feet sliding resistance tests on sand with different slopes. The paper will give an overview of all tests necessary for the Support System qualification and will describe the test setups and the results.
      • 08.0610 IRESA - Intelligent Redundant Spacecraft Actuator Florian Schummer (Technical University of Munich), Robin Roj (Forschungsgemeinschaft Werkzeuge und Werkstoffe Remscheid e.V.), Alexander Czechowicz (Kunststoffverarbeitung Hoffmann GmbH), Jakob Bachler (Technical University of Munich), Martin Langer (Technical University of Munich), Tejas Kale (Technical University of Munich), Rupert Amann (Technical University of Munich), Sven Langbein (Forschungsgemeinschaft Werkzeuge und Werkstoffe Remscheid e.V.), Peter Dueltgen (Forschungsgemeinschaft Werkzeuge und Werkstoffe Remscheid e.V.) Presentation: Florian Schummer - Sunday, March 3rd, 09:50 PM - Gallatin
        Over the past decade, the figures of merit for satellite volume and mass have increased dramatically, while the time to orbit has decreased in the same manner. New applications based on small satellites are drivers in this new era, where spacecraft development has to be both fast and reliable. Deployment and pointing mechanisms are one keystone for future commercial as well as scientific missions. Thus, future spacecraft actuators need to combine reliability with adaptability, while ideally being low-power and low-mass devices. IRESA (Intelligent, Redundant Spacecraft Actuator), currently under development at the Technical University of Munich in cooperation with the FGW Forschungsgemeinschaft für Werkzeuge und Werkstoffe e.V., is an actuator for pointing mechanisms on small satellites based on shape memory alloys (SMA). IRESA is based on the extrinsic two-way SMA effect. One of multiple parallel SMA wires is heated electrically, causing it to contract. The linear motion is then transformed into rotation for pointing or used directly in hold-down and release applications. The mechanism resets via a mechanical spring. In case of a broken or otherwise dysfunctional SMA wire, a parallel wire takes over. IRESA enables solar array alignment, pointing of payloads and antennas, and easily resettable hold-down and release mechanisms. The actuator's modular design helps to keep the qualification effort and project risks for small satellites at a minimum, while being adaptable in required accuracy, reliability, torque and range. Compared to its classical counterparts, IRESA offers a higher force-to-mass ratio and a simpler design with fewer components. This paper consists of three parts. In the first part, we present the concept for the modular actuator and its implementation in a first demonstrator. Performance values of the actuator, including mass, volume, power consumption and achievable torque, are given. In the second part, we present test results from more than five months of qualification testing, covering thermal vacuum performance, shaker and endurance tests. The third part focuses on a method to judge and predict fatigue of the SMA wire based on intrinsic measurements, enabling both preflight assessment of the actuator age and preventive maintenance during operations. Reliability and adaptability are two cornerstones for future small satellite missions. IRESA is meant to fulfill these goals while keeping complexity at a minimum through a modular design.
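        As a minimal illustration of the redundancy scheme described above (heat one of several parallel SMA wires and fall back to a neighbouring wire if it fails to contract), the Python sketch below simulates the failover logic. The thresholds, timeout and sensor interface are hypothetical and are not taken from the IRESA design.

```python
import time

CONTRACTION_THRESHOLD_MM = 1.0   # assumed minimum useful stroke
TIMEOUT_S = 2.0                  # assumed heating timeout

# Simulated wire health for the example: wire 0 broken, wire 1 healthy (hypothetical).
WIRE_OK = {0: False, 1: True}

def heat_and_measure(wire_id, heat_time_s):
    """Stand-in for 'heat the wire, then read the stroke sensor' (hypothetical)."""
    if not WIRE_OK[wire_id]:
        return 0.0                                    # a broken wire never contracts
    return min(1.5, 1.5 * heat_time_s / TIMEOUT_S)    # simple contraction ramp

def actuate_with_failover(wire_ids):
    """Try each parallel SMA wire in turn; return the first one that achieves stroke."""
    for wire in wire_ids:
        t0 = time.monotonic()
        while time.monotonic() - t0 < TIMEOUT_S:
            stroke = heat_and_measure(wire, time.monotonic() - t0)
            if stroke >= CONTRACTION_THRESHOLD_MM:
                return wire                           # wire is functional, keep using it
            time.sleep(0.05)
        # timeout reached: wire presumed dysfunctional, fall back to the next parallel wire
    return None                                       # all redundant wires failed

if __name__ == "__main__":
    print("active wire:", actuate_with_failover([0, 1]))
```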
    • 08.07 Spacecraft Propulsion and Power Systems Erica Deionno (The Aerospace Corporation) & John Brophy (Jet Propulsion Laboratory)
      • 08.0705 GoSolAr - DLR's Gossamer Solar Array Concept Using Flexible Thinfilm Photovoltaics Tom Sproewitz (German Aerospace Center), Jan Thimo Grundmann (DLR German Aerospace Center), Patric Seefeldt (German Aerospace Center - DLR), Hauke Martens (German Aerospace Center - DLR), Siebo Reershemius (German Aerospace Center - DLR), Nies Reininghaus (German Aerospace Center - DLR), Kaname Sasaki (German Aerospace Center - DLR), Peter Spietz (DLR German Aerospace), Maciej Sznajder (German Aerospace Center - DLR), Norbert Toth (German Aerospace Center - DLR) Presentation: Tom Sproewitz - Tuesday, March 5th, 10:35 AM - Madison
        The power demand for future satellite applications will continue to rise. Geostationary telecommunication satellites currently approach a power level of up to 20 kW. Future spacecraft will provide yet more transponders and/or direct mobile-satellite services. Electric propulsion is increasingly used for station keeping, attitude control and GEO circularization. Interplanetary missions already use kW-range electric propulsion. Space tugs are being studied for several fields of application; suitable engines require 100 kW or more. The envisaged use of such engines and the operation of future GEO satellites lead to a renewed interest in large, deployable and ultra-lightweight power generators in space. Within the GoSolAr (Gossamer Solar Array) activity, DLR is developing a new photovoltaic array technology for power generation. It is based on the DLR Gossamer approach using lightweight, deployable CFRP booms and a polymer membrane covered with thin-film CIGS photovoltaics. The booms are arranged in a crossed configuration with a central deployment unit. The photovoltaic area is composed of one large square membrane with double folding using two-dimensional deployment. Even though the efficiency of thin-film photovoltaics is currently only about 1/3 of that of conventional photovoltaics, a membrane-based array can already achieve better mass/power ratios. A 50 kW array requires an area of approximately 20 m x 20 m. In a first step, DLR is developing a fully functional 5 m x 5 m demonstrator partially covered with thin-film photovoltaics, using the DLR small satellite platform S2TEP. Space-compatible thin-film photovoltaics need to be selected and tested. They are integrated on standardized generator modules that will be assembled into a large, foldable and deployable membrane. A controlled deployment of structure and membrane, and a sufficiently stiff support structure for operation are key development topics. We present the conceptual design of the GoSolAr demonstrator, the main requirements, preliminary technical budgets and the development strategy. An overview will be given of the selection and maturity of the key technologies and subsystems, such as the deployable membrane with integrated photovoltaic generators; deployable CFRP booms including deployment mechanisms; photovoltaic cell selection and integration into generator units; the array harness concept; and the electronics concept for operation and photovoltaics characterization. Furthermore, an overview of the first manufactured breadboard models and their testing will be presented, e.g. combined testing of booms and mechanically representative generator arrays to evaluate deployment and interface forces for the preliminary design.
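        A quick back-of-the-envelope check of the figures quoted above (thin-film efficiency roughly one third of conventional cells, ~50 kW from a 20 m x 20 m array) is sketched below. The solar constant and the assumed 10% module efficiency are illustrative values, not numbers from the paper.

```python
# Back-of-envelope check (illustrative only; the efficiency value is assumed) of why a
# thin-film array of roughly 20 m x 20 m can deliver on the order of 50 kW.

SOLAR_CONSTANT_W_M2 = 1361.0   # approximate solar irradiance at 1 AU
THIN_FILM_EFF = 0.10           # assumed thin-film CIGS efficiency (~1/3 of ~30% cells)

area_m2 = 20.0 * 20.0
power_w = SOLAR_CONSTANT_W_M2 * THIN_FILM_EFF * area_m2
print(f"{area_m2:.0f} m^2 at {THIN_FILM_EFF:.0%} efficiency -> {power_w/1e3:.1f} kW")
# -> roughly 54 kW, consistent with the ~50 kW figure quoted above
```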
      • 08.0706 Design PV for a Small GEO Satellite and Studying the Effect of Using Different Types of Propulsion Ahmed Lotfy () Presentation: Ahmed Lotfy - Tuesday, March 5th, 11:00 AM - Madison
        This paper presents an optimum design of the solar Photo-Voltaic (PV) power system for small Geostationary Earth Orbit (GEO) satellites using triple-junction solar cells and advanced Lithium-Ion batteries. The paper applies the proposed system to various propulsion technologies: full chemical, full electrical and hybrid propulsion. This research work studies the capability to efficiently fulfill all the satellite power requirements during both the launch and on-station phases while addressing the high-cost challenge. Since the propulsion type is a key factor for the satellite cost, an economic analysis is demonstrated and investigated for two different strategies. The first scenario fixes the satellite weight and estimates the revenue due to the increase in the satellite payload. The second scenario evaluates the savings due to the reduction in the satellite weight using the same number of satellite transponders. The analytical comparison among the different propulsion techniques shows the superior advantages of using the full electrical satellites.
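        To make the battery side of such a design concrete, the sketch below sizes a Li-ion battery for the GEO eclipse season using a depth-of-discharge limit. The load power, DoD limit, efficiency and bus voltage are assumed values chosen for illustration only, not results from the paper.

```python
# Minimal battery-sizing sketch for a GEO eclipse (illustrative; all parameters below
# are assumptions, not values from the paper).

LOAD_W = 2000.0            # assumed bus load during eclipse
ECLIPSE_H = 72.0 / 60.0    # longest GEO eclipse is roughly 72 minutes
MAX_DOD = 0.8              # assumed allowable depth of discharge for Li-ion
BATT_EFF = 0.95            # assumed discharge efficiency
BUS_V = 50.0               # assumed bus voltage

energy_wh = LOAD_W * ECLIPSE_H                   # energy drawn during the eclipse
capacity_wh = energy_wh / (MAX_DOD * BATT_EFF)   # installed capacity needed
capacity_ah = capacity_wh / BUS_V
print(f"required battery capacity: {capacity_wh:.0f} Wh ({capacity_ah:.0f} Ah at {BUS_V:.0f} V)")
```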
      • 08.0711 Infrared Nanoantenna-Coupled Rectenna for Energy Harvesting Joshua Shank (Sandia National Laboratories), Paul Davids (Sandia National Laboratories), David Peters (Sandia National Laboratories) Presentation: Joshua Shank - Tuesday, March 5th, 11:25 AM - Madison
        Energy harvesting from relatively low-temperature heat sources is important in applications where long-term power sources are needed. Current solutions exhibit low efficiency, require exotic materials and structures, and need direct contact with the heat source. While the infrared rectenna currently has low efficiency, a path exists to high-efficiency solid-state devices. We have made a scalable design using standard CMOS processes, allowing for large-area fabrication. This would allow devices to be made at the wafer scale using existing fabrication technology. The rectenna has the advantage of using radiated power; thus it does not require direct contact with the hot source, but instead must only view the source. This will simplify packaging requirements and make a more robust system. The devices are monolithic and thus robust to adverse operating environments. Here we will discuss the rectenna’s physics of operation, particularly light coupling into the structure. Incoming light is coupled to a metal-oxide-semiconductor (MOS) tunnel diode via a broad-area nanoantenna. The nanoantenna consists of a subwavelength metal patterning that concentrates the light into the tunnel diode where the optical signal is rectified. Both the nanoantenna and tunnel diode are distributed devices utilizing the entire area of the surface. The nanoantenna also serves as one contact of the tunnel diode. This direct integration of the nanoantenna and diode overcomes the resistive loss limitations found in prior IR rectenna concepts that resembled microwave rectenna designs scaled down to the infrared size scale, which makes metal leads very small and lossy. We will show simulation and experimental results of fabricated devices. Simulations of the optical fields in the tunnel gap are illustrative of device operation and will be discussed. The measured infrared photocurrent is compared to simulated expectations. Far-field radiation power conversion is demonstrated using standard radiometric techniques and correlated with the rectified current response. We also show thermal modelling of the localized heat generation within the rectenna structure to demonstrate the lack of a thermoelectric response. Lastly, we discuss future directions of work to improve power conversion efficiency.
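        For a sense of the radiated power such a non-contact harvester could intercept, the following Stefan-Boltzmann estimate may help. The source temperature, emissivity and view factor are assumptions chosen for illustration; they are not values reported by the authors.

```python
# Illustrative estimate (not from the paper) of the net radiated flux available to a
# non-contact harvester viewing a low-temperature source, via the Stefan-Boltzmann law.

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
T_SOURCE_K = 500.0    # assumed source temperature
T_DEVICE_K = 300.0    # assumed rectenna temperature
EMISSIVITY = 0.9      # assumed source emissivity
VIEW_FACTOR = 0.5     # assumed geometric view factor

net_flux = EMISSIVITY * VIEW_FACTOR * SIGMA * (T_SOURCE_K**4 - T_DEVICE_K**4)
print(f"net radiated flux available: {net_flux:.0f} W/m^2")
# Even a few percent conversion efficiency on ~1.4 kW/m^2 would be a useful power density.
```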
    • 08.08 Nuclear Space Power Generation David Woerner (Jet Propulsion Laboratory) & June Zakrajsek (NASA - Glenn Research Center)
      • 08.0801 Americium Oxide Surrogate Studies: Pursuing the European Radioisotope Power Systems Fuel Form Emily Jane Watkinson (University of Leicester), Richard Ambrosi (University of Leicester), Jens Najorka (Natural History Museum), Daniel Freis (European Commission - JRC), Jean Francois Vigier (), Tim Tinsley (National Nuclear Laboratory), Mark Sarsfield (National Nuclear Laboartory), Keith Stephenson (European Space Agency) Presentation: Emily Jane Watkinson - Monday, March 4th, 08:30 AM - Dunraven
        The European Space Agency funded programme into the research and development of European radioisotope power systems (RPSs) began in 2008. Three RPS technologies are under development, namely radioisotope heater units, radioisotope thermoelectric generators, and Stirling generators. Americium (241Am) was selected as the ‘fuel’, which provides radiogenic heat to the RPSs. An essential aspect of the programme is the ability to create an americium oxide fuel form, namely discs or pellets, that meets a range of requirements, e.g. intact bodies with relative densities that allow for He-outgassing. Research with surrogates for americium oxides is essential for investigating the range of variables that influence the ability to achieve this whilst limiting the research with the highly radioactive material. In this study, americium oxide surrogates (e.g. Nd2O3) have been created using two different techniques (continuous oxalate precipitation and calcination, and sol-gel) with the objective of creating particles with differing morphology. Owing to the polymorphism of Nd2O3, high-temperature X-ray diffraction is conducted to assess crystal structure phase changes in the powder material to inform sintering studies. The surrogate powders are cold-pressed and sintered to assess the impact on pellet properties, e.g. density and integrity. The surrogate fuel study highlights the importance of assessing the impact of particle shape and crystal structure on the ability to meet fuel form requirements, and will inform future research with the americium oxide fuel.
      • 08.0802 Progress and Future Roadmap on 241Am Production for Use in Radioisotope Power Systems Tim Tinsley (National Nuclear Laboratory) Presentation: Tim Tinsley - Monday, March 4th, 08:55 AM - Dunraven
        Plutonium-238 has been used as a power source for spacecraft since the early days of space exploration. It has proven to be an effective source of power where the use of solar generated power is impractical. Historically, Europe has relied on collaborations with the USA or Russia to access these nuclear power sources. During 2009, the European Space Agency (ESA) funded a project to examine the cost and practicality of establishing a European source of material suitable for Radioisotope Power Systems (RPS) and concluded that 241Am was the most suitable choice for European based production. This takes what would otherwise be a waste material from the nuclear industry and uses it to power future science exploration missions in outer space. It is also very much cheaper than the development of a European supply of 238Pu, and the material has much greater availability, opening up the potential for many more missions. Work on 241Am, the preferred European alternative for use in future RPS, and on the issues that will need to be addressed, has continued with the development and underpinning of a conceptual flowsheet to be used for the production of 241Am. The National Nuclear Laboratory has assessed the feasibility and costs associated with installing within its existing facilities a European Radioisotope Production Facility to produce 241Am for use by the European Space Agency in radioisotope power systems for space missions. Work has also been completed on validating the flowsheet, along with the production of a quantity of separated 241Am for analysis. This has included using aged plutonium in NNL’s PuMA laboratory and the separation of 241Am from this material. Scale-up of the process to produce quantities of material suitable for use in RPS heater units is ready, housed within NNL’s Central Laboratory in an existing facility designed for plutonium active operations. The required product is americium oxide powder in a package suitable for temporary storage pending fabrication into RPSs. As part of a consortium, the National Nuclear Laboratory has also assessed the feasibility and design required for an Am2O3 fueled pellet that is consistent with conventional RTG and RHU configurations. With confirmation of the flowsheet performance, and the development of a costed design for a suitable production plant, the next phase of work will see prototypic pellets manufactured for testing, leading to a demonstration RPS heat system being produced. Provision of RPSs to future missions would bring significant benefit to the range of science that can be achieved in space exploration. The paper will outline the reasons behind the choice of 241Am, the development work that has taken place so far, and the expected route forward towards a flight-ready system.
      • 08.0803 Stirling Convertor Based 50-500W Radioisotope Power System Generator Study Joseph Vander Veer (Teledyne Energy Systems), Robert Sievers (Teledyne Energy Systems) Presentation: Joseph Vander Veer - Monday, March 4th, 09:20 AM - Dunraven
        Free-piston Stirling convertor based generators present a significant advantage over traditional radioisotope power systems (radioisotope thermoelectric generators): conversion efficiency. Several configurations are considered, ranging from ~50 We to ~500 We. Current dynamic systems have yet to prove themselves with respect to reliability. Therefore, a significant portion of the analysis focuses on the reliability of the configurations. As dynamic convertor reliability has yet to be determined, the generator reliability studies are expressed relative to convertor reliability. The reliability studies include the system controller, individual convertor controllers, and convertor redundancy. In addition to reliability, power, thermal efficiency, conversion efficiency, and weight are considered. The investigated configurations show that system-level efficiencies as high as 24% are possible.
      • 08.0805 Design and Development of the ESA Am-Fuelled Radioisotope Power Systems Alessandra Barco (University of Leicester), Richard Ambrosi (University of Leicester), Hugo Williams (University of Leicester), Tony Crawford (University of Leicester), Ramy Mesalam (The University of Leicester), Christopher Bicknell (University of Leicester Space Research Centre), Emily Jane Watkinson (University of Leicester), Keith Stephenson (European Space Agency), Alexander Godfrey (Lockheed Martin UK - Ampthill), Colin Stroud (lockheed martin uk ampthill), Marie Claire Perkinson (Airbus), Christopher Burgess (Airbus), Tim Tinsley (National Nuclear Laboratory) Presentation: Alessandra Barco - Monday, March 4th, 09:45 AM - Dunraven
        Radioisotope heater units (RHU) and radioisotope thermoelectric generators (RTG) are currently being developed for the ESA radioisotope power system programme. The state of the art for the USA and Russian systems is to use Plutonium-238 as the radioisotope fuel; however, for the ESA applications Americium-241 has been selected due to its availability and relatively cost-effective production in the European context. The proposed designs implement a multi-layer containment approach for safety reasons, with a platinum-rhodium alloy for the inner containment of the fuel and carbon-based materials for the outer layers. The Am-fuelled RHU provides 3 W of thermal power, which makes this design competitive with existing models in terms of specific power. The heat source for the RTG has a six-sided polygonal shape with a distributed three-fuel-pellet architecture: this configuration makes it possible to maximise the specific power of the RTG, since Am-based fuels have a lower power density than Pu-based fuels. The heat supplied by the fuel is 200 W, with an expected electrical power output of 10 W provided by six Bi-Te thermoelectric modules. Finite element structural and thermal analyses have been performed to assess the theoretical feasibility of the components as initially conceived. Mechanical and electrically-heated prototypes of the systems have already been tested in a representative lab environment at the University of Leicester; these tests have provided initial estimates of the efficiency of the systems. Both the RHU and RTG architectures are currently undergoing a new design iteration. This paper reports on the overall architecture and design of the Am-fuelled RTG and RHU, the modelling results and the experimental data obtained so far.
      • 08.0806 Effect of Martian and Titan Atmospheres on Carbon Components in the General Purpose Heat Source Chris Whiting (University of Dayton), Chadwick Barklay (University of Dayton Research Institute) Presentation: Chris Whiting - Monday, March 4th, 10:10 AM - Dunraven
        Radioisotope power systems (RPS) currently in use today are designed with a closed fuel cavity. Multiple proposed designs for future RPS have included the use of an open fuel cavity, which means that the fuel, and its associated hardware, will be exposed to gases that could be found outside the generator. Most missions that would utilize an RPS will take place in the vacuum of space, and for those missions the choice of an open or closed fuel cavity is inconsequential. A few missions, however, could take place at a site that has an atmosphere, such as Mars or Titan. In these cases, it is important to understand how the extraterrestrial atmosphere of these locations could impact the components within the RPS. This knowledge will then help RPS designers make informed decisions regarding the choice of an open or closed fuel cavity. One of the most important components within the fuel cavity of an RPS is the general purpose heat source (GPHS) module. The GPHS plays critical thermal, structural, and safety roles within the RPS. In this paper, we will examine the potential impact of a Martian or Titan atmosphere on the GPHS in the fuel cavity. First, thermodynamic chemical modeling studies were performed. These studies indicated that nearly all of the Martian atmosphere would be able to react with and erode the GPHS carbon. Considering the very low pressure of the Martian atmosphere, however, it is recommended that reaction rate studies be performed on GPHS carbon to determine whether the erosion will be significant over the life of the RPS. Modeling studies of Titan indicated that there are no predicted chemical reactions between the Titan atmosphere and the GPHS. It was noted, however, that components of the Titan atmosphere could decompose to form solid carbon and ammonia. While these products are not expected to be a problem for the GPHS, which is the focus of this study, they could create significant issues for other materials in the RPS. It is therefore recommended that any open fuel cavity designs consider the impact that solid carbon and ammonia could have on the whole RPS. Initial reaction rate studies were performed between a simulated Martian atmosphere and a carbon-carbon composite material that is a surrogate for GPHS carbon. It was interesting to note that there was no measurable erosion in the sample after 72 h at 700 °C. While this preliminary result is encouraging, it is not possible to provide a recommendation at this point regarding the use of an open fuel cavity on Mars. Additional studies will be required to evaluate the degree of erosion over much longer times and at much higher temperatures. In addition to studying the erosion of the GPHS carbon, it is recommended that future studies also investigate changes in other GPHS carbon properties, including thermal conductivity and mechanical strength.
      • 08.0807 Nuclear Considerations for the Application of Lanthanum Telluride in Future RPS Systems Michael Smith (Oak Ridge National Laboratory), Chadwick Barklay (University of Dayton Research Institute), Chris Whiting (University of Dayton) Presentation: Michael Smith - Monday, March 4th, 10:35 AM - Dunraven
        Lanthanum telluride (La3Te4) is an n-type high-performance thermoelectric material, and the continued development of La3Te4 by the NASA Jet Propulsion Laboratory has made it a top candidate for future radioisotope power systems (RPS). Thermoelectric based RPS units, produced by the United States, convert the heat released by the decay of plutonium dioxide (238PuO2) into electricity by the Seebeck effect. Also associated with the decay of 238PuO2 is the generation of neutrons from spontaneous fission and alpha-n (α,n) reactions. A portion of these neutrons will interact with the telluride based thermoelectric materials and induce trace amounts of transmutation reactions in various tellurium isotopes. Any transmuted, radioactive atoms will subsequently decay to produce isotopes of iodine, some of which are radioactive, and some not. The non-radioactive iodine isotopes could accumulate over time to amounts significant enough to raise chemical concerns. Although iodine is classified as a halogen, it is the least reactive of the halogens as well as the most electropositive, meaning it tends to lose electrons and form positive ions during chemical reactions. Iodine will easily react with metals to produce a wide variety of salts. This behavior could affect the performance of segmented couple-level architectures that employ La3Te4. In this type of architecture, several segments of different thermoelectric materials are joined to increase the average thermoelectric figure of merit of the leg over a relatively large temperature gradient. It is plausible that sophisticated bonding/metallization layers could be required to join the segment interfaces, and the segmented thermocouple legs to the cold- and hot-shoe materials. The long-term stability and performance of these segmented material combinations could degrade as a result of the potential formation and reactions of iodide metal compounds at the segment interfaces. This paper investigates to what degree this process may threaten current La3Te4 thermoelectric technologies, calculates the amount of iodine that could be generated over the operational life of a radioisotope thermoelectric generator (RTG) design, and discusses potential effects of the resulting iodine’s chemical reactions in a segmented couple-level architecture that contains La3Te4.
      • 08.0809 Safety Studies for the ESA Space Nuclear Power Systems: Accident Modelling and Analysis Alessandra Barco (University of Leicester), Richard Ambrosi (University of Leicester), Keith Stephenson (European Space Agency) Presentation: Alessandra Barco - Monday, March 4th, 11:00 AM - Dunraven
        Within the framework of the ESA radioisotope power system (RPS) programme, the University of Leicester is currently developing radioisotope heater unit (RHU) and radioisotope thermoelectric generator (RTG) systems for future space missions, with Americium-241 as the radioactive fuel. An important aspect of the overall programme is safety, and this involves ensuring that the design of these systems, in particular of the heat source (i.e. fuel and containment layers), meets a set of stringent requirements: it is paramount that both the RTG and RHU always remain intact, in order to avoid inadvertently releasing radioactive material into the environment in the event of an accident. The inner containment, or cladding, made of a platinum-rhodium alloy, is the first line of defence surrounding the Americium-based fuel; additional layers of carbon-based insulators and carbon-carbon composites for the aeroshell ensure that the heat source can survive all possible accident conditions, from launch failures to Earth re-entry. Validated heat source accident models are necessary to inform the design iteration of the RHU and RTG heat sources, and to construct a safety case for their launch. The goal of the activity described here, performed in collaboration with ArianeGroup in France and ESA, is to start the process of understanding the behaviour of the fuel containment systems under the most relevant accident conditions by computer modelling, to validate the models experimentally using the infrastructure, test means and expertise of ArianeGroup in this field, and to characterise the different materials at ESTEC. The data obtained will help to iterate and improve the design of the European RPS heat sources by focusing on the fuel containment.
      • 08.0810 Small Stirling Technology Exploration Power for Future Space Science Missions Scott Wilson (NASA - Glenn Research Center) Presentation: Scott Wilson - Monday, March 4th, 11:25 AM - Dunraven
        High-efficiency dynamic Radioisotope Power Systems (RPS) could be mission enabling for low-power space applications such as small probes, landers, rovers, and communication repeaters. These applications would contain science instruments and be distributed across planetary surfaces or near objects of interest where solar flux is insufficient for using solar cells. Small RPS could be used to provide power for sensing radiation, temperature, pressure, seismic activity, and other measurements of interest to planetary scientists. Small RPS would use fractional versions of the General Purpose Heat Source (GPHS) or Light Weight Radioisotope Heater Units (LWRHU) to heat power conversion technologies. Dynamic power systems are capable of three to four times higher conversion efficiency compared to static power conversion technologies, and would provide an equal amount of power using less fuel or more power using an equal amount of fuel. Providing spacecraft with more power could decrease duty cycling of basic functions and, therefore, increase the quality and abundance of science data. NASA GRC is developing a low-power dynamic RPS that would convert heat from multiple LWRHUs to one watt of usable direct-current electric power for spacecraft instrumentation and communication. The power system could be used to charge batteries or capacitors for higher-power burst usage. The initial design, called Small Stirling Technology Exploration Power (smallSTEP), is around 3 kg, 11 cm in diameter by 32 cm long, and converts 8 watts of heat to one watt of electricity using a Stirling convertor. This low-power conversion system represents a new class of RPS with power levels two orders of magnitude lower than prototypes currently being developed for space applications under NASA contracts. Development of the 1-watt RPS includes maturation of convertor and controller designs, performance evaluation of an evacuated metal foil insulation, and development of system interfaces. Initial demonstration of the subsystems has been completed in a laboratory environment, and a higher fidelity system is being pursued for demonstration in relevant environments for use on the small spacecraft needed to carry out future space science missions.
      • 08.0811 Impedance Spectroscopy: A Tool for Assessing Thermoelectric Modules for Radioisotope Power Systems Ramy Mesalam (The University of Leicester), Hugo Williams (University of Leicester), Richard Ambrosi (University of Leicester), Daniel Kramer (University of Dayton Research Institute), Chadwick Barklay (University of Dayton Research Institute), Keith Stephenson (European Space Agency) Presentation: Ramy Mesalam - Monday, March 4th, 11:50 AM - Dunraven
        Thermoelectric energy convertors in the form of solid-state modules are utilised in space nuclear power systems such as radioisotope thermoelectric generators (RTG). However, to ensure that the implemented thermoelectric modules are reliable, efficient, and capable of delivering power and energy over the required lifespan, standardised, accurate and repeatable high-throughput measurement systems are needed. Recently, impedance spectroscopy has shown promise as a tool to parametrically characterise thermoelectric modules with one simple measurement, showcasing itself as a potentially key enabling technology. This paper investigates the use of impedance spectroscopy as a measurement system for assessing the health state of practical thermoelectric modules after intentionally introducing device defects and degradation. Device defects were inserted with control over type, concentration and location, while degradation was introduced by subjecting individual modules to prolonged operational and environmental conditions expected to be encountered in an americium-241 fuelled RTG. The complex impedance of each module, in the form of Nyquist spectra, before and after introducing damage is reported. From these results it was found that the low- and high-frequency responses of the Nyquist spectra are highly sensitive to the physical features of a module as well as its corresponding material properties.
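        The sketch below illustrates, for a generic equivalent circuit rather than the thermoelectric model used by the authors, why a Nyquist spectrum separates low- and high-frequency behaviour: a series ohmic resistance plus a parallel RC branch traces a semicircle whose two intercepts probe different parts of the device. All component values are assumed.

```python
# Minimal Nyquist-spectrum sketch for a generic R0 + (R1 || C1) equivalent circuit
# (illustrative only; not the module model used in the paper).

import numpy as np

R0 = 0.5    # ohmic (high-frequency) resistance, ohm (assumed)
R1 = 1.5    # parallel-branch resistance, ohm (assumed)
C1 = 2.0    # parallel-branch capacitance, F (assumed)

freqs = np.logspace(-3, 3, 200)            # Hz
omega = 2 * np.pi * freqs
z = R0 + R1 / (1 + 1j * omega * R1 * C1)   # complex impedance of the circuit

# A Nyquist plot shows Re(z) against -Im(z); the low-frequency intercept (R0 + R1)
# and the high-frequency intercept (R0) bracket a semicircle, which is why the two
# ends of the spectrum are sensitive to different physical features of the module.
print("high-f intercept ~", round(z[-1].real, 3), "ohm; low-f intercept ~", round(z[0].real, 3), "ohm")
```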
      • 08.0812 Radioisotope Power Systems for the European Space Nuclear Power Programme Richard Ambrosi (University of Leicester), Emily Jane Watkinson (University of Leicester), Ramy Mesalam (The University of Leicester), Christopher Bicknell (University of Leicester Space Research Centre), Tony Crawford (University of Leicester), Hugo Williams (University of Leicester), Marie Claire Perkinson (Airbus), Alexander Godfrey (Lockheed Martin UK - Ampthill), Colin Stroud (lockheed martin uk ampthill), Michael Reece (Queen Mary University London), Keith Stephenson (European Space Agency), Tim Tinsley (National Nuclear Laboratory) Presentation: Richard Ambrosi - Tuesday, March 5th, 08:30 AM - Madison
        Radioisotope thermoelectric generator (RTG) and radioisotope heater unit (RHU) systems are being developed in Europe as part of a European Space Agency (ESA) funded programme. Aimed at enabling or significantly enhancing space science and exploration missions, these systems rely on the cost-effective production of americium-241 for the fuel. The use of an iterative approach and the application of lean methodologies for the development of these systems have been the focus of this technology programme. Isotope containment architectures and, in the case of RTG systems, bismuth telluride based thermoelectric generators are under development. At the small end of the scale, the RHU configuration is based on a 3 W thermal power output. The first version of this system has been designed and analyzed. Electrically-heated and mechanical models have been produced and tested. The RTG heat source configuration is designed to deliver 200 W of thermal power output while minimizing the volume occupied by the fuel. A 5% total system conversion efficiency and a modular scalable design imply that electrical power output can range between 10 W and 50 W. Each RTG system could house up to 5 heat sources. An electrically-heated RTG system based on the 200 W heat source architecture has been designed, analyzed and tested. The advancement of the heat source design for both RTGs and RHUs is currently the focus of the programme, with the aim of advancing the technology readiness level of the containment structures. The most recent results of the programme will be presented. Some recent results of mission studies requiring space nuclear power systems, carried out in academia by student teams, will also be highlighted.
      • 08.0813 Identifying and Mitigating Barriers to the Adoption of Advanced Radioisotope Power Systems Scott Brummel (), Mary Cummings () Presentation: Scott Brummel - Tuesday, March 5th, 08:55 AM - Madison
        Operators of complex systems, particularly safety-critical ones like those in command and control settings, often distrust new technologies, which can negatively affect mission outcomes since systems are not utilized to their full capacity [1]. This problem is only expected to worsen as more opaque technologies, like those enabled with artificial intelligence, are inserted into these systems. To address these issues, significant research is underway to better understand the core cognitive elements of trust and risk perception for such systems, as well as to develop models and design interventions for appropriately anchoring trust [2]. While this previous research addresses a clear operational need, a limitation of these efforts is the focus on operators and managers of real- (or near-real-) time systems. Of interest to many organizations like NASA is the need to identify when, where, and how inappropriate perceptions of risk and anchoring of trust affect technology development and acceptance, primarily from the perspective of engineers and related management. Despite the significant research that is currently underway to mitigate inappropriate trust and risk perception for operators of complex systems, very little research is occurring to assess, describe, model, or develop risk mitigation strategies for engineers developing or applying new technologies. Here, we attempt to narrow this gap by defining and explaining factors contributing to inappropriate risk perception and the resulting barriers to the adoption of Dynamic Radioisotope Power Systems (DRPS) for space exploration, and we offer mitigations to these barriers. While solar power is a common and reliable means of providing electricity for most of NASA’s space missions, many potential space science opportunities exist in environments without sufficient sunlight for solar-powered space flight. For example, because Saturn is about ten times farther from the Sun than Earth, the available sunlight to produce electricity for space operations is only one hundredth of that at Earth. Non-solar solutions can overcome these limitations and fill critical mission gaps in space exploration [3]. Since the 1960s Pioneer deep space missions, NASA has relied on static RPS to generate energy using radioisotope thermoelectric generators (RTG), whose thermocouples produce electricity from the temperature difference between the intense heat of decaying plutonium and the low temperature of space across metal pairs. Such technology has also been used for the more recent Mars Curiosity rover mission. However, as will be discussed in more detail in the paper, static RPS systems are inefficient and plutonium is in short supply. Thus, to achieve longer-duration space science goals, it is critical that more efficient RPS systems be developed, but NASA has struggled to field such systems, and it appears trust and risk perception play an important role, which will be elucidated in this paper. [1] R. Mittu, D. Sofge, A. Wagner, and W.F. Lawless, Robust Intelligence and Trust in Autonomous Systems. Springer US, 2016. [2] J. D. Lee and K. A. See, “Trust in Computer Technology and the Implications for Design and Evaluation.” [3] Priorities in Space Science Enabled by Nuclear Power and Propulsion. Washington, D.C.: National Academies Press, 2006.
      • 08.0814 Utilization of MMRTG’s "Waste Heat" to Increase Overall Thermal to Electrical Conversion Efficiency Daniel Kramer (University of Dayton Research Institute), Richard Ambrosi (University of Leicester) Presentation: Daniel Kramer - Tuesday, March 5th, 09:20 AM - Madison
        Since the launch of the first radioisotope power system (RPS) on Transit 4A in 1961, numerous research and development activities have been performed centered on increasing the overall thermal-to-electrical conversion efficiency of the selected nuclear fueled power system. The latest U.S. fielded RPS (MMRTG – Multi-Mission Radioisotope Thermoelectric Generator) contains eight GPHS (General Purpose Heat Source) modules which nominally yield ~2000 Wth from the thirty-two 238PuO2 ceramic fuel pellets. The MMRTG’s PbTe/TAGS-85 based thermoelectric couples have a thermal-to-electrical conversion efficiency of ~5.5%, thus initially yielding ~110 We at launch. This paper centers on the discussion of a conceptual idea that entails employing a second set of thermoelectrics on the MMRTG, which would be used to convert a portion of the underutilized “waste heat” (~1800 Wth) into additional electrical mission power. First-order experiments and calculations employing bismuth telluride (Bi2Te3) based thermoelectric modules, being considered for a European RPS by the University of Leicester, indicate that an improvement in the efficiency of an MMRTG could be achieved by integrating them with the PbTe/TAGS-85 thermoelectrics utilized in the MMRTG in a “dual” or “cascaded” arrangement. This arrangement of two different integral thermoelectric materials suggests the intriguing possibility of harvesting a portion of the MMRTG’s currently unutilized “waste heat”. This appears to be feasible since the cold-side temperature of the PbTe/TAGS-85 in an MMRTG is ~200 °C, which corresponds to typical hot-side operating temperatures of Bi2Te3 thermoelectric modules. It is recognized that extensive thermo-mechanical design, modeling, and analysis are required to fully investigate the cascaded thermoelectric concept. In addition, materials compatibility and assembly aspects will also need to be fully addressed in the future. However, the present work does indicate that system-level performance gains could be achieved via a “cascaded” or cMMRTG, which could result in electrical power increases at Beginning-of-Life (BOL) of up to ~25% and, perhaps more significantly, gains in End-of-Design-Life (EODL) power of up to ~40% which could be utilized on a future space mission.
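        An order-of-magnitude check of the quoted ~25% BOL gain is sketched below. The second-stage Bi2Te3 efficiency and the fraction of waste heat routed through it are assumptions chosen for illustration, not figures from the paper.

```python
# Order-of-magnitude check (illustrative assumptions) of the ~25% BOL power gain quoted
# above for a cascaded MMRTG.

Q_THERMAL_W = 2000.0        # nominal thermal power of eight GPHS modules
ETA_PBTE_TAGS = 0.055       # MMRTG first-stage conversion efficiency (~5.5%)
ETA_BI2TE3 = 0.05           # assumed Bi2Te3 second-stage efficiency at ~200 C hot side
WASTE_HEAT_FRACTION = 0.30  # assumed fraction of waste heat routed through stage 2

p_stage1 = Q_THERMAL_W * ETA_PBTE_TAGS                   # ~110 We at launch
waste_heat = Q_THERMAL_W - p_stage1                      # ~1890 Wth rejected by stage 1
p_stage2 = waste_heat * WASTE_HEAT_FRACTION * ETA_BI2TE3 # extra power from stage 2

print(f"stage 1: {p_stage1:.0f} We, stage 2: {p_stage2:.0f} We "
      f"(+{100 * p_stage2 / p_stage1:.0f}%)")
```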
      • 08.0815 Development of a High-Efficiency Cascaded Thermoelectric Radioisotope Power System Chadwick Barklay (University of Dayton Research Institute), Daniel Kramer (University of Dayton Research Institute), Richard Ambrosi (University of Leicester), Ramy Mesalam (The University of Leicester) Presentation: Chadwick Barklay - Tuesday, March 5th, 09:45 AM - Madison
        Since the 1960s there have been numerous development activities on high-impact material and device-level technologies that could be integrated into current or future radioisotope power systems (RPS) to enhance their performance. One recent concept study proposed cascading thermoelectrics to convert some of the waste heat from the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) into electrical power. The first-order evaluations of the concept study suggested that performance improvements to beginning-of-life (BOL) and end-of-design life (EODL) power could be achieved by integrating bismuth telluride (Bi2Te3) thermoelectric modules into the MMRTG design. This paper discusses a proof of concept development approach to determine the BOL and EODL performance gains that could potentially be obtained by employing Bi2Te3 thermoelectric modules in an RPS as a second stage in a cascaded architecture. In addition, this paper addresses some of the design considerations that will need to be addressed at the system level. For the purposes of this study the MMRTG characteristics and properties are employed as the basis for this paper.
      • 08.0818 A Status Update on the eMMRTG Project Christopher Matthes (NASA Jet Propulsion Lab), David Woerner (Jet Propulsion Laboratory), Stan Pinkowski (NASA Jet Propulsion Lab) Presentation: Christopher Matthes - Tuesday, March 5th, 10:10 AM - Madison
        NASA has employed Radioisotope Thermoelectric Generators (RTGs) to power many missions throughout the past several decades. The Multi-Mission RTG (MMRTG) used on Mars Science Laboratory is the most recent generator developed, and the only spaceflight-qualified system currently available. The enhanced Multi-Mission RTG (eMMRTG) would be an upgrade of the MMRTG using the most current thermoelectric (TE) technology, and would provide the space community with a system that would have substantially higher end-of-design-life (EODL) power. The NASA RPS Program recently instantiated an eMMRTG system development project, evolving from an ongoing technology maturation effort at JPL to a project designed to mature and transition the skutterudite (SKD) TE couples and technology into an operational RTG. The project has made significant advances in maturing SKD technology for use in the eMMRTG, and is looking ahead to RTG system development. Mini-module and couple life tests have produced substantial performance data that has helped refine the couple design and support lifetime performance predictions. Additional strength and properties tests have been performed to verify the design specifications and robustness of the candidate TE couples. Replacing the current MMRTG couples with SKD also necessitates system design changes that must be well understood. Recent systems engineering studies have focused on minimizing risk associated with updating the flight-proven MMRTG design. Upgrading the module insulation has been shown to result in 98% lower levels of CH4/H2 outgassing products. Performance analysis has been completed using the most recent TE couple sizes in order to understand the maximum acceptable power degradation rate to achieve the required eMMRTG power of 77 W at EODL. This paper presents the results of recent SKD technology maturation efforts, eMMRTG lifetime performance predictions, and a number of systems engineering tasks that continue to pave the way for successful system development.
    • 08.09 Autonomy for Aerospace Applications Ted Steiner (Draper) & Julia Badger (NASA - Johnson Space Center)
      • 08.0902 Model-Based Approach to Rover Health Assessment - Mars Yard Discoveries Ksenia Kolcio Prather (Okean Solutions, Inc), Ryan Mackey (Jet Propulsion Laboratory), Lorraine Fesq (Jet Propulsion Laboratory) Presentation: Ksenia Kolcio Prather - Thursday, March 7th, 04:30 PM - Madison
        More capable Health Assessment functionality is one of the pillar technologies of JPL’s Self-Reliant Rover Strategic Initiative program, along with Autonomous Science, Autonomous Navigation, and Onboard Planning and Scheduling. Together these capabilities are expected to enable future rovers to perform opportunistic science even in the face of off-nominal and fault conditions. In support of a more autonomous rover, Health Assessment must not only indicate detected faults but should also provide positive indication of health. A rover more fully aware of its own health state will be able to more effectively manage onboard resources and re-plan accordingly when needed. If a detected problem can be attributed to a particular component, the rover will have detailed information that can either be used for onboard response or relayed to the ground in support of faster recovery. Furthermore, discriminating between terrain-induced off-nominal behavior, such as stalling on a rock, and mechanical/electrical motor stalls will allow executing specific responses for each situation. The Health Assessment capability being developed for the SRR program incorporates MONSID (Model-Based Off-Nominal State Identification and Detection). The MONSID system propagates sensor data through a nominal model of system behavior. Inconsistencies between the modeled state and sensor data are indicative of off-nominal conditions. MONSID is designed to provide off-nominal state detection and identification capabilities that are key components to assessing rover health state. The rover’s awareness of its own state of health will support other decision functions such as resource management and planning and scheduling activities. The ability to autonomously identify rover failures down to the component level will enable faster and more targeted responses and recovery, thereby reducing time that the rover is idle. This paper presents the modeling process and preliminary test results of MONSID as applied to the mobility subsystem of JPL’s Athena test rover. Athena’s mobility system includes independent steering and driving motors for six wheels as well as wheel position sensors. The visual odometry system provides rover attitude and Cartesian position as additional sensor data to the MONSID model. Onboard terrain classification can also be used to estimate a measure of expected slip based on perceived terrain. The rover mobility system proved particularly challenging from a modeling perspective due to lack of visibility into lower-level wheel assembly hardware and a high degree of terrain interaction. However, even with simplified models, Mars Yard test results show that MONSID is able to detect simulated motor faults as well as instances of partial stalls when Athena was stuck on a rock. On the other hand, the modeling process has surfaced several hardware and operational problems, resulting in control firmware and sensor processing software updates. This speaks to the benefit of MONSID’s model-based approach even in the Athena system development phase.
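        The core idea, propagating sensor data through a nominal model and flagging inconsistencies, can be illustrated with a generic residual check. This is not the MONSID implementation; the model coefficients, threshold and data below are invented for the example.

```python
# Generic illustration of model-based off-nominal detection: predict a wheel motor
# current from the commanded speed with a nominal model, and flag an inconsistency
# when the measured current deviates beyond a threshold. All values are assumed.

K_TORQUE = 0.8      # assumed nominal current per unit commanded speed, A/(rad/s)
I_BIAS = 0.2        # assumed no-load current, A
THRESHOLD_A = 0.5   # assumed residual threshold

def nominal_current(cmd_speed_rad_s):
    """Nominal model: expected motor current for a commanded wheel speed."""
    return I_BIAS + K_TORQUE * cmd_speed_rad_s

def check_consistency(cmd_speed_rad_s, measured_current_a):
    """Return True if the sensor data are consistent with the nominal model."""
    residual = measured_current_a - nominal_current(cmd_speed_rad_s)
    return abs(residual) <= THRESHOLD_A

# Example: a stall (wheel stuck on a rock or a motor fault) drives the current well
# above the nominal prediction, producing an off-nominal flag.
for cmd, meas in [(1.0, 1.05), (1.0, 2.4)]:
    print(cmd, meas, "nominal" if check_consistency(cmd, meas) else "off-nominal")
```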
      • 08.0903 Silhouette-based 3D Shape Reconstruction of a Small Body from a Spacecraft Saptarshi Bandyopadhyay (Jet Propulsion Laboratory), Issa Nesnas (Jet Propulsion Laboratory), Benjamin Hockman (NASA Jet Propulsion Laboratory, California Institute of Technology) Presentation: Saptarshi Bandyopadhyay - Thursday, March 7th, 04:55 PM - Madison
        In this paper, we present a novel technique for silhouette-based 3D shape reconstruction of a small body (e.g., asteroid, comet, small celestial body), which can be used by the spacecraft when it is far away from the small body. As the spacecraft approaches the small body, it periodically captures images of the small body. Since these images are taken from a very far distance, they do not have enough features to use standard techniques like Stereo-Photo-Clinometry (SPC), Structure from Motion (SfM), or Simultaneous Localization and Mapping (SLAM) algorithms. But the silhouette of the small body in these images is in sharp contrast with the background darkness. We use a combination of light curve and silhouette to reconstruct an initial 3D shape of the small body that would later be used to seed more conventional image-based techniques as the object’s size in the image grows and image details emerge. First, the light curve of the small body, which represents the variation of brightness intensity of the small body with respect to time, is obtained from the images. We estimate the rotation rate of the small body using the Fast-Fourier-Transform (FFT) of the light curve. Second, using the rotation period and image time-stamp, images from the same rotation phase are grouped together. Since the spacecraft is traveling towards the small body, images in a group (i.e., having the same phase) represent a series of magnifications of the small body. From this information, we estimate the center of rotation of the small body to within one-pixel uncertainty. Finally, we developed a multi-resolution 3D-voxel shape reconstruction algorithm, where the silhouettes of the small body taken from different orientations are used to reconstruct the shape of the small body. Since the orientation of the rotation axis is unknown, we use multiple guesses of the orientation of the rotation axis and determine the shape using our algorithm. Once the shape model is created, it is compared against the images from all angles and an error is computed. The guessed orientation that has the lowest error is deemed to be the best estimate of the orientation of the rotation axis. In this technique, we assume that the small body is performing pure rotation (no tumbling) about its principal axis, the Sun is directly behind the spacecraft and fully illuminating the small body, and the distance from the spacecraft to the small body is known. This process of silhouette-based 3D shape reconstruction of the small body has been tested for both simulated data (for comet 67P/Churyumov–Gerasimenko, Eros, Itokawa) and real data from the Rosetta mission.
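        The first step described above, estimating the rotation rate from the FFT of the light curve, can be sketched as follows. The imaging cadence, rotation period and synthetic light curve are assumptions for demonstration only.

```python
# Minimal sketch (illustrative, not the flight algorithm) of estimating a small body's
# rotation period from the FFT of its light curve, using a synthetic noisy light curve.

import numpy as np

DT_S = 60.0                       # assumed cadence: one image per minute
TRUE_PERIOD_S = 4.3 * 3600.0      # assumed rotation period for the demonstration

t = np.arange(0, 5 * TRUE_PERIOD_S, DT_S)
light_curve = 1.0 + 0.2 * np.sin(2 * np.pi * t / TRUE_PERIOD_S) \
              + 0.02 * np.random.randn(t.size)     # brightness with noise

spectrum = np.abs(np.fft.rfft(light_curve - light_curve.mean()))
freqs = np.fft.rfftfreq(t.size, d=DT_S)
est_period_s = 1.0 / freqs[np.argmax(spectrum)]
# Note: a real asteroid light curve typically shows two maxima per rotation, so a
# factor-of-two disambiguation would be needed on top of this simple peak pick.
print(f"estimated rotation period: {est_period_s/3600:.2f} h (truth: {TRUE_PERIOD_S/3600:.2f} h)")
```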
      • 08.0904 Design and Implementation of Power Management Algorithm for a Nano-satellite Varun Thakurta (Manipal University), Avi Jain (Manipal Institute Of Technology), Vishwanath Datla (), Akshit Akhoury (Manipal Institute of Technology), Arun Ravi (MIT), Akshiti Parashar (), Ruchitha Reddy (Manipal Institute of Technology), Harshal Dali (Manipal University ), Adhya Kejriwal (Manipal University) Presentation: Varun Thakurta - Thursday, March 7th, 05:20 PM - Madison
        This paper focuses on the design of a power management algorithm that can improve the performance and service lifetime of small satellites. Along with a highly efficient power distribution scheme, the onboard power management system plays a vital role in the operation of a satellite. Small satellites are primarily powered by solar cells. The constraints on the size and mass of a nanosatellite limit its power generation and storage ability. The harnessed energy is stored in rechargeable batteries to ensure a constant supply of power during the eclipse phase. The expected lifespan, capacity and number of full charge-discharge cycles a battery can withstand determine its critical discharge threshold. The depth of discharge (DoD) of the batteries must be minimized and kept under control to reduce degradation in capacity due to the charge-discharge cycles they experience over their mission life. The proposed algorithm reduces the mission dependency on the battery life and allows for longer mission durations. It employs two modes of satellite operation, viz. static power management (SPM) and dynamic power management (DPM). The SPM is active during the initial days of tumbling and when the satellite payload is dormant. The SPM enforces a fixed threshold on the battery DoD and switches the satellite to a low-power state on exceeding it. Once the satellite detumbles, the power management system shifts to DPM. The power generation by a stable satellite follows a periodic trend. Nonetheless, it is still estimated every orbit, since it changes due to the variation in the relative positions of the Sun and the Earth over time. A simulation model has been developed to study the variation in power generation over time. The DPM anticipates the future charge-discharge patterns for the next few orbits based on the performance in previous orbits, all the while acknowledging the non-linear characteristics of the battery. The DPM attempts to maintain an average DoD over a span of time within the critical threshold. The satellite turns on its payload and transmission only above very specific locations on the Earth. This allows a deeper discharge when running loads like the payload or the communication system and a smaller discharge when performing other low-power tasks, thereby keeping the mean discharge below the critical threshold. Several power modes have been defined keeping in mind the interdependencies between the different satellite subsystems for smooth operation. The DPM switches between these defined modes and controls the consumption while making sure the satellite performs its primary task faithfully. The simulation model has been developed and extensively tested. The results show a significant improvement in power performance over an implementation without an adaptive threshold, demonstrating the need for a DPM that adapts the load conditions to the power generation conditions. The simulation generated no false triggers, and the processes took place unhindered while meeting the critical threshold for the discharge effectively. The paper also includes the power calculations involving the solar panels, the battery and the various loads.
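        A simplified sketch of the SPM/DPM mode logic described above is given below; the thresholds, history window and interfaces are hypothetical and are not taken from the flight implementation.

```python
# Simplified SPM/DPM sketch: a fixed DoD threshold in static mode, and a load-aware
# check against an average-DoD budget in dynamic mode. All values are assumptions.

SPM_DOD_LIMIT = 0.20          # assumed fixed DoD threshold for static mode
DPM_AVG_DOD_LIMIT = 0.25      # assumed critical average-DoD budget for dynamic mode

def spm_state(dod):
    """Static power management: drop to a low-power state above a fixed DoD."""
    return "LOW_POWER" if dod > SPM_DOD_LIMIT else "NOMINAL"

def dpm_allow_high_power(dod_history, predicted_dod):
    """Dynamic power management: allow a deep discharge for payload/transmission only
    if the running average DoD (including the predicted discharge) stays in budget."""
    window = dod_history[-10:] + [predicted_dod]   # last orbits plus the anticipated one
    return sum(window) / len(window) <= DPM_AVG_DOD_LIMIT

# Example: a deeper discharge for a payload pass is permitted because earlier orbits
# were shallow, keeping the mean DoD below the critical threshold.
history = [0.10, 0.12, 0.08, 0.15, 0.11]
print(spm_state(0.26), dpm_allow_high_power(history, 0.45))
```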
      • 08.0905 An Integrated System for Mixed-Initiative Planning of Manned Spaceflight Operations Martijn Ijtsma (Georgia Institute of Technology), Will Lassiter (), Amy Pritchett (), Martin Savelsbergh (Georgia Tech), Karen Feigh (Georgia Tech) Presentation: Martijn Ijtsma - Thursday, March 7th, 09:00 PM - Madison
        Manned spaceflight in outer/deeper space will require crew operations that are independent of ground support. This requires the crew to re-plan day-to-day activities, particularly in the case of unforeseen circumstances. To support these planning duties, we are developing a mixed-initiative planning tool that optimizes schedules in collaboration with astronauts. This paper highlights the tool's planning algorithm. The planning algorithm has two closely-coupled components: first, an optimization algorithm (optimizer) based on local search heuristics and, secondly, a computational model of the work that is to be performed. In this framework, the optimizer acts as a surrogate model of the more detailed computational models, such that new solutions can be efficiently explored. The computational work model is capable of simulating a plan through time, and can account for dynamic interactions between activities and work environment that are not modeled in the optimizer. Moreover, the computational model returns to the optimizer metrics that reflect required teamwork to coordinate activities between astronauts. The paper includes a description of the optimizer and computational simulation models as well as a case study with activities, agents and resources that are representative of a typical manned mission.
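        The coupling described above, a cheap optimizer proposing schedule changes that a more detailed work-model simulation then scores, can be illustrated with a toy local-search loop. The schedule encoding, neighbourhood move and cost functions are all invented for the example and are not the authors' algorithm.

```python
# Toy surrogate-assisted local search: a cheap cost estimate screens swap moves, and a
# stand-in "work model" simulation (with a coordination penalty) scores survivors.

import random

def surrogate_cost(schedule):
    """Cheap optimizer-side estimate: position-weighted sum of task durations."""
    return sum((i + 1) * task for i, task in enumerate(schedule))

def simulate_work_model(schedule):
    """Stand-in for the detailed simulation: adds a teamwork/coordination penalty."""
    penalty = sum(3.0 for a, b in zip(schedule, schedule[1:]) if abs(a - b) > 4)
    return surrogate_cost(schedule) + penalty

def local_search(schedule, iterations=200, seed=0):
    rng = random.Random(seed)
    best, best_cost = list(schedule), simulate_work_model(schedule)
    for _ in range(iterations):
        cand = list(best)
        i, j = rng.sample(range(len(cand)), 2)              # swap-move neighbourhood
        cand[i], cand[j] = cand[j], cand[i]
        if surrogate_cost(cand) <= surrogate_cost(best):    # cheap screen first
            cost = simulate_work_model(cand)                # detailed evaluation
            if cost < best_cost:
                best, best_cost = cand, cost
    return best, best_cost

print(local_search([5, 1, 8, 3, 7, 2]))
```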
      • 08.0906 Motion Planning for Climbing Mobility with Implementation on a Wall-Climbing Robot Keenan Albee (NASA Jet Propulsion Lab), Antonio Teran Espinoza (Massachusetts Institute of Technology), Kristina Andreyeva (), Nathan Werner (), Howei Chen (), Tamas Sarvary () Presentation: Keenan Albee - Thursday, March 7th, 09:25 PM - Madison
        Future autonomous planetary explorers will require extreme terrain mobility to reach areas of interest, such as walled lunar pits and steep Martian rock layers. Climbing mobility systems are one proposed answer, requiring efficient and kinematically feasible motion planning for autonomous operation. Similarly, climbing planning is applicable to other microgravity situations requiring constant end effector contact with discrete handholds. This paper proposes a planning framework that poses kinematic climbing planning as a discrete optimal planning problem. Motion primitives are used to encourage large robot body workspaces and beneficial connections between climbing stances. A wall-climbing planner simulation is presented, along with implementation on a hardware demonstration testbed that successfully recognized, navigated, and climbed an arbitrary vertical wall.
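        The discrete formulation described above can be illustrated by treating stances as graph nodes and motion primitives as weighted edges, then searching for a minimum-cost stance sequence. The stance graph below is a made-up example, not the authors' planner.

```python
# Minimal stance-graph planner: Dijkstra search over a hand-built (assumed) graph in
# which edges represent motion primitives between climbing stances.

import heapq

STANCE_GRAPH = {                       # stance -> [(next_stance, primitive_cost), ...]
    "start": [("s1", 1.0), ("s2", 2.5)],
    "s1": [("s3", 1.2)],
    "s2": [("s3", 0.5), ("goal", 3.0)],
    "s3": [("goal", 1.0)],
    "goal": [],
}

def plan(graph, start, goal):
    """Dijkstra search over stances; returns (cost, stance sequence)."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, stance, path = heapq.heappop(frontier)
        if stance == goal:
            return cost, path
        if stance in visited:
            continue
        visited.add(stance)
        for nxt, step_cost in graph[stance]:
            if nxt not in visited:
                heapq.heappush(frontier, (cost + step_cost, nxt, path + [nxt]))
    return float("inf"), []

print(plan(STANCE_GRAPH, "start", "goal"))  # -> path ['start', 's1', 's3', 'goal'], cost ~3.2
```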
      • 08.0907 A Distributed Hierarchical Framework for Autonomous Spacecraft Control Julia Badger (NASA - Johnson Space Center) Presentation: Julia Badger - Thursday, March 7th, 09:50 PM - Madison
        Future human space missions planned for exploring beyond low Earth orbit are in the conceptual design stage. These concepts range from habitats in cis-lunar orbit that are visited periodically by crew to missions to Mars. These missions have one important thing in common: the need for autonomy on the spacecraft. This need stems from the latency and bandwidth constraints on communications between the vehicle and ground control. A variable amount of autonomy is necessary whether the spacecraft has crew on board or not. Spacecraft are complex systems that are engineered as a collection of subsystems. These subsystems work together to control the overall state of the spacecraft. Subsystems are designed and built somewhat independently, but need to work together. As such, solutions that increase the autonomy of the spacecraft (called autonomous functions) should respect both the independence and interconnectedness of the spacecraft subsystems. This distributed yet centralized approach to system monitoring and control is a key idea in the Modular Autonomous Systems Technology (MAST) framework. The MAST framework is a component-based architecture that provides interfaces and structure for developing autonomous technologies. The framework enforces and enables a distributed, hierarchical autonomous control system across subsystems, systems, elements, and vehicles. Each component of the autonomous system is broken into several “buckets” based on the OODA loop (Observe, Orient, Decide, Act). Each bucket has different requirements, and the bucket types are intended to work together across different levels of the control hierarchy to minimize the calculations and interactions between components. The three main reasons for creating this architecture are as follows: 1. Using products from autonomy across levels of abstraction, 2. Creating systems that are straightforward to verify, or are constructed with guarantees, and 3. Allowing for variable autonomy. Autonomous systems are complex, difficult to test, and nearly impossible to analyze formally for performance guarantees. However, the use of autonomous systems technology for human spacecraft will require convincing verification and validation. This architecture has a path to formal analysis and will create assume-guarantee contracts as long as the autonomous technology components can be verified individually. An example autonomous system was implemented in this framework and tested using realistic spacecraft software and hardware simulations. Three subsystem autonomy components were designed: one each for the managed power system, the life support system, and the automated rendezvous and docking process. Additionally, a vehicle spacecraft manager autonomy component was designed to oversee the entire spacecraft. This experiment was successfully tested as part of a broader habitat test using both hardware and software simulations. The distributed hierarchical approach to shared control was promising in that each autonomy component was relatively simple, yet complex behaviors could be derived from their interconnected execution. This test was able to prove out the basics of this autonomy framework and provided the foundation upon which many of the other ideas presented here can be developed. This paper will discuss the framework, the tests conducted, results, and future work.
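        A schematic sketch of what an OODA-style component interface in the spirit of the MAST description might look like is shown below; the class names, method signatures, and threshold logic are assumptions for illustration and are not the actual MAST API.

          class AutonomyComponent:
              """One subsystem-level component split into Observe/Orient/Decide/Act buckets."""
              def __init__(self, name):
                  self.name = name

              def observe(self, telemetry):          # gather raw subsystem telemetry
                  return telemetry.get(self.name, {})

              def orient(self, observation):         # abstract telemetry into a state estimate
                  return {"nominal": all(v < 1.0 for v in observation.values())}

              def decide(self, state):               # choose an action for this subsystem
                  return "continue" if state["nominal"] else "safe_subsystem"

              def act(self, decision):               # command the subsystem
                  return {self.name: decision}

          class VehicleManager:
              """Higher tier of the hierarchy: aggregates subsystem-level decisions."""
              def __init__(self, components):
                  self.components = components

              def step(self, telemetry):
                  actions = {}
                  for c in self.components:
                      actions.update(c.act(c.decide(c.orient(c.observe(telemetry)))))
                  return actions

          vm = VehicleManager([AutonomyComponent("power"), AutonomyComponent("eclss")])
          print(vm.step({"power": {"bus_fault": 0.0}, "eclss": {"co2_level": 1.2}}))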
    • 08.10 Systems and Technologies for CubeSat/Smallsats Michael Swartwout (Saint Louis University) & Kyle Kemble (Air Force Research Laboratory)
      • 08.1001 Attitude Control System for the Mars Cube One Spacecraft David Sternberg (NASA Jet Propulsion Laboratory), John Essmiller (NASA Jet Propulsion Lab), Cody Colley (JPL), Andrew Klesh (Jet Propulsion Laboratory) Presentation: David Sternberg - Friday, March 8th, 08:30 AM - Madison
        CubeSats are small spacecraft based on a 10cm by 10cm by 10cm (1U) cube standard that have historically only been operated in Earth orbit. Mars Cube One (MarCO) is the first CubeSat mission developed for interplanetary operation. MarCO is a technology demonstration mission comprised of two identical, solar powered 6U satellites with several key goals, including that of providing a bent pipe telecom relay to Earth for NASA's InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) mission during its Entry, Descent, and Landing sequence. MarCO launches on the same rocket as InSight and makes use of the Deep Space Network for communications and ranging. It therefore must have an attitude control system and propulsion system suitable for operating in several pointing modes, providing desaturations for reaction wheel momentum buildup, and thrusting to change the spacecraft trajectory. Because the spacecraft design is constrained to the CubeSat standards and because of the distances of the spacecraft from Earth and the sun, the components used for attitude control and propulsion must meet tight size, mass, and power requirements. Further, because MarCO is the first CubeSat developed to operate in deep space, a robust testing sequence is required to ensure that the spacecraft functions are exercised and that the operations team understands how the spacecraft are expected to behave after launch. Autonomous modes of operation are also critical to ensure that the spacecraft can function safely with periods of several hours occurring between consecutive communication periods. This paper discusses several elements of the MarCO attitude control and propulsion systems. The paper begins with a discussion of the hardware that was selected for the two systems as well as descriptions of the interface between the attitude control and propulsion systems and the interface between these systems and the rest of the spacecraft’s command and data handling system. Next, the paper summarizes the different types of tests that were performed at the system and spacecraft levels. Test data is included for some of these tests which helped define the methods by which the spacecraft is operated in space. Lastly, the paper lists a series of lessons-learned for developing attitude control and propulsion systems for interplanetary CubeSats.
      • 08.1003 Nano-sat Scale Electric Propulsion for Attitude Control - Performance Analysis Jin S. Kang (U.S. Naval Academy), Jeffery King (U.S. Naval Academy), Jonathan Kolbeck (), Michael Sanders (US Naval Academy), Michael Keidar (George Washington University) Presentation: Jin S. Kang - Friday, March 8th, 08:55 AM - Madison
        In recent years, the complexity of CubeSat missions has been increasing steadily as the platform capabilities have drastically improved. Missions involving high-accuracy pointing and interplanetary exploration are no longer out of reach for CubeSat-class satellites. Accordingly, the CubeSat community has also been focusing on miniaturization of propulsion systems. So far, these systems have mostly targeted ΔV maneuver applications and require a separate attitude control system to keep the spacecraft pointed in the desired direction. Some commercially available propulsion units can provide attitude control capability, but are not adequate for high-accuracy pointing maneuvers. For fine pointing, the currently available options are dominated by reaction wheels and control moment gyros. Current attitude control systems and propulsion systems take up roughly 1U of volume each, and are often the most expensive components on a CubeSat-class satellite. One promising solution is the use of low-thrust propulsion systems to provide combined ΔV maneuver and pointing capability. Electric propulsion systems can enable high-accuracy pointing while also providing orbit maneuver capability. The high Isp of these thrusters also results in minimal required propellant mass. An electric propulsion system will not have the high slew rate of reaction wheels or the ΔV responsiveness of conventional thrusters. However, if the main objective of the mission is to provide high-accuracy pointing, long-term pointing stability, orbit maintenance, and long-term orbit maneuvers, a multi-thruster electric propulsion system can be substituted for the attitude control system and propulsion unit combination, resulting in volume and cost savings. One such system available for CubeSat application is the George Washington University Micro-cathode Arc Thruster (uCAT). uCATs provide propulsion capability in a small volume while consuming a relatively small amount of power (approx. 1 W per thruster). At a nominal pulse rate of 10 Hz, each thruster produces 2 μN of thrust. A cluster of seven thrusters can fit in a 1/3U volume. Placing a cluster at each end of a CubeSat can provide rotation and translation maneuver capabilities about all axes. The thrust output can be throttled by varying the pulse frequency. While this setup cannot provide timely maneuverability, it can provide fine-pointing capability and long pointing dwell times in a fuel-efficient manner. It also reduces the cost, required volume, and resource consumption of the system. The uCAT system also has the advantage of being linearly scalable, in that the thrust output can be directly correlated to the pulse frequency and number of thrusters. This paper will characterize the performance of the uCAT system as a CubeSat attitude control system. First, the propulsion system characterization results will be given. Using these performance parameters, the theoretical pointing accuracy and target dwell time will be analyzed and discussed. The paper will also highlight potential applications of the electric propulsion system and provide a comparison of the system performance against other commercially available units in terms of cost, volumetric efficiency, and resource consumption.
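        A back-of-the-envelope check of the slew capability implied by the figures quoted above (about 2 μN per thruster at the nominal 10 Hz pulse rate) is sketched below; the lever arm and moment of inertia are assumed values for a 3U-class CubeSat, not numbers from the paper.

          import math

          THRUST_PER_THRUSTER = 2e-6      # N at nominal 10 Hz pulse rate (from the abstract)
          LEVER_ARM = 0.15                # m, assumed half-length of a 3U CubeSat
          INERTIA = 0.05                  # kg*m^2, assumed about the slew axis
          N_THRUSTERS_PER_COUPLE = 2      # one thruster at each end firing as a couple

          torque = N_THRUSTERS_PER_COUPLE * THRUST_PER_THRUSTER * LEVER_ARM
          alpha = torque / INERTIA                              # rad/s^2

          # Time for a rest-to-rest 90 degree slew (half accelerating, half decelerating):
          theta = math.pi / 2
          t_slew = 2 * math.sqrt(theta / alpha)
          print(f"torque = {torque:.2e} N*m, angular accel = {alpha:.2e} rad/s^2")
          print(f"90-deg rest-to-rest slew takes ~{t_slew/60:.1f} minutes")

        With these assumed values the slew takes on the order of ten minutes, which is consistent with the abstract's point that the system trades maneuver speed for fine pointing and long dwell times.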
      • 08.1004 Near Earth Asteroid Scout CubeSat Science Data Retrieval Optimization Using Onboard Data Analysis Jack Lightholder (NASA-JPL), David Thompson (Jet Propulsion Laboratory), Julie Castillo Rogez (JPL/Caltech), Christophe Basset () Presentation: Jack Lightholder - Friday, March 8th, 09:20 AM - Madison
        Small spacecraft are continually evolving in capability and mission complexity. As spacecraft size decreases, physical limitations provide new challenges for mission designers. These include limited instrument aperture, low communications bandwidth, and reduced attitude control. Software techniques can address these limitations to retain the capabilities of larger spacecraft in a small form factor. These techniques move the first order science analysis, which is traditionally completed on the ground, onboard the spacecraft. This can minimize the amount of bandwidth required for first order decision making. As part of the Space Launch System (SLS) Exploration Mission 1 (EM-1), the Near-Earth Asteroid Scout (NEA Scout) CubeSat mission will fly to about 1 AU to conduct a flyby of a near Earth asteroid (NEA) less than 100 m across. A 6U CubeSat, NEA Scout will be guided by a solar sail toward its low albedo target. A combination of target orbit uncertainty and long lead times for solar sail trajectory correction maneuvers drives a requirement to identify the target in optical navigation imagery at a distance of about 60,000 km. Traditional large spacecraft accomplish this using long exposure imaging to increase SNR and identify the low albedo target. Due to the jitter inherent in a small platform, long exposure imaging is not feasible. Onboard image processing overcomes this challenge. The spacecraft aligns and combines a stack of rapidly acquired images, resulting in a single image with a higher SNR than its constituent images. We filter the aligned images using a temporal median. This solution fits within the memory-constrained onboard context. Prior to alignment, each image undergoes a first order calibration onboard to improve the results of the alignment. This calibration consists of a dark current subtraction, flat field adjustment, and bad pixel mask application. The temporal median has the added benefit of removing transient imaging artifacts, such as cosmic rays. Interplanetary CubeSats, such as NEA Scout, are additionally physically constrained by the size of their antenna and available transmission power. At closest target approach, NEA Scout will be constrained to a downlink bandwidth of less than approximately 1 kbps. We address this limitation with automatic image cropping algorithms and software routines which downlink image statistics, giving operators a better understanding of the image content before committing it to the downlink queue. Alternatively, operators can command specific cropping operations or specify a box size around the brightest point in the image. The combination of these techniques enables early target detection in an onboard context, without stringent pointing requirements, in a low bandwidth mission scenario. These capabilities leverage onboard data processing to distill decision making data to a tenable size for limited, low bandwidth deep space communication paradigms. This technology demonstration will pave the way for future smallsat missions to far-off destinations. Acknowledgements: This work is being carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to NASA. Government sponsorship acknowledged.
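        The calibration-then-temporal-median flow described above can be illustrated with a short numpy sketch; the flight software differs, frame alignment is assumed to have already been performed, and the dark, flat, and bad-pixel inputs below are synthetic.

          import numpy as np

          def calibrate(frame, dark, flat, bad_pixel_mask):
              """Dark subtraction, flat-field correction, and bad-pixel replacement."""
              corrected = (frame - dark) / np.where(flat == 0, 1.0, flat)
              median_fill = np.median(corrected)
              return np.where(bad_pixel_mask, median_fill, corrected)

          def stack_frames(frames, dark, flat, bad_pixel_mask):
              """Temporal median of calibrated frames: boosts SNR and rejects cosmic rays."""
              calibrated = [calibrate(f, dark, flat, bad_pixel_mask) for f in frames]
              return np.median(np.stack(calibrated), axis=0)

          rng = np.random.default_rng(0)
          frames = [rng.normal(100.0, 5.0, (64, 64)) for _ in range(16)]   # synthetic raw frames
          dark = np.full((64, 64), 10.0)
          flat = np.ones((64, 64))
          bad = np.zeros((64, 64), dtype=bool)
          stacked = stack_frames(frames, dark, flat, bad)
          print(stacked.std(), frames[0].std())   # stacked noise should be visibly lower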
      • 08.1005 Project Implementation and Lessons Learned from the RainCube Mission Travis Imken (Jet Propulsion Laboratory), Eva Peral (), Shannon Statham (Jet Propulsion Laboratory), Shivani Joshi (Jet Propulsion Laboratory), Jonathan Sauder (Jet Propulsion Laboratory), Austin Williams (Tyvak Nano-Satellite Systems, Inc), Christopher Shaffer (Tyvak Nanosattelite Systems) Presentation: Travis Imken - Friday, March 8th, 09:45 AM - Madison
        RainCube (Radar in a CubeSat) is a technology demonstration mission to enable Ka-band precipitation radar technologies on a low-cost, quick-turnaround platform. The 6U CubeSat is funded through NASA’s Science Mission Directorate’s Research Opportunities in Space and Earth Science 2015 In-Space Validation of Earth Science Technologies solicitation. The mission features a complete radar payload built by the Jet Propulsion Laboratory (JPL) and a spacecraft bus developed by Tyvak Nano-Satellite Systems. Tyvak performed integration and test of the flight system and is responsible for operating the spacecraft and delivering payload data to JPL. RainCube was launched via Nanoracks on May 21st, 2018 and is expected to be ejected from the ISS and start operations around mid-July 2018. Once deployed and commissioned, the mission will validate two key technologies in the space environment over its two month mission: a miniaturized Ka-band precipitation profiling radar that occupies a 2.5U volume and a 0.5m Ka-band parabolic deployable antenna that stows in a 1.5U volume. Radar instruments have often been regarded as unsuitable for small satellite platforms due to their traditionally large size, weight, and power. RainCube’s novel payload makes use of a simplified and miniaturized radar architecture developed at JPL, enabling an order of magnitude reduction in number of components, power consumption, and mass compared to existing spaceborne radars. The paper will focus on lessons learned and flight results from an engineering perspective, highlighting the mission formulation process, maturation of the radar electronics, design and testing of the deployable antenna, development of the flight- and mission systems, and operational results. The proceedings will also highlight the technical challenges and solutions of meeting the power, thermal, and performance needs within the CubeSat form factor. Finally, the paper will highlight potential future applications for the science measurements and radar technologies. This work has been carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to NASA. Government sponsorship acknowledged. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsement by the United States Government or the Jet Propulsion Laboratory, California Institute of Technology.
      • 08.1008 Implementation of Wire Burn Deployment Mechanism Using COTS Resistors and Related Investigations Anirudh Kailaje (Manipal University), Madhav Brindavan (Manipal Institute of Technology), Pruthvi Tapadia (Manipal Insitute of Technology), Akash Paliya (Manipal Univesity), Hemant Ganti (Manipal University), Varun Thakurta (Manipal University), Aniketh Ajay Kumar (Manipal University) Presentation: Anirudh Kailaje - Friday, March 8th, 10:10 AM - Madison
        The most common method of deployment of mechanisms in Pico- and Nano-satellites is the burn wire release mechanism. Traditionally, the wire burn mechanisms used for deployment employ a nichrome wire. This paper presents the shortcomings of the use of nichrome wires and suggests using an off-the-shelf resistor as an alternative. The observations and conclusions are a result of the implementation of the solution above in a student nanosatellite. In this nanosatellite, commercially available off-the-shelf carbon film resistors were used for the thermal cleavage of the polymer braid in its wire burn mechanism. The resistor helped overcome the limitations and complications encountered in the use of the nichrome wire for the same mechanism. The resistor helped make the mechanism more compact, easier to stow and assemble, and more reliable. It also allowed for a simpler mechanism design and reduced the number of points of failure. The use of nichrome wire mandates that both the nichrome wire and the retention wire be taut and firmly in contact with each other for the successful functioning of the mechanism. This is inconvenient, as setups must be devised to keep both wires in tension. Moreover, the failure of either of these setups can significantly affect the overall performance. The release mechanism uses a nichrome wire to cleave a fiber braid (usually a polymer) thermally. This simple mechanism ensures high effectiveness and reliability. The reliability of the system depends on the amount of contact pressure between the nichrome wire and the polymer braid. Insufficient contact pressure may result in the fiber not cleaving at all. Hence, a repeatable and measurable method to induce the required contact pressure is necessary. The tension in the polymer fiber directly corresponds to the contact pressure between it and the heating element. This paper demonstrates the implementation of a fiber tensioner in a nanosatellite. The tensioner maintains the required tension at all times and is efficient, light, and easily machined. Additionally, the resistor allows for the implementation of a surge generation circuit, which allows the safe dissipation of a large amount of energy in a very short interval of time, maximizing the success rate of the deployment. The energy dissipation rate achieved by such circuits cannot be matched by a nichrome wire setup. This paper presents the design of the mechanism and highlights the advantages of the use of a resistor. It also shows the mechanism test results as a demonstration of the system's reliability, and a comparison between the two systems regarding reliability and points of failure is also performed. The system has high repeatability. The paper presents the design philosophy and process, and the reliability and repeatability test results. The susceptibility of the system to vibrations is also evaluated, and the system calibration techniques and results are presented.
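        The energy-dissipation argument above can be illustrated with assumed component values (none of them from the paper): a small capacitor discharged through a low-value film resistor delivers its energy within tens of milliseconds at several watts of peak power, whereas a nichrome wire driven at a safe continuous current dissipates far less power and must therefore remain in contact longer.

          C = 1000e-6         # F, assumed surge capacitor
          V = 8.0             # V, assumed charged bus voltage
          R_RESISTOR = 10.0   # ohm, assumed carbon-film resistor used as the heater

          surge_energy = 0.5 * C * V**2                 # J stored in the capacitor
          tau = R_RESISTOR * C                          # s, discharge time constant
          peak_power = V**2 / R_RESISTOR                # W at the start of the discharge

          print(f"surge energy  = {surge_energy*1e3:.1f} mJ delivered over ~{5*tau*1e3:.0f} ms")
          print(f"peak power    = {peak_power:.1f} W into the resistor")

          # For comparison, a nichrome wire driven at an assumed safe continuous current
          # dissipates much less power, so it must stay in contact with the braid longer:
          R_NICHROME = 4.0   # ohm, assumed
          I_LIMIT = 0.5      # A, assumed continuous current limit
          print(f"nichrome power = {I_LIMIT**2 * R_NICHROME:.1f} W continuous")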
      • 08.1011 Solving Thermal Control Challenges for CubeSats: Optimizing Passive Thermal Design Jennifer Young (Blue Canyon Technologies) Presentation: Jennifer Young - Friday, March 8th, 10:35 AM - Madison
        The advantages of utilizing CubeSat and Microsatellite buses are numerous and in high demand, but thermal control can pose a significant challenge. High power density, limited radiator size, and limited heater power are among the top concerns when designing a small satellite. This paper will discuss the most common thermal challenges and describe how passive thermal design can be optimized while holding traditional thermal margins. Four CubeSats in low earth orbit with drastically different mission objectives and temperature control requirements will be discussed. The thermal design, test, and flight results will be presented along with lessons learned and considerations for concurrent mechanical, electrical, and thermal design to allow for mission success.
      • 08.1012 Electric Sail Tether Deployment System for CubeSats Michael Tinker (NASA Marshall Space Flight Center) Presentation: Michael Tinker - Friday, March 8th, 11:00 AM - Madison
        An Electric Sail (E-Sail) propulsion system consists of long, thin tethers - positively-charged wires extending radially and symmetrically outward from a spacecraft. Tethers must be biased using a high-voltage power supply to ensure that the solar wind produces thrust. While the E-Sail concept shows great promise for flying heliopause missions with higher characteristic acceleration than solar sails, there are significant technical challenges related to deploying and controlling multiple tethers. A typical full-scale design involves a hub and spoke arrangement of 10 to 100 tethers, each 20 km long. In the last 20 years, there have been multiple space mission failures due to tether deployment and control issues, and most configurations involved a single tether. This paper describes a collaborative effort by Marshall Space Flight Center (MSFC) and Tennessee Tech University to develop and test a simple yet robust single-tether deployment system for a two-6U CubeSat configuration. The project included the following: a) tether dynamic modeling/simulation; b) E-Sail single-tether prototype development and testing in the MSFC Flight Robotics Lab; and c) space environmental effects testing to identify the best materials for further development. These three types of investigation were needed to provide technical rationale for an E-Sail flight demonstration mission that is expected to be proposed for the 2022 timeframe. The project team used an “agile” engineering approach in which E-Sail single-tether prototype designs were iteratively developed and tested to solve problems and identify design improvements. The agile approach was ideal for this low Technology Readiness Level (TRL) project because tether deployer development involved many unknowns in prototype development that could only be discovered through iterative cycles of construction and testing. Agile-based approaches, which originated in the software industry in the early 2000s, have been successfully used in low-TRL hardware development programs in recent years. Extensive modeling and simulation were accomplished for three types of tether deployment: a) rotational, with the primary 6U CubeSat free to rotate in the plane of the MSFC Flight Robotics Lab floor and the secondary 6U free; b) linear, with the primary 6U fixed and a propulsive force applied to the free secondary 6U; and c) linear, with both 6Us free and propulsive forces applied to both (more representative of a bona fide flight experiment). Simulation results were valuable for understanding the propulsive and braking forces needed for controlled tether deployment. This paper describes the evolution, insights, and test/performance data related to the resultant single-tether two-6U E-Sail test article, which has been demonstrated in the MSFC Flight Robotics Lab. The development effort suggests near-term work needed to achieve a useful flight demonstration, and provides ideas for how multiple-tether deployment systems might evolve going forward. A planned next-generation E-Sail prototype will include autonomous propulsive tether deployment while monitoring tether tension, location on the floor, distance between tether ends, acceleration, velocity, and propellant used.
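        As a simple illustration of the deployment dynamics trade mentioned above (propulsive versus braking forces), the toy one-dimensional simulation below integrates the "linear with primary 6U fixed" case; the masses, forces, and tether length are assumed values, not the MSFC models.

          DT = 0.1                 # s, integration step
          MASS = 12.0              # kg, assumed 6U CubeSat mass
          F_PROP = 0.05            # N, assumed propulsive deployment force
          F_BRAKE = 0.15           # N, assumed braking force applied near the end
          L_TARGET = 30.0          # m, assumed tether length for a lab-scale demonstration

          x, v, t = 0.0, 0.0, 0.0
          while x < L_TARGET and t < 600.0:
              # Brake over the last 20% of the deployment to avoid a hard stop.
              force = F_PROP - (F_BRAKE if x > 0.8 * L_TARGET else 0.0)
              v = max(0.0, v + (force / MASS) * DT)
              x += v * DT
              t += DT

          print(f"deployed {x:.1f} m in {t:.0f} s, final speed {v:.2f} m/s")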
    • 08.11 Planetary Exploration Using Small Spacecraft Carolyn Mercer (NASA - Glenn Research Center) & Young Lee (Jet Propulsion Laboratory) & Andrew Petro (NASA - Headquarters)
      • 08.11 8.11 Panel Presentation: - - Gallatin
      • 08.1101 Smallsat Missions Enabled by Paired Low-Thrust Hybrid Rocket and Low-Power Long-Life Hall Thruster Ryan Conversano (), Ashley Karp (Jet Propulsion Laboratory), Nathan Strange (Jet Propulsion Laboratory), Elizabeth Jens (Jet Propulsion Laboratory), Jason Rabinovitch (Jet Propulsion Laboratory, California Institute of Technology) Presentation: Ryan Conversano - Wednesday, March 6th, 04:30 PM - Gallatin
        The recent proliferation of miniaturized spacecraft technologies for smallsats and cubesats has enabled the possibility for sub-300 kg smallsats to perform challenging deep-space missions. Accessing the planets beyond Mars in reasonable times using smallsat-class spacecraft, however, requires innovative propulsion solutions. The pairing of a low-thrust hybrid rocket with a low-power, long-life Hall thruster on a smallsat would provide the capability for both time-critical impulsive maneuvers (orbit departure, orbit insertion, etc.) and precise, long duration applications of thrust (cruise, proximity operations, etc.). Hybrid propulsion technology (chemical propulsion utilizing a solid fuel and gaseous or liquid oxidizer) has been under development at NASA’s Jet Propulsion Laboratory for the past 5 years. The current goal is to provide a specific impulse of ~300 s to enable interplanetary smallsats. Technology development at JPL is currently focused on developing a restartable low-thrust (~50 N) hybrid rocket motor capable of burning for multiple minutes. This low-thrust and long-duration operation is required in order to deliver significant delta-V while ensuring that the smallsat is controllable throughout the burn. The MaSMi (Magnetically Shielded Miniature) Hall thruster under development at JPL is ideally suited for smallsat-class spacecraft. MaSMi is a 150 – 900 W magnetically shielded Hall thruster that has demonstrated >40% total efficiency, and experimentally validated plasma simulations indicate a projected Xe throughput of >100 kg (>10,000 h lifetime). A flight-qualified version of the thruster and cathode has an estimated mass of >3.5 kg, suggesting a sub-10 kg mass for the entire electric propulsion (EP) subsystem (thruster, cathode, power processing unit [PPU], and cabling). With the spacecraft wet mass limited to approximately 300 kg to fit within an ESPA Grande and using a combined hybrid rocket / MaSMi Hall thruster propulsion system, several mission studies will be performed to demonstrate the mission-enabling capabilities of this smallsat architecture. The combined capability of the high-Isp, low-thrust MaSMi and the comparatively high-thrust hybrid motor enables missions otherwise not feasible or achievable in this spacecraft class.
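        A rough sense of the thrust available from a MaSMi-class Hall thruster can be obtained from the standard relation T = 2*eta*P/(g0*Isp), using the power range and efficiency quoted above; the specific impulse below is an assumed value, not a number from the paper.

          G0 = 9.80665         # m/s^2
          ETA = 0.40           # total efficiency (lower bound quoted in the abstract)
          ISP = 1700.0         # s, assumed specific impulse for a low-power Hall thruster

          for power in (150.0, 500.0, 900.0):          # W, operating range quoted above
              thrust = 2.0 * ETA * power / (G0 * ISP)
              print(f"{power:5.0f} W -> ~{thrust*1e3:.1f} mN of thrust")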
      • 08.1102 SmallSat Aerocapture to Enable a New Paradigm of Planetary Missions Alex Austin (Jet Propulsion Laboratory), Adam Nelessen (Jet Propulsion Laboratory), Ethiraj Venkatapathy (NASA ARC), Robert Braun (Georgia Institute of Technology), William Strauss (NASA Jet Propulsion Lab), Robin Beck (NASA - Ames Research Center), Paul Wercinski (), Gary Allen (NASA Ames Research Center), Michael Werner (), Evan Roelke (University of Colorado Boulder) Presentation: Alex Austin - Wednesday, March 6th, 04:55 PM - Gallatin
        This paper presents a technology development initiative focused on delivering SmallSats to orbit a variety of bodies using aerocapture. Aerocapture uses the drag of a single pass through the atmosphere to capture into orbit instead of relying on large quantities of rocket fuel. Using drag modulation flight control, an aerocapture vehicle adjusts its drag area during atmospheric flight through a single-stage jettison of a drag skirt, allowing it to target a particular science orbit in the presence of atmospheric uncertainties. A team from JPL, NASA Ames, and CU Boulder has worked to address the key challenges and determine the feasibility of an aerocapture system for SmallSats less than 180kg. Key challenges include the ability to accurately target an orbit, stability through atmospheric flight and the jettison event, and aerothermal stresses due to high heat rates. Aerocapture is a compelling technology for orbital missions to Venus, Mars, Earth, Titan, Uranus, and Neptune, where eliminating the propellant for an orbit insertion burn can result in significantly more delivered payload mass. For this study, Venus was selected due to recent NASA interest in Venus SmallSat science missions, as well as the prevalence of delivery options due to co-manifesting with potentially many larger missions using Venus for gravity assist flybys. In addition, performing aerocapture at Venus would demonstrate the technology’s robustness to aerothermal extremes. A survey of potential deployment conditions was performed that confirmed that the aerocapture SmallSat could be hosted by either dedicated Venus-bound missions or missions performing a flyby. There are multiple options for the drag skirt, including a rigid heat shield or a deployable system to decrease volume. For this study, a rigid system was selected to minimize complexity. A representative SmallSat was designed to allocate the mass and volume for the hardware needed for a planetary science mission. In addition, a separation system was designed to ensure a clean separation of the drag skirt from the flight system without imparting tipoff forces. The total spacecraft mass is estimated to be 68 kg, with 26 kg of useful mass delivered to orbit for instruments and supporting subsystems. This is up to 85% more useful mass when compared to a propulsive orbit insertion, depending on the orbit altitude. Key to analyzing the feasibility of aerocapture is the analysis of the atmospheric trajectory, which was performed with 3 degree-of-freedom simulations and Monte Carlo analyses to characterize the orbit targeting accuracy. In addition, aerothermal sizing was performed to assess thermal protection system requirements, which concluded that mature TPS materials are adequate for this mission. CFD simulations were used to assess the risk of recontact by the drag skirt during the jettison event. This study has concluded that aerocapture for SmallSats could be a viable way to increase the delivered mass to Venus and can also be used at other destinations. With increasing interest in SmallSats and the challenges associated with performing orbit insertion burns on small platforms, this technology could enable a new paradigm of planetary science missions.
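        The single-stage drag-modulation idea above amounts to a step change in ballistic coefficient at jettison; the short sketch below illustrates the effect with assumed masses and diameters (only the roughly 68 kg total mass comes from the abstract).

          import math

          CD = 1.5                         # assumed drag coefficient for a blunt entry shape
          M_TOTAL = 68.0                   # kg, total spacecraft mass quoted in the abstract
          M_SKIRT = 12.0                   # kg, assumed drag-skirt mass
          D_SKIRT = 1.5                    # m, assumed deployed drag-skirt diameter
          D_BODY = 0.6                     # m, assumed flight-system diameter after jettison

          def beta(mass, diameter):
              """Ballistic coefficient m / (Cd * A) in kg/m^2."""
              area = math.pi * (diameter / 2.0) ** 2
              return mass / (CD * area)

          beta_in = beta(M_TOTAL, D_SKIRT)                    # with the skirt attached
          beta_out = beta(M_TOTAL - M_SKIRT, D_BODY)          # after single-stage jettison
          print(f"beta with skirt: {beta_in:.1f} kg/m^2, after jettison: {beta_out:.1f} kg/m^2")
          print(f"ballistic coefficient increase ~ {beta_out / beta_in:.1f}x")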
      • 08.1104 Stage-Based Electrospray Propulsion System for Deep-Space Exploration with CubeSats Oliver Jia Richards (MIT), Paulo Lozano () Presentation: Oliver Jia Richards - Wednesday, March 6th, 05:20 PM - Gallatin
        Independent deep-space exploration with CubeSats requires a compact and highly efficient propulsion system. The ion Electrospray Propulsion System under development at MIT’s Space Propulsion Laboratory is a high delta-V propulsion system that is compatible with the CubeSat form factor in both size and weight. Electrospray propulsion has been maturing at a rapid pace and is a promising new technology for primary propulsion of an independent deep-space CubeSat mission. However, current electrospray thrusters have demonstrated laboratory lifetimes (500 hours) lower than the firing time required for an electrospray-propelled 3U CubeSat to escape from Earth starting from geostationary orbit (5000 hours). A stage-based approach is proposed in which spent thrusters are ejected from the spacecraft, exposing new thrusters to continue the mission. Such a staging strategy is usually not practical for in-space thrusters. However, the compactness of micro-fabricated electrospray thrusters means that their contribution to the overall spacecraft mass and volume is small relative to other subsystems. Two mechanisms are proposed for the stage-based approach. The first is the staging mechanism itself, which holds successive stages together during flight and separates them at the time of staging. The second is a signal routing mechanism, which passively routes control signals to the active stage and eliminates the need for individual stage addressing. The number of stages required for a mission is analyzed through a simulation of the escape trajectory away from Earth and a survey of near-Earth asteroids as potential targets. The staging mechanism is based on a fuse wire approach. A metal wire connects successive stages and is placed under tension with a compression spring. At the time of staging, current is run through the wire to heat it until it melts. Once the wire melts, the stages are disconnected and the compression spring extends to separate the two stages. The signal routing mechanism is a custom-made, momentary, normally closed push button. When the stage above is present, the contacts in the routing mechanism are held open. Once the stage above is released, the contacts close and designate the stage as active. Since the routing mechanism is mechanical and activated by the staging process, no electrical addressing of individual stages is required. A simulation of the spacecraft’s escape from Earth orbit is used to analyze the required number of stages for escape and provide a numerical escape trajectory that includes the effects of staging. A number of near-Earth asteroids with orbital elements similar to those of Earth are surveyed as potential targets. Estimates for the required delta-V and number of stages to reach each asteroid are determined with the JPL Small-Body Mission-Design tool. Given the combined number of stages, the total mass and volume of the propulsion system are estimated.
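        The arithmetic behind the staging argument is straightforward and is sketched below: with roughly 500 hours of demonstrated thruster lifetime against roughly 5000 hours of required firing time, about ten stages are needed for the Earth-escape phase alone; the asteroid-transfer firing time is an assumed example, not a result from the JPL Small-Body Mission-Design tool.

          import math

          LIFETIME_PER_STAGE_H = 500.0        # demonstrated laboratory lifetime (abstract)
          ESCAPE_FIRING_TIME_H = 5000.0       # required firing time for Earth escape (abstract)

          stages_escape = math.ceil(ESCAPE_FIRING_TIME_H / LIFETIME_PER_STAGE_H)
          print(f"stages needed for Earth escape: {stages_escape}")

          # Assumed post-escape leg to a near-Earth asteroid:
          TRANSFER_FIRING_TIME_H = 2000.0     # assumed additional firing time
          stages_total = stages_escape + math.ceil(TRANSFER_FIRING_TIME_H / LIFETIME_PER_STAGE_H)
          print(f"stages including an assumed asteroid transfer: {stages_total}")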
      • 08.1105 Mars Small Spacecraft Mission Concepts Study Nathan Barba (Jet Propulsion Laboratory), Tom Komarek (Jet Propulsion Laboratory), Ryan Woolley (Jet Propulsion Laboratory), Lou Giersch (NASA Jet Propulsion Lab), Vlada Stamenkovic (JPL), Michael Gallagher (JPL), Charles Edwards (Jet Propulsion Laboratory) Presentation: Nathan Barba - Wednesday, March 6th, 09:00 PM - Gallatin
        NASA's Mars Exploration Program is exploring a potential Mars Sample Return (MSR) campaign consisting of a series of missions over the next decade to return samples collected at Mars for analysis in terrestrial laboratories. In the presence of a large flagship Mars mission, many in the Mars science community would prefer to continue high priority investigations other than the geological and astrobiological investigations provided by MSR. There would be a need to continue these investigations using smaller, affordable missions, while still performing important science as defined in the National Academy of Sciences Decadal Survey, the Mars Exploration Program Analysis Group (MEPAG) goals, and the Human Exploration and Operations (HEO) Strategic Knowledge Gaps. This paper will discuss Mars small spacecraft mission concepts being studied at JPL. The study targets the use of small spacecraft with more science capability than is currently achievable with CubeSats, leading to a spacecraft wet mass of approximately 100 to 350 kilograms. The access to Mars considered in this study includes a self-propelled transit from Earth geosynchronous transfer orbit (GTO) to Mars after launch as a secondary payload in a rideshare configuration. The paper will describe the considered mission concepts, science objectives, mission design, concept of operations, flight system, enabling technologies, and mission cost, along with launch vehicle interfaces. The mission design will examine primarily electric propulsion for the Earth-to-Mars cruise and insertion into a high Mars orbit or encounter with the Martian moons. Other trajectories studied include hyperbolic approach and direct landing on the Martian surface. Both orbiters and landers are studied, covering a wide range of mission concepts such as an areostationary constellation, low Mars orbit insertion by aerocapture, and a high-impact lander with less than 800 g of acceleration. The paper will assess enabling technologies in the areas of propulsion, telecommunications, and scientific payload, which enhance and augment the needed capability of a small spacecraft architecture. The discussion of science objectives and payloads will cover both in-situ and remote sensors, with and without a high-capacity telecommunications relay from Mars surface assets back to Earth. The cost estimates of the mission concepts studied in this paper range from below $100M to less than $300M for development through launch. The paper will conclude with an outline of several examples of small spacecraft mission concepts to Mars that demonstrate significant scientific capability, are technically feasible, and fit within the desired cost cap.
      • 08.1106 Optimized Low-Thrust Transfers from Geostationary Transfer Orbit to Mars Ryan Woolley (Jet Propulsion Laboratory), Zubin Olikara (NASA Jet Propulsion Laboratory) Presentation: Ryan Woolley - Wednesday, March 6th, 09:25 PM - Gallatin
        In the past 5 years, over 50 satellites have been launched from U.S. soil en route to geosynchronous orbit and beyond. Many of these launches do not use the full capability of the launch vehicle, leaving an opportunity for a small secondary spacecraft to get a ride to geosynchronous transfer orbit (GTO). Utilizing advances in solar electric propulsion (SEP), a small spacecraft can make its way from GTO to Mars orbit, where it could perform high-quality science and telecommunications for a very low cost. In this paper, we explore the mission design space for small, ESPA-class (200-300 kg) spacecraft transferring from GTO to Mars orbit (nominally areostationary orbit). In order to construct an optimal mission architecture using SEP, it is essential to employ methods that simultaneously optimize both the flight system and the trajectory design. Low-thrust trajectories are extremely flexible, but they are also very sensitive to driving parameters such as power level, thrust, mass, and time-of-flight (TOF). In fact, specific characteristics of a chosen thruster such as throttle points and rated lifetime can greatly affect the design of a mission. The mission design process consists of an iteration loop between a large database of optimized low-thrust trajectories and parametric models of key spacecraft subsystems. The database is created by sweeping through thruster types and quantities, power levels, mass levels, and flight durations. First, the GTO to Earth departure spiral is parameterized by the effective acceleration and desired time of flight, giving a curve fit of simulated trajectories. Next, MALTO (a low-thrust optimizer) is used to create tens of thousands of trajectories over the full range of combinations. These trajectories start at Earth departure and end after a spiral capture to areostationary orbit. The key characteristics are tabulated in a searchable database. For a given set of thruster, power, and starting mass, a curve is generated relating time-of-flight and propellant used. This curve typically exhibits a strong “knee”, beyond which adding more time does not result in much of a reduction in propellant. These knee points are found algorithmically and stored, thus reducing the number of degrees of freedom. A spacecraft is designed by starting with a desired mass and power in GTO. From there, a trade is made between mass and duration on the outbound spiral. The ending mass is then used to find a suitable trajectory from the MALTO database. This gives a maximum dry mass allocation in the final orbit. The masses of the spacecraft subsystems are then estimated using the parametric models and subtracted from the allocation, yielding the effective usable payload. The process is then repeated, varying power level, TOF, etc., until a maximum usable payload is achieved that meets mission constraints.
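        One common way to locate the "knee" described above is to take the point on the time-of-flight versus propellant curve farthest from the chord joining its endpoints; the sketch below applies that idea to a synthetic curve (the data are not MALTO output).

          import numpy as np

          tof = np.linspace(200.0, 800.0, 61)               # days (synthetic sweep)
          prop = 120.0 + 900.0 / (tof - 150.0)              # kg, synthetic propellant curve

          def knee_index(x, y):
              """Index of the point farthest (perpendicular) from the end-to-end chord."""
              p0 = np.array([x[0], y[0]])
              p1 = np.array([x[-1], y[-1]])
              chord = (p1 - p0) / np.linalg.norm(p1 - p0)
              pts = np.stack([x, y], axis=1) - p0
              # Perpendicular distance = norm of each point's component orthogonal to the chord.
              proj = np.outer(pts @ chord, chord)
              return int(np.argmax(np.linalg.norm(pts - proj, axis=1)))

          k = knee_index(tof, prop)
          print(f"knee at TOF ~{tof[k]:.0f} days, propellant ~{prop[k]:.1f} kg")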
      • 08.1107 Hybrid Propulsion System Enabling Orbit Insertion Delta-Vs within a 12 U Spacecraft Elizabeth Jens (Jet Propulsion Laboratory), Ashley Karp (Jet Propulsion Laboratory), Jason Rabinovitch (Jet Propulsion Laboratory, California Institute of Technology), Barry Nakazono () Presentation: Elizabeth Jens - Wednesday, March 6th, 09:50 PM - Gallatin
        This paper will describe the design and development status of a hybrid CubeSat propulsion system. This system is designed to be packaged within a 12 U envelope and deliver 800 m/s delta-V to a 25 kg spacecraft. The hybrid motor uses green propellants, specifically gaseous oxygen as the oxidizer and solid Poly(Methyl MethAcrylate) (PMMA) as the fuel. This propellant combination is separated by phase and is non-hypergolic, making the design well suited for use within a secondary payload where safety is paramount. The hybrid motor has high performance with an Isp of approximately 300 s and is able to be re-ignited, enabling numerous maneuvers to be conducted. The system also provides thrust vector control during main motor operation and attitude control for reaction wheel un-loading during the science mission. A dedicated test program has been conducted at the Jet Propulsion Laboratory over the past three years to progress this design towards flight and verify the design assumptions. This paper will summarize the results of this test program and the demonstrated performance of the motor. A path to progress this system towards flight shall also be discussed.
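        As a consistency check on the figures quoted above, the ideal rocket equation gives the propellant fraction needed for 800 m/s of delta-V at roughly 300 s of Isp on a 25 kg spacecraft:

          import math

          G0 = 9.80665          # m/s^2
          ISP = 300.0           # s (from the abstract)
          DELTA_V = 800.0       # m/s (from the abstract)
          M_INITIAL = 25.0      # kg wet mass (from the abstract)

          mass_ratio = math.exp(DELTA_V / (G0 * ISP))
          m_prop = M_INITIAL * (1.0 - 1.0 / mass_ratio)
          print(f"mass ratio = {mass_ratio:.3f}, propellant ~ {m_prop:.1f} kg of the 25 kg wet mass")

        With these numbers the motor must carry roughly 6 kg of propellant, which is plausible within a 12U envelope and consistent with the delta-V claim above.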
      • 08.1109 On the Development of CubeSat Swarm Technologies for Planetary Exploration Matt Sorgenfrei (Stinger Ghaffarian Technologies) Presentation: Matt Sorgenfrei - Thursday, March 7th, 10:10 AM - Dunraven
        Over the past decade the aerospace community has seen an ever-increasing pace of CubeSat-class spacecraft performing a wide range of activities in low Earth orbit, from simple technology demonstrations to global Earth imaging. These successes have in many ways lowered the barrier to entry for future applications of CubeSats, including dedicated planetary exploration activities. As demonstrated by the recent MarCO mission, CubeSats can be co-manifested with larger spacecraft traveling to targets in the solar system with little impact to the primary mission. These small companions can offer important value to an overall planetary exploration program by providing additional data collection of all kinds, despite being developed for a comparatively small amount of money. One compelling future application is that of swarms of CubeSats operating around a planetary body, due in part to their ability to collect temporally and spatially separated measurements. Examples of intriguing applications include mapping future robotic landing sites on the surface of Europa, further characterization of the distribution of Methane in the Martian atmosphere, or radio occultation around any one of a number of planetary bodies. However, before such missions can become a reality a variety of technologies must first be developed and tested in a relevant environment. This can be challenging due in part to the environmental constraints levied on a spacecraft when operating beyond Earth orbit and due to the unique challenges of communicating with a small, secondary payload via the Deep Space Network. This paper will describe a test campaign currently underway at NASA Ames Research Center, the goal of which is to demonstrate a variety of technologies which will enable future CubeSat-class planetary exploration missions. Based on the testing experience garnered during the BioSentinel program, NASA Ames is seeking to address key technology drivers for continued use of CubeSats in deep space, including issues such as proximity operations of members of a swarm, cross-link communications, and real-time sharing or reallocation of science operations. This work is being conducted in the Generalized Nanosatellite Avionics Testbed (G-NAT) Lab at NASA Ames, a collaboration between the Mission Design Division and the Exploration Systems Directorate. The G-NAT lab is equipped with dual testbeds that enable hardware and software testing of systems associated with spacecraft attitude determination and control, and as will be described herein these testbeds are being applied to the various problems associated with swarms of spacecraft operating in deep space. In this work a variety of lessons learned from environmental and mission simulation testing for the BioSentinel mission will be applied to a case study of a swarm of CubeSats performing mapping operations at Europa. BioSentinel, a six-unit CubeSat that will operate beyond the Van Allen Belts, makes use of a command and telemetry structure that anticipates the challenges of operating in deep space, and a number of hardware tests are undertaken in the G-NAT lab using this same infrastructure. Ultimately, this work will demonstrate a future path for CubeSat-class planetary exploration.
      • 08.1110 InSight/MarCO Opportunistic Multiple Spacecraft per Antenna (OMSPA) Demonstration Andre Tkacenko (Jet Propulsion Laboratory) Presentation: Andre Tkacenko - Thursday, March 7th, 10:35 AM - Dunraven
        As smallsats become increasingly capable, longer-lived, and have more secondary payload launch opportunities to beyond-GEO destinations, they are expected to play an increasing role in deep space science investigations. This expectation is borne out by several relatively recent NASA Science Mission Directorate solicitations regarding smallsat studies and small innovative missions. With the potential for these smallsats to substantially add to the number of spacecraft operating in deep space, we need to be thinking about ways to support communications with all of them without the huge expense of trying to build a commensurate number of deep space antennas. One approach to this challenge might involve making greater use of beam sharing techniques that allow all the spacecraft within the beamwidth of a single ground antenna to simultaneously downlink to the antenna. One of these techniques, Opportunistic Multiple Spacecraft Per Antenna (OMSPA), may be particularly suited to smallsats. In the concept for this technique, smallsats within the scheduled ground antenna beam of some other spacecraft, make opportunistic use of that spacecraft’s beam by transmitting “open-loop” to a recorder associated with the antenna. These transmissions get captured on the recorder and can be later retrieved, demodulated, and decoded so that the smallsats can recover their data – all w