- API data.nasa.gov | Last Updated 2018-07-19T10:10:34.000Z
To solve the problem of autonomous navigation on small satellite platforms of less than 20 kg, we propose to develop an onboard orbit determination receiver for small LEO satellites that lack a stable Attitude Determination and Control System (ADCS), continuous GPS coverage, or ground tracking. The system is a refinement of existing spaceborne receiver technology built around a new, innovative collective detection and direct positioning algorithm developed by Dr. Penny Axelrad, a reduced set of GPS hardware, and a compact orbit propagator. The small satellite collective orbit determination receiver (SCOR) brings together efficient reference orbit representations, snapshot GPS sampling, collective detection and direct positioning, and modular orbit propagation methods to produce an effective new approach for onboard support of small satellites. Since the collective detection algorithm does not require continuous GPS tracking to generate navigation solutions, portions of the receiver can be duty cycled to reduce power consumption between measurements. Additionally, this approach allows satellites without pointing capabilities to obtain sufficient measurements to generate solutions by taking multiple snapshots when the spacecraft attitude is in a favorable orientation with respect to the GPS constellation.
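The core idea of collective detection can be illustrated with a toy sketch (an assumed, greatly simplified analogue, not the actual SCOR algorithm): rather than acquiring each GPS satellite individually, correlation power from all visible satellites is summed over a grid of candidate receiver positions, and the peak of the combined power surface gives the position estimate. The Gaussian correlator-peak surrogate and the 30 m width below are illustrative assumptions.

```python
import numpy as np

def collective_detect(candidates, sat_positions, measured_ranges, sigma=30.0):
    """Score each candidate position by summed pseudo-correlation power.

    candidates      : (N, 3) array of candidate receiver positions
    sat_positions   : (M, 3) array of GPS satellite positions
    measured_ranges : (M,) array of measured ranges to each satellite
    """
    scores = np.zeros(len(candidates))
    for i, x in enumerate(candidates):
        for sat, rho in zip(sat_positions, measured_ranges):
            predicted = np.linalg.norm(sat - x)
            # Gaussian-shaped surrogate for a correlator power peak:
            # a candidate consistent with ALL satellites accumulates
            # near-maximal power even if no single signal is strong.
            scores[i] += np.exp(-0.5 * ((predicted - rho) / sigma) ** 2)
    # Direct positioning: the best candidate is the argmax of combined power
    return candidates[np.argmax(scores)]
```

Because weak contributions from many satellites add up, this style of search can succeed where conventional per-satellite acquisition fails, which is what permits snapshot sampling and duty cycling.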
- API data.nasa.gov | Last Updated 2018-07-19T06:59:14.000Z
The research focus in the field of remotely sensed imagery has shifted from the collection and warehousing of data, tasks for which a mature technology already exists, to the auto-extraction of information and knowledge discovery from this valuable resource, tasks for which technology is still under active development. In particular, intelligent algorithms for the analysis of very large rasters, whether high-resolution images or medium-resolution global datasets, which are becoming more and more prevalent, are lacking. We propose to develop the Geospatial Pattern Analysis Toolbox (GeoPAT), a computationally efficient, scalable, and robust suite of algorithms that supports GIS processes such as segmentation, unsupervised/supervised classification of segments, query and retrieval, and change detection in giga-pixel and larger rasters. At the core of the technology that underpins GeoPAT is the novel concept of pattern-based image analysis. Unlike pixel-based or object-based (OBIA) image analysis, GeoPAT partitions an image into overlapping square scenes containing 1,000 to 100,000 pixels and performs further processing on those scenes using pattern signatures and pattern similarity, concepts first developed in the field of Content-Based Image Retrieval. This fusion of methods from two different areas of research yields an orders-of-magnitude performance boost on very large images without sacrificing the quality of the output. GeoPAT v.1.0 already exists as a GRASS GIS add-on that has been developed and tested on medium-resolution continental-scale datasets, including the National Land Cover Dataset and the National Elevation Dataset. The proposed project will develop GeoPAT v.2.0, a much improved and extended version of the present software. We estimate an overall entry TRL for GeoPAT v.1.0 of 3-4 and a planned exit TRL for GeoPAT v.2.0 of 5-6. Moreover, several important new functionalities will be added.
Proposed improvements include conversion of GeoPAT from a GRASS add-on to stand-alone software capable of being integrated with other systems, full implementation of a web-based interface, new modules extending its applicability to high-resolution images/rasters and medium-resolution climate data, extension to the spatio-temporal domain, enabling of hierarchical search and segmentation, development of improved pattern signatures and their similarity measures, parallelization of the code, and implementation of a divide-and-conquer strategy to speed up selected modules. The proposed technology will contribute to a wide range of Earth Science investigations and missions by enabling the extraction of information from diverse types of very large datasets. Analyzing an entire dataset without the need to sub-divide it due to software limitations offers the important advantages of uniformity and consistency. We propose to demonstrate the GeoPAT technology on two specific applications. The first is a web-based, real-time, visual search engine for local physiography using query-by-example on the entire, global-extent SRTM 90 m resolution dataset. The user selects a region where a process of interest is known to occur, and the search engine identifies other areas around the world with similar physiographic character and thus potential for a similar process. The second is monitoring urban areas in their entirety at high resolution, including mapping of impervious surfaces and identification of settlements for improved disaggregation of census data.
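The pattern-based analysis described above can be sketched in a few lines (assumed simplification: scene signatures here are plain class histograms and similarity is Jensen-Shannon distance; GeoPAT's actual signatures and measures are more sophisticated). An image is tiled into overlapping square scenes, each scene is summarized by a signature, and query-by-example reduces to nearest-signature search.

```python
import numpy as np

def scene_signatures(raster, scene=4, step=2, n_classes=4):
    """Slide an overlapping square window over a categorical raster and
    return one normalized class histogram (the pattern signature) per scene."""
    sigs = []
    for r in range(0, raster.shape[0] - scene + 1, step):
        for c in range(0, raster.shape[1] - scene + 1, step):
            block = raster[r:r + scene, c:c + scene]
            hist = np.bincount(block.ravel(), minlength=n_classes).astype(float)
            sigs.append(hist / hist.sum())
    return np.array(sigs)

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance between two signatures (0 = identical patterns)."""
    m = 0.5 * (p + q)
    def kl(a, b):
        a = np.clip(a, eps, None)
        b = np.clip(b, eps, None)
        return np.sum(a * np.log(a / b))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))
```

Because each scene is reduced to a short signature, a giga-pixel raster becomes a modest table of vectors, which is the source of the performance gain relative to per-pixel processing.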
- API data.nasa.gov | Last Updated 2018-07-19T09:20:29.000Z
Flight-testing is a crucial component in NASA's mission to research and develop new aeronautical concepts because it allows for verification of simulated and wind-tunnel experiments and exposes previously unforeseen design problems. Video is an invaluable tool for flight-testing, allowing the collection of a wealth of information such as craft position, speed, health, as well as tracking different phases of flight, capturing events, extracting performance figures, and documenting historical flights. For several cases of interest (high-speed/high-altitude aircraft, lakebed remote landings, vehicle re-entry, smoke airflow traces, etc.) it is not feasible or physically possible to install external cameras close to the aircraft whose behavior is being filmed. Long-range imaging equipment is typically used in these cases, but the captured footage is severely limited in quality by atmospheric effects, which are often the dominating source of image degradation, long before diffraction-related limitations occur. In consequence, long-range imagery typically suffers from scintillation, blurring, poor spatial resolution, and low contrast. Since these problems result from atmospheric conditions, they cannot be overcome by simply improving imaging hardware. What is needed is a solution to combat atmospheric distortion. In Phase I, EM Photonics demonstrated a signal processing technique based on initial research from Lawrence Livermore National Laboratory. We modified and implemented this core algorithm and showed its ability to enhance imagery collected from the long-range imaging systems at NASA DFRC. In Phase II, we will evolve and integrate the prototype components developed in Phase I and deliver an image enhancement device capable of running in real time to mitigate the image distortion present in data collected from NASA DFRC's long-range cameras.
A 200 MHz Bandwidth, 4096 Spectral Channels, 3 W Power Consumption, Digital Auto-Correlation Spectrometer Chip for Spaceborne Microwave Radiometers, Phase I - data.nasa.gov | Last Updated 2018-07-19T20:24:19.000Z
NASA's program for Exploration of the Solar System requires high-resolution microwave spectrometers for the analysis of the chemical composition and physical properties of solar system atmospheres. The anticipated result of the proposed R/R&D effort (Phase I and II), if the project is successful, is to demonstrate experimentally the first digital auto-correlation spectrometer on a single chip for spaceborne microwave radiometers with the following important characteristics: (a) a bandwidth of 200 MHz, (b) 4096 spectral channels for high-resolution spectroscopy, (c) less than 3 W power consumption, (d) a mass of less than 800 grams, and (e) a space-qualifiable design and fabrication technology. The innovative approach proposed for achieving these significant objectives consists of a synergistic combination of the following: (a) a unique parallel architecture that will reduce the operating clock frequency, relative to a single-stream architecture, by a factor of 2 and consequently lower the power consumption significantly, (b) novel differential analog and digital circuits that will improve robustness while operating in the presence of the total-dose natural radiation found in the space environment, and (c) an advanced 0.13 um CMOS fabrication process available from IBM for manufacturing high-performance, low-power, reliable, and robust (total-dose radiation and latch-up resistant) space-qualifiable chips.
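The operating principle of an auto-correlation spectrometer follows the Wiener-Khinchin theorem: the chip accumulates lagged products of the digitized signal, and the power spectrum is obtained as the Fourier transform of that autocorrelation. A minimal numerical sketch (the sample counts and lag count below are illustrative, not the proposed chip's parameters):

```python
import numpy as np

def autocorr_spectrum(x, n_lags):
    """Estimate a power spectrum the auto-correlation-spectrometer way:
    accumulate the autocorrelation over n_lags, then Fourier transform it.

    The number of lags sets the number of spectral channels; on the chip,
    each lag corresponds to one hardware multiply-accumulate cell.
    """
    n = len(x)
    acf = np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(n_lags)])
    # Wiener-Khinchin: power spectrum = Fourier transform of the ACF
    return np.abs(np.fft.rfft(acf))
```

A tone at normalized frequency 0.25 cycles/sample should then appear at channel n_lags/4, which is easy to verify numerically.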
A Compact In Situ Sensor for Measurement of Absorption and Backscattering in Natural Waters, Phase I - data.nasa.gov | Last Updated 2018-07-19T09:37:02.000Z
We propose to develop an active sensor for in situ measurement of the inherent optical properties (IOPs) absorption and backscattering at multiple wavelengths. Multi- or hyper-spectral absorption of particles and dissolved materials is routinely measured in the laboratory and in situ in order to characterize, for example, the quantities and types of phytoplankton based on concentrations of specific absorbing pigments. Similarly, backscattering is employed to estimate the concentration of suspended material. Measurements of absorption and backscattering concurrently, and at multiple wavelengths, are useful as proxies for biogeochemical measurements such as particle composition, concentration of particulate organic carbon, and particle size distribution, as well as for remote sensing calibration and validation. The current state of the art for phytoplankton observation using optical sensors on autonomous platforms relies on linking biomass with optical backscattering and chlorophyll. The ability to quantify phytoplankton using absorption not only overcomes limitations of backscattering and fluorescence-based approaches, but multi-spectral (visible wavelength) measurements of absorption also provide the means to discern the presence of accessory pigments and pigment packaging, ultimately leading to not only improvements in phytoplankton biomass estimates, but also the potential for resolving phytoplankton functional types. Briefly, the proposed sensor emits a collimated beam of light into the water and measures the backscattered light as a radial function from the beam location. An inversion algorithm is then used to convert this backscattered intensity as a function of distance from the beam to the inherent optical properties absorption and backscattering. Multiple source wavelengths are used and the sensor is packaged in a compact, flat-faced geometry easing integration into autonomous platforms.
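The flavor of the radial inversion can be conveyed with a deliberately simple toy model (an assumption for illustration only: a single-exponential decay of backscattered intensity with distance from the beam, with one lumped attenuation coefficient; the sensor's actual inversion separating absorption from backscattering at multiple wavelengths is more involved).

```python
import numpy as np

def fit_radial_decay(r, intensity):
    """Recover (I0, k) from the toy model I(r) = I0 * exp(-k * r), where r is
    radial distance from the beam and k is a lumped attenuation coefficient
    that grows with the water's absorption and backscattering.

    A least-squares line fit in log space gives both parameters at once."""
    slope, log_i0 = np.polyfit(r, np.log(intensity), 1)
    return np.exp(log_i0), -slope
```

In practice one such profile would be inverted per source wavelength, which is what yields the multi-spectral IOP estimates described above.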
- API data.nasa.gov | Last Updated 2018-07-19T13:04:23.000Z
A key component of NASA's human exploration programs is a system that monitors the health of the crew during space missions. The wearable physiological monitor proposed by Linea Research Corporation can be used to continuously observe beat-to-beat blood pressure. The monitor can be used to observe the physiological effect of various countermeasures against prolonged exposure to reduced gravitational environments. The proposed device will also allow monitoring of pharmacological effects on blood pressure over prolonged periods. Currently, beat-to-beat monitoring of blood pressure is done primarily in hospital settings through invasive procedures involving percutaneous insertion of catheters into the radial or brachial arteries. While non-invasive beat-to-beat blood pressure monitors based on either the Penaz method or arterial applanation tonometry are currently available, they each have limitations. In addition, all such monitors are based on large stationary equipment that requires the subject to be immobile. Successful implementation of the proposed program will result in an accurate, wearable beat-to-beat blood pressure measurement system.
- API data.nasa.gov | Last Updated 2018-07-19T07:22:11.000Z
At present, UAVs used in environmental monitoring mostly collect low-spectral-resolution imagery, capable of retrieving canopy greenness or properties related to water stress. We propose a UAV-based capability for accurate measurement of spectral reflectance, with the high temporal frequency and stability needed to depict diurnal/seasonal cycles in vegetation function. We will test our approaches first using spatially resolved discrete point measurements characterizing VNIR reflectance and solar-induced fluorescence in Y1, followed in Y2 by imaging spectroscopy. The ultimate goal is to produce science-quality spectral data from UAVs suitable for scaling ground measurements and for comparison against airborne or satellite sensors. Because of the potential for rapid deployment, spatially explicit data from UAVs can be acquired irrespective of many of the cost, scheduling, and logistic limitations of satellite or piloted-aircraft missions. Provided that the measurements are suitably calibrated and well characterized, this opens up calibration/validation opportunities not currently available. There is considerable interest in UAVs from the agricultural and forestry industries, but there is a need to identify a workflow that yields calibrated comparisons through space and time. This will increase the likelihood that UAVs become economically feasible for applied and basic science, as well as land management. We target the consistent retrieval of calibrated surface reflectance, as well as biological parameters including chlorophyll fluorescence, photosynthetic capacity, nutrient and chlorophyll content, specific leaf area, and leaf area index, all important to vegetation monitoring and yield. Scientifically, deployment of UAV sensors at sites such as flux towers will facilitate more frequent (e.g. 
within-day) and spatially comprehensive assessment of vegetation physiology and function within tower footprints than is possible by foot, from sensors fixed to the tower, or from irregular aircraft missions. We propose a rapid data assimilation and delivery system, based on past SensorWeb efforts, to move calibrated reflectance data and derived retrievals directly from the UAV to users. We will utilize SensorWeb functionalities to strategically run data-gathering campaigns that optimize data yield. We also propose a mission deployment system that optimizes flight paths based on real-time in-flight data processing to enable effective data collection strategies. All spectral data will also be uploaded to NASA's in-development EcoSIS online spectral library, and we will employ a cloud system to manage the intermediate products. Ultimately, we will demonstrate the acquisition of science-grade spectral measurements from UAVs to advance the use of UAVs in remote sensing beyond the current state of application, providing measurements of a quality comparable to those from handheld instruments or well-calibrated air- and spaceborne systems. A key benefit is that UAV collections at 10-150 m altitude bridge the gap between ground/proximal measurements and airborne measurements typically acquired at 500 m and higher, allowing better linkage of comparable measurements across the full range of scales from the ground to satellites. 
This proposal is directly responsive to the AIST NRA in that it: bridges the gap in Earth observation between field and airborne measurements; reduces risk to NASA by developing methods to make well-characterized measurements from UAVs for integration, calibration, and validation of NASA satellite and airborne data; makes use of a data delivery system in which measurements and derived products are rapidly distributed to users; and provides spatially explicit data of calibrated reflectance and vegetation traits at temporal and spatial scales not currently available. We submit under the Core Topic area 'Operations Technologies', and the work is applicable to the 'Ecological Forecasting' subtopic. We will enter at TRL 3 and exit at TRL 5.
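One common way to turn raw UAV sensor counts into calibrated surface reflectance, of the kind targeted above, is the empirical line method (the proposal does not name this specific method, and the panel values used in any example are hypothetical): digital numbers (DN) observed over reference panels of known reflectance define a per-band linear mapping from DN to reflectance.

```python
import numpy as np

def empirical_line(panel_dn, panel_reflectance, scene_dn):
    """Calibrate one spectral band with the empirical line method.

    panel_dn          : DN values observed over reference panels
    panel_reflectance : laboratory-measured reflectance of those panels
    scene_dn          : raw DN values from the scene to calibrate

    Fits reflectance = gain * DN + offset from the panels, then applies
    that mapping to the scene pixels.
    """
    gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)
    return gain * scene_dn + offset
```

Repeating the fit per band and per flight is what gives comparisons that hold through space and time, the workflow property the proposal calls for.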
- API data.nasa.gov | Last Updated 2018-07-19T18:41:42.000Z
In the Phase II effort, Intelligent Automation Inc. (IAI) and the University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and future high performance computing activities that will include: (1) dense and unsymmetric matrix problems faced in space aviation, as well as problems in thermally driven structural response and radiation exchange; (2) implicit solution algorithms with production models and benchmarks for indefinite matrices and pathological cases; (3) configuration scaling for large systems in shared, distributed, and mixed memory conditions; (4) documentation of the strengths, weaknesses, and limitations of the toolkits used, together with recommendations; and (5) precision and round-off studies on serial and parallel machines, comparing solutions on serial and parallel hardware and studying wall-clock performance with respect to the number of processors. In Phase I we successfully demonstrated that we can accurately and precisely benchmark run-time solvers of dense complex matrices on hybrid-distributed memory architectures. We achieved highly scalable, super-linear speed-up and scalability of the algorithm for large problem sizes. The tools developed in Phase II will greatly improve the performance and efficiency of adapting the benchmarks to HPC systems with different hardware architectures at NASA facilities and for non-NASA commercial applications.
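The scalability metrics referenced above are standard: parallel speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p, computed from measured wall-clock times (the timing numbers in any example below are hypothetical). "Super-linear" speed-up means S(p) > p, typically caused by cache effects as per-processor working sets shrink.

```python
def speedup_efficiency(t_serial, t_parallel, n_procs):
    """Compute parallel speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p
    from measured wall-clock times.  E(p) > 1.0 indicates super-linear scaling."""
    s = t_serial / t_parallel
    return s, s / n_procs
```

For example, a run that takes 100 s serially and 20 s on 4 processors has S = 5.0 and E = 1.25, i.e. super-linear scaling.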
- API data.nasa.gov | Last Updated 2018-07-19T11:01:51.000Z
Development of a nonlinear particle filter for engine performance monitoring is proposed. The approach employs NASA's high-fidelity C-MAPSS40K engine model as the central element, and addresses the lack of observability of some of the engine health parameters in previous Kalman filter formulations. The proposed approach requires neither linearity of the dynamics nor Gaussian noise assumptions for satisfactory operation. The feasibility of real-time implementation will be demonstrated using commercial, off-the-shelf General Purpose Graphical Processing Units. The Phase I feasibility demonstration will show that the particle filter formulation of the engine performance monitoring system can overcome the limitations of previously employed approaches. Phase II research will develop a prototype implementation for hardware-in-the-loop simulations and eventual flight testing.
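A minimal bootstrap particle filter conveys why this formulation tolerates nonlinear dynamics and non-Gaussian noise (this is a generic scalar-state sketch, not the C-MAPSS40K formulation; the Gaussian measurement likelihood below is an illustrative assumption). Each particle is propagated through the dynamics as-is, so no linearization is ever required; the per-particle independence is also what makes the method a natural fit for GPU parallelization.

```python
import numpy as np

def particle_filter_step(particles, weights, measurement, f, h,
                         process_std, meas_std, rng):
    """One predict/update/resample cycle of a bootstrap particle filter.

    f : state transition function (may be arbitrarily nonlinear)
    h : measurement function (may be arbitrarily nonlinear)
    """
    # Predict: push every particle through the dynamics, add process noise
    particles = f(particles) + rng.normal(0.0, process_std, size=particles.shape)
    # Update: reweight by the measurement likelihood (Gaussian here for simplicity)
    weights = weights * np.exp(-0.5 * ((measurement - h(particles)) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample (multinomial) to avoid weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

The state estimate at any time is simply the weighted particle mean; health-parameter estimates with poor observability manifest as a particle cloud that stays spread out rather than collapsing.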
Revolutionize Propulsion Test Facility High-Speed Video Imaging with Disruptive Computational Photography Enabling Technology - data.nasa.gov | Last Updated 2018-07-19T08:36:09.000Z
Advanced rocket propulsion testing requires high-speed video recording that can capture essential information for NASA during rocket engine flight certification ground testing. While it is important to assess all anomalies during testing, this is particularly true in the event of a mishap. The video recording in use today at NASA's Stennis Space Center (SSC) is significantly outdated and in need of the revolutionary approach being proposed. The current system has poor resolution and records to VHS tapes that are no longer commercially available. The system has been partially upgraded by incorporating consumer-grade digital cameras, but these cameras have significant limitations, including plume saturation and on-board memory storage, which make it nearly impossible, in catastrophic situations that result in the loss of a camera, to obtain critical information. This project will design and build a state-of-the-art high-speed video recording system using disruptive technologies based on emerging advances made in the field of computational photography. This system will not only provide quality, high-speed, 3-D high dynamic range video to the SSC engine test complex, but the technologies developed will be extendable to other NASA priorities including launch monitoring and space-based rover and robotics missions.

This project will design and build a novel state-of-the-art high-speed video recording system to provide 3-D High Dynamic Range (HDR) video imagery for operational use on the SSC engine test stands. The system will leverage newly emerging algorithms being developed within the computational photography discipline. Computational photography expands digital photography by applying computational image capture, processing, and manipulation techniques to improve image quality. HDR imaging effectively increases a camera's dynamic range and eliminates saturation. In contrast with current imaging techniques, which often utilize either multiple cameras or a single camera with multiple exposure sequencing, the transformative approach will be implemented at the chip level using a single camera, which significantly reduces cost and implementation complexity. Three such cameras will provide multiple views, enabling high-speed 3-D HDR imagery, important for a more robust analysis.
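The basic computational-photography operation behind eliminating saturation is multi-exposure HDR fusion, sketched below (a generic software illustration; as noted above, the proposed system performs HDR capture at the chip level with a single camera rather than fusing separate software exposures). Each frame is divided by its exposure time to estimate scene radiance, and a hat-shaped weight de-emphasizes under- and over-exposed pixels.

```python
import numpy as np

def fuse_exposures(images, exposure_times):
    """Merge differently exposed 8-bit frames into one radiance map.

    For each pixel, radiance is estimated as value/exposure_time per frame,
    then the estimates are averaged with a hat weight that trusts mid-range
    pixel values and discounts near-0 (underexposed) and near-255
    (saturated, e.g. inside a rocket plume) values.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        v = img.astype(float)
        w = 1.0 - np.abs(v / 255.0 - 0.5) * 2.0   # hat weight: 1 at mid-gray
        w = np.clip(w, 1e-3, None)                 # keep every pixel usable
        num += w * v / t                           # weighted radiance estimate
        den += w
    return num / den
```

A short exposure keeps the plume unsaturated while a long exposure preserves dark structures, and the fusion retains both, which is the dynamic-range extension described above.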