Methods and systems disclosed herein relate generally to accessing and importing data, and more particularly to accessing Network Common Data Format (NetCDF) datasets and importing those datasets into geospatial map displays.
NetCDF is an Application Programming Interface (API) that is used to manage array-based scientific data in a machine-independent format. NetCDF files typically store a variety of data types, such as single-point observations, time series, regularly spaced grids, and satellite or radar images. The API is intended to provide a common data access method for all applications that generate and/or make use of NetCDF data.
Oceanographers plan and monitor the use of underwater gliders to collect environmental data and use numerical model forecasts stored in NetCDF files to study the effects of environmental forces on glider missions. The datasets provided within a numerical model forecast range from two-dimensional scalar to four-dimensional vector information. Physical aspects provided in these datasets that affect vehicle mission planning and monitoring include temperature, salinity, and current magnitude and direction. The ability to import these datasets into a Geospatial Information System (GIS) for analysis is crucial to the safe and successful operation of these vehicles.
Currently, users load datasets one file at a time and one layer at a time for every time slice. Each data layer then needs to be custom clipped if only a subset is needed. Custom calculations have to be made manually. Further, the data display is disorganized and crowds the root level of the table of contents, hindering navigation of those data layers for viewing and analysis. Also, any tooltip information is limited to a single attribute field within the dataset. Finally, if the data are viewed again later, the entire process has to be repeated.
Accordingly, there is a need for a method and system that provide (1) automatic extraction of each two-dimensional slice of data for each depth at every time interval, (2) subset area determination when needed, (3) automatic organization of the information into a manageable tree structure, as in many cases more than 1,000 layers could be imported per dataset, and (4) automatic calculation of a time-depth average for all of the imported layers.
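The extraction, organization, and averaging steps enumerated above can be illustrated with a minimal sketch. The sketch below assumes a four-dimensional variable laid out as time x depth x latitude x longitude; the variable name "temperature", the toy data values, and the nested-list representation are illustrative assumptions and do not depict the NetCDF API or any particular GIS implementation.

```python
# A toy 4D dataset: 2 time steps, 2 depths, and a 2x2 lat/lon grid.
# Real forecast data would be read from a NetCDF file instead.
temperature = [
    [  # time step 0
        [[10.0, 11.0], [12.0, 13.0]],  # depth level 0
        [[ 8.0,  9.0], [10.0, 11.0]],  # depth level 1
    ],
    [  # time step 1
        [[12.0, 13.0], [14.0, 15.0]],  # depth level 0
        [[10.0, 11.0], [12.0, 13.0]],  # depth level 1
    ],
]

def extract_slices(var4d):
    """Step (1)/(3): pull out every 2D (lat x lon) slice and organize the
    slices into a {time: {depth: slice}} tree rather than a flat layer list."""
    tree = {}
    for t, per_depth in enumerate(var4d):
        tree[t] = {d: slice2d for d, slice2d in enumerate(per_depth)}
    return tree

def time_depth_average(tree):
    """Step (4): element-wise mean over all time/depth slices in the tree,
    yielding one 2D layer summarizing the whole dataset."""
    slices = [s for depths in tree.values() for s in depths.values()]
    rows, cols = len(slices[0]), len(slices[0][0])
    n = len(slices)
    return [[sum(s[r][c] for s in slices) / n for c in range(cols)]
            for r in range(rows)]

tree = extract_slices(temperature)   # tree[time][depth] -> 2D slice
avg = time_depth_average(tree)       # -> [[10.0, 11.0], [12.0, 13.0]]
```

A practical system would populate the same kind of tree from each imported NetCDF variable, so that every depth level at every time interval is addressable without reloading the file.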