geotrellis.spark.io.hadoop.HadoopGeoTiffRDD
This case class contains the various parameters one can set when reading RDDs from Hadoop using Spark.

tiffExtensions: Read all files with an extension contained in the given list.
crs: Override the CRS of the input files. If None, the reader will use each file's original CRS.
timeTag: Name of the TIFF tag containing the timestamp for the tile.
timeFormat: Pattern for java.time.format.DateTimeFormatter to parse timeTag.
maxTileSize: Maximum allowed size of each tile in the output RDD. A single input GeoTiff may be split among multiple records if it exceeds this size. If no maximum tile size is specified, each file is read in full.
numPartitions: How many partitions Spark should create when it repartitions the data.
chunkSize: How many bytes should be read in at a time.
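A minimal usage sketch of these options. It assumes a running SparkContext and the field names of HadoopGeoTiffRDD.Options as described above; the path, EPSG code, and size values are illustrative, not defaults:

```scala
import geotrellis.proj4.CRS
import geotrellis.raster.Tile
import geotrellis.spark.io.hadoop._
import geotrellis.vector.ProjectedExtent
import org.apache.hadoop.fs.Path
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

// Assumes an active SparkContext is already in scope.
implicit val sc: SparkContext = ???

// Configure the read: only .tif/.tiff files, override the CRS to
// EPSG:3857, and split any GeoTiff whose tiles would exceed 256x256
// pixels across multiple records.
val options = HadoopGeoTiffRDD.Options(
  tiffExtensions = Seq(".tif", ".tiff"),
  crs = Some(CRS.fromEpsgCode(3857)),
  maxTileSize = Some(256),
  numPartitions = Some(100)
)

// Read all matching GeoTiffs under the given path into a spatial RDD.
val rdd: RDD[(ProjectedExtent, Tile)] =
  HadoopGeoTiffRDD.spatial(new Path("hdfs://namenode/data/rasters"), options)
```

Leaving maxTileSize unset reads each file as a single record, which can create very large partitions; setting it lets Spark distribute work more evenly across the cluster.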