
geotrellis.spark.partition

SpacePartitioner

case class SpacePartitioner[K](bounds: Bounds[K])(implicit evidence$1: Boundable[K], evidence$2: ClassTag[K], index: PartitionerIndex[K]) extends Partitioner with Product with Serializable

Linear Supertypes
Product, Equals, Partitioner, Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new SpacePartitioner(bounds: Bounds[K])(implicit arg0: Boundable[K], arg1: ClassTag[K], index: PartitionerIndex[K])
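
    A minimal construction sketch, assuming GeoTrellis' SpatialKey and KeyBounds types and the default implicit PartitionerIndex[SpatialKey] (a Z-curve based index) that GeoTrellis provides; the concrete bounds values are illustrative:

        import geotrellis.spark._            // SpatialKey, KeyBounds, Boundable instances
        import geotrellis.spark.partition._  // SpacePartitioner

        // Bounds covering a 512 x 512 grid of SpatialKeys.
        val bounds = KeyBounds(SpatialKey(0, 0), SpatialKey(511, 511))

        // Boundable[SpatialKey], ClassTag[SpatialKey] and the default
        // PartitionerIndex[SpatialKey] are all resolved implicitly.
        val partitioner = SpacePartitioner[SpatialKey](bounds)

        partitioner.numPartitions  // number of index regions covered by the bounds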

Value Members

  1. def apply[V, M](rdd: RDD[(K, V)] with Metadata[M])(implicit arg0: ClassTag[V], arg1: GetComponent[M, Bounds[K]]): RDD[(K, V)] with Metadata[Bounds[K]]

    Use this partitioner as a partitioner for rdd.

    Use this partitioner as a partitioner for rdd. The rdd may already have a SpacePartitioner. If that partitioner is in sync with the Bounds in the Metadata and with the PartitionerIndex, we assume it is still valid; otherwise we assume it has degraded to a hash partitioner and a shuffle must be performed. A usage sketch follows this member list.

  2. val bounds: Bounds[K]
  3. def containsKey(key: Any): Boolean
  4. def getPartition(key: Any): Int
    Definition Classes
    SpacePartitioner → Partitioner
  5. def hasSameIndex(other: SpacePartitioner[K]): Boolean

    Is another space partitioner compatible with this one, in the sense that it maps keys to indices in the same way?

  6. implicit val index: PartitionerIndex[K]
  7. def numPartitions: Int
    Definition Classes
    SpacePartitioner → Partitioner
  8. def regionIndex(region: BigInt): Option[Int]

    Maps a given spatial region index to its offset in the regions array (i.e. the partition id), if that region is covered by this partitioner. A sketch of this mapping follows the member list.

  9. val regions: Array[BigInt]
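
As referenced in the description of apply above, here is a usage sketch. It assumes a tile layer whose Metadata is a TileLayerMetadata[SpatialKey], which exposes Bounds[SpatialKey] and so satisfies the GetComponent requirement; the function name is purely illustrative:

    import org.apache.spark.rdd.RDD
    import geotrellis.raster.Tile
    import geotrellis.spark._
    import geotrellis.spark.partition._

    // Re-keys an existing tile layer onto a SpacePartitioner built from explicit bounds.
    def repartitionBySpace(
      layer: RDD[(SpatialKey, Tile)] with Metadata[TileLayerMetadata[SpatialKey]],
      bounds: KeyBounds[SpatialKey]
    ): RDD[(SpatialKey, Tile)] with Metadata[Bounds[SpatialKey]] = {
      val part = SpacePartitioner[SpatialKey](bounds)
      // If `layer` already carries a SpacePartitioner consistent with its bounds
      // and index, no shuffle is performed; otherwise the data is shuffled into
      // the regions of `part`.
      part(layer)
    }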
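
The interplay of getPartition, regionIndex and regions can be sketched as follows, continuing the illustrative SpatialKey example. The call index.toIndex is assumed here to be the key-to-index mapping of the implicit PartitionerIndex:

    val part = SpacePartitioner[SpatialKey](KeyBounds(SpatialKey(0, 0), SpatialKey(15, 15)))
    val key  = SpatialKey(3, 7)

    part.containsKey(key)  // true: the key's region is covered by the bounds

    // getPartition maps a key to its space-filling-curve region and then to that
    // region's offset in `regions`, which is the partition id.
    val region: BigInt      = part.index.toIndex(key)
    val offset: Option[Int] = part.regionIndex(region)  // Some(partition id) when covered
    part.getPartition(key)                              // the same partition id as an Int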