Package geotrellis.spark.io.accumulo

package accumulo

Linear Supertypes: AnyRef, Any

Type Members

  1. class AccumuloAttributeStore extends DiscreteLayerAttributeStore
  2. class AccumuloCollectionLayerReader extends CollectionLayerReader[LayerId]
  3. trait AccumuloEncoder[T] extends AnyRef
  4. trait AccumuloInstance extends Serializable
  5. class AccumuloLayerCopier extends LayerCopier[LayerId]
  6. class AccumuloLayerDeleter extends LazyLogging with LayerDeleter[LayerId]
  7. case class AccumuloLayerHeader(keyClass: String, valueClass: String, tileTable: String) extends LayerHeader with Product with Serializable
  8. class AccumuloLayerManager extends LayerManager[LayerId]
  9. class AccumuloLayerReader extends FilteringLayerReader[LayerId]
  10. class AccumuloLayerReindexer extends LayerReindexer[LayerId]
  11. class AccumuloLayerUpdater extends LayerUpdater[LayerId] with LazyLogging
  12. class AccumuloLayerWriter extends LayerWriter[LayerId]
  13. class AccumuloValueReader extends ValueReader[LayerId]
  14. sealed trait AccumuloWriteStrategy extends AnyRef
  15. case class BaseAccumuloInstance(instanceName: String, zookeeper: String, user: String, tokenBytes: (String, Array[Byte])) extends AccumuloInstance with Product with Serializable
  16. case class HdfsWriteStrategy(ingestPath: Path) extends AccumuloWriteStrategy with Product with Serializable

    This strategy performs an Accumulo bulk ingest. Bulk ingest requires that sorted records be written to the filesystem, preferably HDFS, before Accumulo is able to ingest them. After the ingest is finished, the nodes will likely go through a period of high load as they perform major compactions.

    Note: Relative URLs will cause HDFS to use the fs.defaultFS property in core-site.xml. If that property is not specified, it defaults to the local filesystem ('file:/'), which is undesirable.

    ingestPath

    Path where Spark will write RDD records for ingest
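    A minimal sketch of constructing this strategy, following the note above about absolute URIs. The namenode host, port, and staging path are illustrative placeholders, not values from this documentation:

    ```scala
    import org.apache.hadoop.fs.Path
    import geotrellis.spark.io.accumulo.HdfsWriteStrategy

    // Use an absolute HDFS URI so fs.defaultFS in core-site.xml is never
    // consulted; a relative path could silently fall back to 'file:/'.
    val strategy = HdfsWriteStrategy(
      new Path("hdfs://namenode:8020/tmp/accumulo-ingest")
    )
    ```

    The strategy is typically supplied to an AccumuloLayerWriter at construction time; expect a burst of major compactions on the tablet servers once the bulk import completes.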

  17. case class SocketWriteStrategy(config: BatchWriterConfig = ..., threads: Int = AccumuloWriteStrategy.threads) extends AccumuloWriteStrategy with Product with Serializable

    This strategy will create one BatchWriter per partition and attempt to stream the records to the target tablets. In order to gain some parallelism, it will create a number of splits in the target table equal to the number of tablet servers in the cluster. It is suitable for smaller ingests, or where HdfsWriteStrategy is otherwise not possible.

    This strategy will not create splits before starting to write. If you wish to do that, use AccumuloUtils.getSplits first.

    There is a problem in Accumulo 1.6 (fixed in 1.7) where split creation does not wait for the resulting empty tablets to distribute through the cluster before returning. This creates a warm-up period during which pressure from the ingest writers on a node will delay tablet re-balancing.

    Ingest speed can be improved by setting tserver.wal.sync.method=hflush in the accumulo shell. Note: this introduces a higher chance of data loss due to sudden node failure.

    BatchWriter is notified of tablet migrations and will follow them around the cluster.

    config

    Configuration for the BatchWriters
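    A sketch of tuning the per-partition BatchWriters via Accumulo's standard BatchWriterConfig. The specific memory, latency, and thread values are illustrative assumptions, not recommendations from this documentation:

    ```scala
    import java.util.concurrent.TimeUnit
    import org.apache.accumulo.core.client.BatchWriterConfig
    import geotrellis.spark.io.accumulo.SocketWriteStrategy

    // BatchWriterConfig setters are fluent, returning the config itself.
    val config = new BatchWriterConfig()
      .setMaxMemory(64L * 1024 * 1024)      // buffer up to 64 MB before flushing
      .setMaxLatency(30, TimeUnit.SECONDS)  // flush at least every 30 seconds
      .setMaxWriteThreads(4)                // write threads per BatchWriter

    // threads falls back to AccumuloWriteStrategy.threads when omitted.
    val strategy = SocketWriteStrategy(config = config)
    ```

    Since each partition gets its own BatchWriter, the memory and thread settings apply per partition; multiply by partition count when budgeting executor resources.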

  18. implicit class connectorWriter extends AnyRef
  19. implicit class scannerIterator extends Iterator[(Key, Value)]

Value Members

  1. object AccumuloAttributeStore
  2. object AccumuloCollectionLayerReader
  3. object AccumuloCollectionReader
  4. object AccumuloInstance extends Serializable
  5. object AccumuloKeyEncoder
  6. object AccumuloLayerCopier
  7. object AccumuloLayerDeleter
  8. object AccumuloLayerHeader extends Serializable
  9. object AccumuloLayerMover
  10. object AccumuloLayerReader
  11. object AccumuloLayerReindexer
  12. object AccumuloLayerUpdater
  13. object AccumuloLayerWriter
  14. object AccumuloRDDReader
  15. object AccumuloRDDWriter
  16. object AccumuloUtils
  17. object AccumuloValueReader
  18. object AccumuloWriteStrategy
  19. object HdfsWriteStrategy extends Serializable
  20. def columnFamily(id: LayerId): String
  21. implicit def stringToText(s: String): Text
