geotrellis.spark.io.accumulo

HdfsWriteStrategy

Related Docs: object HdfsWriteStrategy | package accumulo

case class HdfsWriteStrategy(ingestPath: Path) extends AccumuloWriteStrategy with Product with Serializable

This strategy performs an Accumulo bulk ingest. Bulk ingest requires that sorted records be written to the filesystem, preferably HDFS, before Accumulo is able to ingest them. After the ingest is finished the tablet servers will likely go through a period of high load as they perform major compactions.

Note: Giving a relative URL will cause HDFS to resolve it against the fs.defaultFS property in core-site.xml. If that property is not specified it defaults to the local filesystem ('file:/'), which is undesirable.

ingestPath

Path where spark will write RDD records for ingest
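Because of the fs.defaultFS caveat above, the safest construction uses an absolute HDFS URI. A minimal sketch (the namenode host, port, and staging directory are placeholders, not values from this documentation):

```scala
import org.apache.hadoop.fs.Path
import geotrellis.spark.io.accumulo.HdfsWriteStrategy

// An absolute URI pins the path to HDFS, so it cannot silently fall back
// to the local filesystem via fs.defaultFS.
val strategy = HdfsWriteStrategy(new Path("hdfs://namenode:8020/tmp/accumulo-ingest"))
```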

Linear Supertypes
Serializable, Serializable, Product, Equals, AccumuloWriteStrategy, AnyRef, Any

Instance Constructors

  1. new HdfsWriteStrategy(ingestPath: Path)

    ingestPath

    Path where spark will write RDD records for ingest

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  7. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  8. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  9. val ingestPath: Path


    Path where spark will write RDD records for ingest

  10. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  11. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  12. final def notify(): Unit

    Definition Classes
    AnyRef
  13. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  14. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  15. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  16. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  17. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  18. def write(kvPairs: RDD[(Key, Value)], instance: AccumuloInstance, table: String): Unit

    Requires that the RDD be pre-sorted

    Definition Classes
    HdfsWriteStrategy → AccumuloWriteStrategy
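Since write requires pre-sorted records, a caller would typically sort the pairs by Key first. A minimal sketch, assuming kvPairs and instance are already in scope and that an Ordering on the Accumulo Key (which is Comparable) is available; the table name is a placeholder:

```scala
import org.apache.accumulo.core.data.{Key, Value}
import org.apache.spark.rdd.RDD

// Accumulo bulk-import files must contain records in Key order,
// so sort the RDD before handing it to the strategy.
val sorted: RDD[(Key, Value)] = kvPairs.sortBy { case (key, _) => key }
strategy.write(sorted, instance, "tiles")
```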
