net.sansa_stack.inference.spark.data.loader

object RDFGraphLoader

An object that provides methods to load an RDF graph from disk.
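A minimal usage sketch, assuming a local Spark setup; the application name, master URL, and input path are illustrative and not taken from this page:

```scala
import org.apache.spark.sql.SparkSession
import net.sansa_stack.inference.spark.data.loader.RDFGraphLoader

// Illustrative local session; any configured SparkSession works.
val session = SparkSession.builder()
  .appName("RDFGraphLoader-example") // hypothetical name
  .master("local[*]")
  .getOrCreate()

// Load an RDF graph from a (hypothetical) file on disk;
// minPartitions defaults to 2 for this overload.
val graph = RDFGraphLoader.loadFromDisk(session, "/data/triples.nt")
```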

Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
     Definition Classes: AnyRef → Any
  2. final def ##(): Int
     Definition Classes: AnyRef → Any
  3. final def ==(arg0: Any): Boolean
     Definition Classes: AnyRef → Any
  4. final def asInstanceOf[T0]: T0
     Definition Classes: Any
  5. def clone(): AnyRef
     Attributes: protected[java.lang]
     Definition Classes: AnyRef
     Annotations: @throws( ... )
  6. final def eq(arg0: AnyRef): Boolean
     Definition Classes: AnyRef
  7. def equals(arg0: Any): Boolean
     Definition Classes: AnyRef → Any
  8. def finalize(): Unit
     Attributes: protected[java.lang]
     Definition Classes: AnyRef
     Annotations: @throws( classOf[java.lang.Throwable] )
  9. final def getClass(): Class[_]
     Definition Classes: AnyRef → Any
  10. def hashCode(): Int
      Definition Classes: AnyRef → Any
  11. final def isInstanceOf[T0]: Boolean
      Definition Classes: Any
  12. def loadFromDisk(session: SparkSession, path: URI, minPartitions: Int): RDFGraph

      Load an RDF graph from a single file or directory.

      session
        the Spark session
      path
        the path to the file or directory
      minPartitions
        min number of partitions for Hadoop RDDs (SparkContext.defaultMinPartitions)
      returns
        an RDF graph

  13. def loadFromDisk(session: SparkSession, paths: Seq[URI], minPartitions: Int): RDFGraph

      Load an RDF graph from multiple files or directories.

      session
        the Spark session
      paths
        the paths to the files or directories
      minPartitions
        min number of partitions for Hadoop RDDs (SparkContext.defaultMinPartitions)
      returns
        an RDF graph

  14. def loadFromDisk(session: SparkSession, path: String, minPartitions: Int = 2): RDFGraph

      Load an RDF graph from a file or directory. The path can also contain multiple paths and even wildcards, e.g. "/my/dir1,/my/paths/part-00[0-5]*,/another/dir,/a/specific/file".

      session
        the Spark session
      path
        the absolute path of the file or directory
      minPartitions
        min number of partitions for Hadoop RDDs (SparkContext.defaultMinPartitions)
      returns
        an RDF graph

  15. def loadFromDiskAsDataFrame(session: SparkSession, path: String, minPartitions: Int = 4, sqlSchema: SQLSchema = SQLSchemaDefault): RDFGraphDataFrame

      Load an RDF graph from a file or directory with a Spark DataFrame as underlying data structure. The path can also contain multiple paths and even wildcards, e.g. "/my/dir1,/my/paths/part-00[0-5]*,/another/dir,/a/specific/file".

      session
        the Spark session
      path
        the absolute path of the file or directory
      minPartitions
        min number of partitions for Hadoop RDDs (SparkContext.defaultMinPartitions)
      sqlSchema
        the SQL schema for the triples table (defaults to SQLSchemaDefault)
      returns
        an RDF graph based on a org.apache.spark.sql.DataFrame

  16. def loadFromDiskAsDataset(session: SparkSession, paths: Seq[URI]): RDFGraphDataset

      Load an RDF graph from multiple files or directories with a Spark Dataset as underlying data structure.

      session
        the Spark session
      paths
        the paths to the files or directories
      returns
        an RDF graph based on a Dataset

  17. def loadFromDiskAsDataset(session: SparkSession, path: String): RDFGraphDataset

      Load an RDF graph from a file or directory with a Spark Dataset as underlying data structure. The path can also contain multiple paths and even wildcards, e.g. "/my/dir1,/my/paths/part-00[0-5]*,/another/dir,/a/specific/file".

      session
        the Spark session
      path
        the absolute path of the file or directory
      returns
        an RDF graph based on a Dataset

  18. def loadFromDiskAsRDD(session: SparkSession, path: String, minPartitions: Int): RDFGraphNative

      Load an RDF graph from a file or directory with a Spark RDD as underlying data structure. The path can also contain multiple paths and even wildcards, e.g. "/my/dir1,/my/paths/part-00[0-5]*,/another/dir,/a/specific/file".

      session
        the Spark session
      path
        the absolute path of the file or directory
      minPartitions
        min number of partitions for Hadoop RDDs (SparkContext.defaultMinPartitions)
      returns
        an RDF graph

  19. def main(args: Array[String]): Unit
  20. final def ne(arg0: AnyRef): Boolean
      Definition Classes: AnyRef
  21. final def notify(): Unit
      Definition Classes: AnyRef
  22. final def notifyAll(): Unit
      Definition Classes: AnyRef
  23. final def synchronized[T0](arg0: ⇒ T0): T0
      Definition Classes: AnyRef
  24. def toString(): String
      Definition Classes: AnyRef → Any
  25. final def wait(): Unit
      Definition Classes: AnyRef
      Annotations: @throws( ... )
  26. final def wait(arg0: Long, arg1: Int): Unit
      Definition Classes: AnyRef
      Annotations: @throws( ... )
  27. final def wait(arg0: Long): Unit
      Definition Classes: AnyRef
      Annotations: @throws( ... )
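The RDD-, DataFrame-, and Dataset-backed loaders differ only in the returned graph representation. A sketch under assumed inputs (the session configuration, paths, and partition count are illustrative, not from this page):

```scala
import org.apache.spark.sql.SparkSession
import net.sansa_stack.inference.spark.data.loader.RDFGraphLoader

val session = SparkSession.builder().master("local[*]").getOrCreate()

// Same comma-separated path/wildcard syntax as loadFromDisk; paths are hypothetical.
val path = "/my/dir1,/my/paths/part-00[0-5]*"

val rddGraph = RDFGraphLoader.loadFromDiskAsRDD(session, path, minPartitions = 4) // RDFGraphNative
val dfGraph  = RDFGraphLoader.loadFromDiskAsDataFrame(session, path)              // RDFGraphDataFrame
val dsGraph  = RDFGraphLoader.loadFromDiskAsDataset(session, path)                // RDFGraphDataset
```

Which variant to pick depends on the downstream inference pipeline: the RDD-based graph keeps triples as native Scala objects, while the DataFrame variant exposes them through the SQL schema given by the `sqlSchema` parameter.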
