The easiest way is to use the RDFGraphMaterializer. On Apache Spark, the whole pipeline boils down to four steps: load the triples, create a reasoner for the chosen profile, compute the inferred graph, and write the result back to disk:
```scala
// load triples from disk
val graph = RDFGraphLoader.loadFromDisk(input, spark, parallelism)

// create reasoner
val reasoner = profile match {
  case TRANSITIVE => new TransitiveReasoner(spark.sparkContext, parallelism)
  case RDFS => new ForwardRuleReasonerRDFS(spark.sparkContext, parallelism)
  case RDFS_SIMPLE =>
    val r = new ForwardRuleReasonerRDFS(spark.sparkContext, parallelism)
    r.level = RDFSLevel.SIMPLE
    r
  case OWL_HORST => new ForwardRuleReasonerOWLHorst(spark.sparkContext)
}

// compute inferred graph
val inferredGraph = reasoner.apply(graph)

// write triples to disk
RDFGraphWriter.writeGraphToFile(inferredGraph, output.getAbsolutePath, writeToSingleFile, sortedOutput)
```
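The snippet assumes that a SparkSession (`spark`) and the input parameters (`input`, `output`, `profile`, `parallelism`, `writeToSingleFile`, `sortedOutput`) are already in scope; in the full example below they come from command-line parsing. A minimal setup sketch using only the standard Spark API (the variable names and values are illustrative):

```scala
import org.apache.spark.sql.SparkSession

// minimal sketch: a SparkSession plus the parameters the
// inference snippet above expects to find in scope
val spark = SparkSession.builder()
  .appName("SANSA RDF Graph Inference")
  .master("local[*]") // assumption: local mode for testing; drop on a cluster
  .getOrCreate()

val parallelism = 4          // degree of parallelism passed to loader and reasoner
val writeToSingleFile = true // merge the output partitions into a single file
val sortedOutput = false    // keep the natural partition order
```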
Full example code: https://github.com/SANSA-Stack/SANSA-Examples/blob/master/sansa-examples-spark/src/main/scala/net/sansa_stack/examples/spark/inference/RDFGraphInference.scala
The same pipeline on Apache Flink differs only in that the loader and reasoners take the Flink ExecutionEnvironment instead of a SparkContext, the TRANSITIVE profile is not available, and the writer method is named differently:

```scala
// load triples from disk
val graph = RDFGraphLoader.loadFromDisk(input, env)

// create reasoner
val reasoner = profile match {
  case RDFS => new ForwardRuleReasonerRDFS(env)
  case RDFS_SIMPLE =>
    val r = new ForwardRuleReasonerRDFS(env)
    r.level = RDFSLevel.SIMPLE
    r
  case OWL_HORST => new ForwardRuleReasonerOWLHorst(env)
}

// compute inferred graph
val inferredGraph = reasoner.apply(graph)

// write triples to disk
RDFGraphWriter.writeToDisk(inferredGraph, output, writeToSingleFile, sortedOutput)
```
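Here, too, an execution environment (`env`) is assumed to be in scope; a minimal sketch using the plain Flink batch API:

```scala
import org.apache.flink.api.scala.ExecutionEnvironment

// minimal sketch: the Flink batch execution environment the snippet
// above expects as `env`; input/output paths and the profile come from
// the surrounding application (command-line parsing in the full example)
val env = ExecutionEnvironment.getExecutionEnvironment
```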
Full example code: https://github.com/SANSA-Stack/SANSA-Examples/blob/master/sansa-examples-flink/src/main/scala/net/sansa_stack/examples/flink/inference/RDFGraphInference.scala