couple questions


Knapp, Michael
Hi,

My team is starting to do more and more with NiFi, and I had several questions for you.

First, we are thinking of having multiple separate NiFi flows, but we want a single source for data provenance.  In the source code I only see these implementations: PersistentProvenanceRepository, VolatileProvenanceRepository, and MockProvenanceRepository.  I was hoping to find a web service that I could run separately from NiFi and have all my NiFi clusters publish events to.  Is there any public implementation like that?

Also, we are thinking seriously about using repositories that are not backed by the local file system.  I am helping an intern write an implementation of ContentRepository backed by S3, and he has already had some success with this (we started by copying a lot from the VolatileContentRepository).  I’m also interested in implementations backed by Kafka and Pachyderm.  If that works, we will probably also need the other repositories to follow, specifically the FlowFileRepository.  Unfortunately, I cannot find much documentation on how to write these repositories; I have just been figuring things out by reviewing the source code and unit tests, but it is still very confusing to me.  So I was wondering:

1.  Has anybody been working on alternative ContentRepository implementations?  Specifically with S3, Pachyderm, Kafka, or some databases/datastores?

2.  Is there any thorough documentation regarding the contracts that these implementations must adhere to (besides source code and unit tests)?

I’m mainly interested in alternative repositories so I can make NiFi truly fault tolerant (one node dies, and the others immediately take over its work).  It would also greatly simplify a lot of infrastructure/configuration management for us, could help us save some money, and might help us with compliance issues.  On the downside, it might hurt file throughput.
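For context, the repositories in question are pluggable via nifi.properties; a hypothetical S3-backed implementation (class name invented for illustration) would be wired in like this, assuming the class is on NiFi's classpath:

```
# nifi.properties -- repository implementations are pluggable (stock defaults shown)
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
# hypothetical custom implementation, name assumed for illustration only:
# nifi.content.repository.implementation=com.example.nifi.repository.S3ContentRepository
```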

Please let me know,

Michael Knapp

________________________________________________________

The information contained in this e-mail is confidential and/or proprietary to Capital One and/or its affiliates and may only be used solely in performance of work or services for Capital One. The information transmitted herewith is intended only for use by the individual or entity to which it is addressed. If the reader of this message is not the intended recipient, you are hereby notified that any review, retransmission, dissemination, distribution, copying or other use of, or taking of any action in reliance upon this information is strictly prohibited. If you have received this communication in error, please contact the sender and delete the material from your computer.
Re: couple questions

Uwe@Moosheimer.com
Hi Michael,

You can use Apache Atlas as a provenance sink.
There is a bridge for Atlas mentioned on Hortonworks, and an open JIRA task for it (https://issues.apache.org/jira/browse/NIFI-3709).

Best regards
Uwe

Re: couple questions

Andy LoPresto-2
In reply to this post by Knapp, Michael
Michael,

With Apache NiFi 1.0.0+ (current version 1.3.0), a feature called “multi-tenant authorization” is available. Bryan Bende has written a good explanation of the use case here [1]; in summary, very granular resource-based access controls allow a single deployment (standalone or clustered) to be administered in one location while providing side-by-side flows for different teams.
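To make the granularity concrete, access policies are attached to individual resources and actions; the sketch below shows the general shape (the UUID and team labels are made up for illustration):

```
# Illustrative policy granularity (resource -> action); the UUID is hypothetical
/flow                               read    # view the canvas
/process-groups/0a1b2c3d-...        read    # view Team A's process group
/process-groups/0a1b2c3d-...        write   # modify Team A's process group
```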

With 1.2.0, two new provenance repository implementations were made available: WriteAheadProvenanceRepository and EncryptedWriteAheadProvenanceRepository. I have written some notes on the changes these implementations provide [2]. If you are looking for an external service to store provenance event data for long-term archival, I would look at Apache Atlas [3][4], or simply use the SiteToSiteProvenanceReportingTask to have NiFi export the captured provenance data to itself via an input port, then use a dedicated flow to transform it and route it as you would any other arbitrary data.
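If it helps, the active provenance implementation is selected in nifi.properties; a sketch (verify the class names against your NiFi version's documentation):

```
# nifi.properties -- choose the provenance repository implementation (NiFi 1.2.0+)
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
# or, for encryption at rest:
# nifi.provenance.repository.implementation=org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepository
```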

As for the alternate repository implementations, I am working on an encrypted content [5] and flowfile repository implementation [6], but those would still leverage the storage capabilities of the existing implementations rather than provide a different backing store. Mark Payne would be able to answer in more detail on the various repository architectures and whether there are expectations for contract compliance, but I do know the Apache NiFi In Depth document [7] has some excellent reference literature on the original use cases, why the existing implementations were written as they are, and how they perform. In addition, there are wiki articles on the Write Ahead Log Implementation (WALI) [8] and the Persistent Provenance Repository design [9] (still the default, but now superseded by the Write Ahead Provenance Repository).
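One contract that trips people up when writing a custom content repository is content-claim reference counting: multiple FlowFiles may share one claim, and the backing bytes (a file, or an S3 object in your case) may only be reclaimed once the claimant count reaches zero. The sketch below is NOT the real org.apache.nifi.controller.repository.ContentRepository interface, just a minimal self-contained in-memory model of that semantic; all names here are invented for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the claimant-count contract a content repository must honor.
public class InMemoryContentRepo {

    /** A claim identifies a piece of content plus a reference (claimant) count. */
    public static final class Claim {
        final long id;
        int claimants = 1;            // the creator holds the first claim
        byte[] content = new byte[0];
        Claim(long id) { this.id = id; }
    }

    private final Map<Long, Claim> claims = new HashMap<>();
    private long nextId = 0;

    public Claim create(byte[] data) {
        Claim c = new Claim(nextId++);
        c.content = data.clone();
        claims.put(c.id, c);
        return c;
    }

    public byte[] read(Claim c) {
        if (!claims.containsKey(c.id)) throw new IllegalStateException("claim already destroyed");
        return c.content.clone();
    }

    public int incrementClaimants(Claim c) { return ++c.claimants; }

    /** Decrement the claimant count; reclaim the content only when it reaches zero. */
    public int decrementClaimants(Claim c) {
        int remaining = --c.claimants;
        if (remaining <= 0) claims.remove(c.id);  // now safe to delete the backing S3 object
        return remaining;
    }

    public boolean exists(Claim c) { return claims.containsKey(c.id); }

    public static void main(String[] args) {
        InMemoryContentRepo repo = new InMemoryContentRepo();
        Claim c = repo.create("hello".getBytes(StandardCharsets.UTF_8));
        repo.incrementClaimants(c);   // a second FlowFile now references the same content
        repo.decrementClaimants(c);   // first reference released: content must survive
        System.out.println("still readable: " + new String(repo.read(c), StandardCharsets.UTF_8));
        repo.decrementClaimants(c);   // last reference released: content reclaimed
        System.out.println("exists after last release: " + repo.exists(c));
    }
}
```

The point of the sketch is the ordering guarantee: deleting the S3 object on the first release would corrupt the second FlowFile, which is the kind of invariant the existing unit tests exercise.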

Hopefully some of these resources are helpful for you. If you have further specific questions, I encourage you to ask here; other users may have had similar experiences and can provide the benefit of their investigation as well.





Andy LoPresto
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69


