I am coming across an issue where my unicode characters are being converted
to "\u0432\u0430\u0436\u043d\u0435\u0435". This is happening with Twitter data
that is collected using the Twitter processor. How can I debug my workflow
to figure out where the characters are being converted?
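One quick sanity check first (a Python sketch, not NiFi-specific): those \uXXXX sequences are standard JSON escapes, and they decode losslessly back to the original Cyrillic text. So the characters may not actually be lost — something in the flow may just be writing them JSON-escaped.

```python
import json

# The "converted" text from the Twitter data, exactly as reported:
escaped = '"\\u0432\\u0430\\u0436\\u043d\\u0435\\u0435"'

# JSON \uXXXX escapes decode losslessly back to the original characters.
decoded = json.loads(escaped)
print(decoded)  # -> важнее (the original Cyrillic word)
```

If the decoded text comes back intact like this, the fix is about where the escaping happens in the flow, not about recovering corrupted data.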
If you believe that a process in the flow is manipulating the
characters, you can use the built-in provenance, archive, and data
viewer functions. We need to document how to set this up, but
for now, if you configure nifi.properties as follows and restart,
you'll have everything you need. This all assumes you're on the latest
develop branch codebase:
Set the following properties to the following values (these are just examples):
# Comma-separated list of fields. Fields that are not indexed will not be searchable.
# Valid fields are: EventType, FlowFileUUID, Filename, TransitURI, ProcessorID,
#   AlternateIdentifierURI, ContentType, Relationship, Details
# FlowFile Attributes that should be indexed and made searchable
# Large values for the shard size will result in more Java heap usage when
# searching the Provenance Repository, but should provide better performance
Basically, the things different from the defaults here are explained below.
Anyway, what this does is tell NiFi to hang onto the content until it
actually has to delete it from disk. It then allows you to look at
the provenance trail of any object, and from there you can 'view content' in
our built-in content viewer. You can use that to review the content
step by step as it goes through the flow.
We should make a nice blog out of this with screenshots. It is a really
powerful capability.
If that doesn't get you the info you need, please let us know.
The first property says to index all provenance events after 30 seconds
instead of waiting 5 minutes (the default).
The second property says to index those specific fields for all provenance
events.
The third property enables the Provenance Data content viewer.
The other 2 properties indicate that the content should be kept on the
box for up to 24 hours, but to delete content if the disk is 80% full.
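Putting that together, the relevant nifi.properties entries might look something like this — the property names come from nifi.properties, but the values here are just examples matching the description above, so double-check them against your own copy:

```properties
# Index provenance events every 30 seconds instead of the 5-minute default
nifi.provenance.repository.rollover.time=30 secs

# Fields to index and make searchable
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, ContentType, Relationship, Details

# Enable the built-in content viewer
nifi.content.viewer.url=/nifi-content-viewer

# Keep archived content for up to 24 hours, unless the disk passes 80% full
nifi.content.repository.archive.enabled=true
nifi.content.repository.archive.max.retention.period=24 hours
nifi.content.repository.archive.max.usage.percentage=80%
```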
After changing those, you'd need to restart your system.
So I'm suggesting that you do that so that you can make use of NiFi's
data provenance to debug workflows. It's a super powerful feature.
Then, you can click on the Data Provenance icon in the UI (4th icon in
the toolbar, on the far right-hand side). Then click "Search". You can
search by filename or whatever. If you just want to find data coming
from the Twitter processor, you can enter that for the "Component ID"
(to get the ID of that processor, right-click on it and choose
Configure; the ID is shown in the Settings tab).
Then when you search you can see up to 1000 results. Click the little
icon on the right-hand side that looks a bit like a propeller (it's
actually intended to show a graph/tree). From there you can see what
happened to the data as it went through your flow. For any of those
events, you can right-click and "View Details". This will show you all
sorts of info about the event. In the Content tab, you can click "View"
to see what the content looked like at that point in time. You can then
go back to the lineage view and look at the next or previous event and
do the same thing until you know exactly where it changed.
Hope this helps!
Let us know if you have any further questions.
------ Original Message ------
From: "Adam Estrada" <[hidden email]>
To: [hidden email]
Sent: 4/30/2015 2:20:56 PM
Subject: Maintain character encoding in workflow