Setting basic NiFi network problem


Xphos
Okay, I know this is almost certainly a duplicate post, but I really need help setting this up. I have installed and compiled NiFi, but I am getting stuck trying to set up a basic cluster.

I suspect I am misconfiguring the ports. I am trying to build a 3-node cluster using just localhost. Every time I boot, the nodes all come up and then disconnect one by one until the cluster falls into a perpetual election cycle (all my flows are blank).

I am including the nifi.properties configuration for the 3 nodes at the bottom. I am only including the relevant sections, because I haven't touched anything else except setting the embedded ZooKeeper property to true on all nodes.

Areas of concern:
1. The documentation for the ZooKeeper conf file shows server.1=hostname:2888:3888, so I followed that example for every server. I didn't really want to do that, because I feel I should use different ports since everything runs on localhost, but I don't know where those ports get decided, so I didn't think I could change them.

2. nifi.web.http.port is set to 8080 on every node, but I may be misinterpreting the docs here.



Node 1:
# Site to Site properties
nifi.remote.input.host=localhost
nifi.remote.input.secure=false
nifi.remote.input.socket.port=8130
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec

# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=
nifi.web.http.port=8080
nifi.web.http.network.interface.default=
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200

# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=false

# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=
nifi.cluster.node.protocol.port=8100
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=1 mins
nifi.cluster.flow.election.max.candidates=3

# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=localhost:2180,localhost:2190,localhost:2200
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi




Node 2:
# Site to Site properties
nifi.remote.input.host=localhost
nifi.remote.input.secure=false
nifi.remote.input.socket.port=8120
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec

# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=
nifi.web.http.port=8080
nifi.web.http.network.interface.default=
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200

# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=false

# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=
nifi.cluster.node.protocol.port=8090
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=1 mins
nifi.cluster.flow.election.max.candidates= 3

# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=localhost:2180,localhost:2190,localhost:2200
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi



Node 3:
# Site to Site properties
nifi.remote.input.host=localhost
nifi.remote.input.secure=false
nifi.remote.input.socket.port=8140
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec

# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=
nifi.web.http.port=8080
nifi.web.http.network.interface.default=
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200

# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=false

# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=
nifi.cluster.node.protocol.port=8110
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=1 mins
nifi.cluster.flow.election.max.candidates=3

# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=localhost:2180,localhost:2190,localhost:2200
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi



Zookeeper conf files
clientPort=2200  # for each node I just increase this port by 10
initLimit=10
autopurge.purgeInterval=24
syncLimit=5
tickTime=2000
dataDir=./state/zookeeper
autopurge.snapRetainCount=30


server.1=localhost:2888:3888
server.2=localhost:2888:3888
server.3=localhost:2888:3888

Re: Setting basic NiFi network problem

Andy LoPresto-2
Yes, you need to ensure that each node in the cluster (each instance of the application) is running its web server on a unique port if they are all running on the same machine. You’ll also need to pay attention to the S2S and cluster protocol ports. 

Pierre Villard has written an excellent article with prescriptive steps for setting up a 3 node cluster (unsecured [1] and secured [2]) and I encourage you to follow those steps. 

[2] https://pierrevillard.com/2016/11/29/apache-nifi-1-1-0-secured-cluster-setup/
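One likely culprit in the configs above: all three zoo.cfg server entries reuse 2888:3888. When every embedded ZooKeeper server runs on the same machine, each server line needs its own quorum/election port pair, and each node its own clientPort. A sketch of what that could look like (port numbers are illustrative, not prescriptive):

```properties
# zookeeper.properties — 3 embedded ZooKeeper servers on one host
# server.N=host:quorumPort:electionPort — each port pair must be unique
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890

# Each node's own conf additionally sets a distinct clientPort
# (e.g. 2180, 2190, 2200), matching nifi.zookeeper.connect.string.
```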


Andy LoPresto
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

On May 25, 2017, at 1:54 PM, Xphos <[hidden email]> wrote:

> [original message quoted in full above]

Re: Setting basic NiFi network problem

Xphos
Thanks for the help so far, but now I am stuck on an error from the log file and I am not sure how to fix it. I have been trying to look it up online, but there is no solution in sight; maybe you've seen it before.

The errors:
2017-05-26 14:54:09,495 ERROR [Leader Election Notification Thread-1] o.a.c.f.imps.CuratorFrameworkImpl Background exception was not retry-able or retry gave up
2017-05-26 14:54:09,672 ERROR [/127.0.0.1:3888] o.a.z.server.quorum.QuorumCnxManager Exception while listening



The relevant portion of the log file:
2017-05-26 14:54:00,026 INFO [main] o.a.n.c.r.WriteAheadFlowFileRepository Initialized FlowFile Repository using 256 partitions
2017-05-26 14:54:00,207 INFO [main] o.a.n.p.lucene.SimpleIndexManager Index Writer for ./provenance_repository/index-1495817893000 has been returned to Index Manager and is no longer in use. Closing Index Writer
2017-05-26 14:54:00,210 INFO [main] o.a.n.p.PersistentProvenanceRepository Recovered 0 records
2017-05-26 14:54:00,219 INFO [main] o.a.n.p.PersistentProvenanceRepository Created new Provenance Event Writers for events starting with ID 0
2017-05-26 14:54:00,223 INFO [main] o.a.n.c.repository.FileSystemRepository Maximum Threshold for Container default set to 22946397061 bytes; if volume exceeds this size, archived data will be deleted until it no longer exceeds this size
2017-05-26 14:54:00,224 INFO [main] o.a.n.c.repository.FileSystemRepository Initializing FileSystemRepository with 'Always Sync' set to false
2017-05-26 14:54:00,333 INFO [main] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@377cbdae finished recovering records. Performing Checkpoint to ensure proper state of Partitions before updates
2017-05-26 14:54:00,334 INFO [main] org.wali.MinimalLockingWriteAheadLog Successfully recovered 0 records in 5 milliseconds
2017-05-26 14:54:00,340 INFO [main] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@377cbdae checkpointed with 0 Records and 0 Swap Files in 6 milliseconds (Stop-the-world time = 2 milliseconds, Clear Edit Logs time = 2 millis), max Transaction ID -1
2017-05-26 14:54:00,385 INFO [main] o.a.n.c.s.server.ZooKeeperStateServer Starting Embedded ZooKeeper Peer
2017-05-26 14:54:00,423 INFO [main] o.a.z.server.persistence.FileSnap Reading snapshot ./state/zookeeper/version-2/snapshot.0
2017-05-26 14:54:00,447 INFO [main] o.apache.nifi.controller.FlowController Checking if there is already a Cluster Coordinator Elected...
2017-05-26 14:54:00,531 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2017-05-26 14:54:07,447 WARN [main] o.a.n.c.l.e.CuratorLeaderElectionManager Unable to determine the Elected Leader for role 'Cluster Coordinator' due to org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /nifi/leaders/Cluster Coordinator; assuming no leader has been elected
2017-05-26 14:54:07,448 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2017-05-26 14:54:07,550 INFO [main] o.apache.nifi.controller.FlowController It appears that no Cluster Coordinator has been Elected yet. Registering for Cluster Coordinator Role.
2017-05-26 14:54:07,551 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=true] Registered new Leader Selector for role Cluster Coordinator; this node is an active participant in the election.
2017-05-26 14:54:07,551 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2017-05-26 14:54:07,555 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Cluster Coordinator; this node is an active participant in the election.
2017-05-26 14:54:07,555 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] started
2017-05-26 14:54:07,555 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor Heartbeat Monitor started
2017-05-26 14:54:08,198 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@3f67f5ff{/nifi-api,file:///home/quinnl/Desktop/node1/work/jetty/nifi-web-api-1.2.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.2.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-api-1.2.0.war}
2017-05-26 14:54:08,753 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=407ms
2017-05-26 14:54:08,755 INFO [main] o.e.j.C./nifi-content-viewer No Spring WebApplicationInitializer types detected on classpath
2017-05-26 14:54:08,775 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@4c57ca10{/nifi-content-viewer,file:///home/quinnl/Desktop/node1/work/jetty/nifi-web-content-viewer-1.2.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.2.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-content-viewer-1.2.0.war}
2017-05-26 14:54:08,785 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.s.h.ContextHandler@787b7796{/nifi-docs,null,AVAILABLE}
2017-05-26 14:54:08,821 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=23ms
2017-05-26 14:54:08,823 INFO [main] o.e.jetty.ContextHandler./nifi-docs No Spring WebApplicationInitializer types detected on classpath
2017-05-26 14:54:08,850 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@a0e35c3{/nifi-docs,file:///home/quinnl/Desktop/node1/work/jetty/nifi-web-docs-1.2.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.2.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-docs-1.2.0.war}
2017-05-26 14:54:08,895 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=25ms
2017-05-26 14:54:08,896 INFO [main] org.eclipse.jetty.ContextHandler./ No Spring WebApplicationInitializer types detected on classpath
2017-05-26 14:54:08,922 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@255b84a9{/,file:///home/quinnl/Desktop/node1/work/jetty/nifi-web-error-1.2.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.2.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-error-1.2.0.war}
2017-05-26 14:54:08,932 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector@e280006{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
2017-05-26 14:54:08,933 INFO [main] org.eclipse.jetty.server.Server Started @25845ms
2017-05-26 14:54:09,480 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
2017-05-26 14:54:09,492 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2017-05-26 14:54:09,495 ERROR [Leader Election Notification Thread-1] o.a.c.f.imps.CuratorFrameworkImpl Background exception was not retry-able or retry gave up
java.lang.IllegalStateException: Client is not started
        at com.google.common.base.Preconditions.checkState(Preconditions.java:173)
        at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:114)
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:835)
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:507)
        at org.apache.curator.framework.imps.FindAndDeleteProtectedNodeInBackground.execute(FindAndDeleteProtectedNodeInBackground.java:60)
        at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:496)
        at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:474)
        at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:44)
        at org.apache.curator.framework.recipes.locks.StandardLockInternalsDriver.createsTheLock(StandardLockInternalsDriver.java:50)
        at org.apache.curator.framework.recipes.locks.LockInternals.attemptLock(LockInternals.java:217)
        at org.apache.curator.framework.recipes.locks.InterProcessMutex.internalLock(InterProcessMutex.java:232)
        at org.apache.curator.framework.recipes.locks.InterProcessMutex.acquire(InterProcessMutex.java:89)
        at org.apache.curator.framework.recipes.leader.LeaderSelector.doWork(LeaderSelector.java:386)
        at org.apache.curator.framework.recipes.leader.LeaderSelector.doWorkLoop(LeaderSelector.java:441)
        at org.apache.curator.framework.recipes.leader.LeaderSelector.access$100(LeaderSelector.java:64)
        at org.apache.curator.framework.recipes.leader.LeaderSelector$2.call(LeaderSelector.java:245)
        at org.apache.curator.framework.recipes.leader.LeaderSelector$2.call(LeaderSelector.java:239)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
2017-05-26 14:54:09,669 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=true] stopped and closed
2017-05-26 14:54:09,669 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor Heartbeat Monitor stopped
2017-05-26 14:54:09,670 INFO [main] o.apache.nifi.controller.FlowController Initiated immediate shutdown of flow controller...
2017-05-26 14:54:09,672 ERROR [/127.0.0.1:3888] o.a.z.server.quorum.QuorumCnxManager Exception while listening
java.net.SocketException: Socket closed
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
        at java.net.ServerSocket.implAccept(ServerSocket.java:545)
        at java.net.ServerSocket.accept(ServerSocket.java:513)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:509)
2017-05-26 14:54:10,403 INFO [main] o.apache.nifi.controller.FlowController Controller has been terminated successfully.
2017-05-26 14:54:10,405 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.lang.Exception: Unable to load flow due to: org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.BindException: Address already in use (Bind failed)
        at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:799)
        at org.apache.nifi.NiFi.<init>(NiFi.java:160)
        at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.BindException: Address already in use (Bind failed)
        at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:312)
        at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:789)
        ... 2 common frames omitted
Caused by: java.net.BindException: Address already in use (Bind failed)
        at java.net.PlainSocketImpl.socketBind(Native Method)
        at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
        at java.net.ServerSocket.bind(ServerSocket.java:375)
        at java.net.ServerSocket.<init>(ServerSocket.java:237)
        at java.net.ServerSocket.<init>(ServerSocket.java:128)
        at org.apache.nifi.io.socket.SocketUtils.createServerSocket(SocketUtils.java:108)
        at org.apache.nifi.io.socket.SocketListener.start(SocketListener.java:85)
        at org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.start(SocketProtocolListener.java:90)
        at org.apache.nifi.cluster.protocol.impl.NodeProtocolSenderListener.start(NodeProtocolSenderListener.java:64)
        at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:303)
        ... 3 common frames omitted
2017-05-26 14:54:10,407 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2017-05-26 14:54:10,410 INFO [Thread-1] o.eclipse.jetty.server.AbstractConnector Stopped ServerConnector@e280006{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
2017-05-26 14:54:10,410 INFO [Thread-1] org.eclipse.jetty.server.session Stopped scavenging

Re: Setting basic NiFi network problem

Bryan Rosander-3
Hi Xphos,

You need to ensure that all of the ports are different if you're running the instances on a single machine. That includes the http(s) port.

Your operating system will not let multiple processes bind to the same IP address and port.

Relevant portion of the exception:
java.net.BindException: Address already in use (Bind failed)
        at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:799)
        at org.apache.nifi.NiFi.<init>(NiFi.java:160)
        at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.BindException: Address already in use (Bind failed)
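The OS-level behavior is easy to demonstrate; a minimal stdlib-only Python sketch showing that a second bind to an already-bound address and port is refused, which is exactly what NiFi's cluster protocol listener runs into here:

```python
import socket

# Bind a listener on an arbitrary free port, then try to bind a
# second socket to the same address and port: the OS refuses.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
port = a.getsockname()[1]
a.listen(1)

b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))   # same port -> "Address already in use"
    raised = False
except OSError:
    raised = True
finally:
    b.close()
    a.close()

print(raised)
```

That is why every nifi.web.http.port, nifi.remote.input.socket.port, and nifi.cluster.node.protocol.port must be unique across co-located instances.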

On Fri, May 26, 2017 at 3:07 PM, Xphos <[hidden email]> wrote:

> Thanks for the help so far but i am now getting stuck i have this error
> form
> the log file and i am not sure how to fix i been trying to look it up
> online
> but there no solution insight maybe you've seen it before.
>
> The Errors
> 2017-05-26 14:54:09,495 ERROR [Leader Election Notification Thread-1]
> o.a.c.f.imps.CuratorFrameworkImpl Background exception was not retry-able
> or
> retry gave up
> 2017-05-26 14:54:09,672 ERROR [/127.0.0.1:3888]
> o.a.z.server.quorum.QuorumCnxManager Exception while listening
>
>
>
> The expanded relevant log file
> 2017-05-26 14:54:00,026 INFO [main] o.a.n.c.r.WriteAheadFlowFileRepository
> Initialized FlowFile Repository using 256 partitions
> 2017-05-26 14:54:00,207 INFO [main] o.a.n.p.lucene.SimpleIndexManager
> Index
> Writer for ./provenance_repository/index-1495817893000 has been returned
> to
> Index Manager and is no longer in use. Closing Index Writer
> 2017-05-26 14:54:00,210 INFO [main] o.a.n.p.PersistentProvenanceRepository
> Recovered 0 records
> 2017-05-26 14:54:00,219 INFO [main] o.a.n.p.PersistentProvenanceRepository
> Created new Provenance Event Writers for events starting with ID 0
> 2017-05-26 14:54:00,223 INFO [main] o.a.n.c.repository.
> FileSystemRepository
> Maximum Threshold for Container default set to 22946397061 bytes; if volume
> exceeds this size, archived data will be deleted until it no longer exceeds
> this size
> 2017-05-26 14:54:00,224 INFO [main] o.a.n.c.repository.
> FileSystemRepository
> Initializing FileSystemRepository with 'Always Sync' set to false
> 2017-05-26 14:54:00,333 INFO [main] org.wali.MinimalLockingWriteAheadLog
> org.wali.MinimalLockingWriteAheadLog@377cbdae finished recovering records.
> Performing Checkpoint to ensure proper state of Partitions before updates
> 2017-05-26 14:54:00,334 INFO [main] org.wali.MinimalLockingWriteAheadLog
> Successfully recovered 0 records in 5 milliseconds
> 2017-05-26 14:54:00,340 INFO [main] org.wali.MinimalLockingWriteAheadLog
> org.wali.MinimalLockingWriteAheadLog@377cbdae checkpointed with 0 Records
> and 0 Swap Files in 6 milliseconds (Stop-the-world time = 2 milliseconds,
> Clear Edit Logs time = 2 millis), max Transaction ID -1
> 2017-05-26 14:54:00,385 INFO [main] o.a.n.c.s.server.ZooKeeperStateServer
> Starting Embedded ZooKeeper Peer
> 2017-05-26 14:54:00,423 INFO [main] o.a.z.server.persistence.FileSnap
> Reading snapshot ./state/zookeeper/version-2/snapshot.0
> 2017-05-26 14:54:00,447 INFO [main] o.apache.nifi.controller.
> FlowController
> Checking if there is already a Cluster Coordinator Elected...
> 2017-05-26 14:54:00,531 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl
> Starting
> 2017-05-26 14:54:07,447 WARN [main] o.a.n.c.l.e.
> CuratorLeaderElectionManager
> Unable to determine the Elected Leader for role 'Cluster Coordinator' due
> to
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /nifi/leaders/Cluster Coordinator;
> assuming no leader has been elected
> 2017-05-26 14:54:07,448 INFO [Curator-Framework-0]
> o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
> 2017-05-26 14:54:07,550 INFO [main] o.apache.nifi.controller.
> FlowController
> It appears that no Cluster Coordinator has been Elected yet. Registering
> for
> Cluster Coordinator Role.
> 2017-05-26 14:54:07,551 INFO [main] o.a.n.c.l.e.
> CuratorLeaderElectionManager
> CuratorLeaderElectionManager[stopped=true] Registered new Leader Selector
> for role Cluster Coordinator; this node is an active participant in the
> election.
> 2017-05-26 14:54:07,551 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl
> Starting
> 2017-05-26 14:54:07,555 INFO [main] o.a.n.c.l.e.
> CuratorLeaderElectionManager
> CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector
> for role Cluster Coordinator; this node is an active participant in the
> election.
> 2017-05-26 14:54:07,555 INFO [main] o.a.n.c.l.e.
> CuratorLeaderElectionManager
> CuratorLeaderElectionManager[stopped=false] started
> 2017-05-26 14:54:07,555 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor
> Heartbeat Monitor started
> 2017-05-26 14:54:08,198 INFO [main] o.e.jetty.server.handler.
> ContextHandler
> Started
> o.e.j.w.WebAppContext@3f67f5ff{/nifi-api,file:///home/quinnl/Desktop/node1/work/jetty/nifi-web-api-1.2.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.2.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-api-1.2.0.war}
> 2017-05-26 14:54:08,753 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=407ms
> 2017-05-26 14:54:08,755 INFO [main] o.e.j.C./nifi-content-viewer No Spring WebApplicationInitializer types detected on classpath
> 2017-05-26 14:54:08,775 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@4c57ca10{/nifi-content-viewer,file:///home/quinnl/Desktop/node1/work/jetty/nifi-web-content-viewer-1.2.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.2.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-content-viewer-1.2.0.war}
> 2017-05-26 14:54:08,785 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.s.h.ContextHandler@787b7796{/nifi-docs,null,AVAILABLE}
> 2017-05-26 14:54:08,821 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=23ms
> 2017-05-26 14:54:08,823 INFO [main] o.e.jetty.ContextHandler./nifi-docs No Spring WebApplicationInitializer types detected on classpath
> 2017-05-26 14:54:08,850 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@a0e35c3{/nifi-docs,file:///home/quinnl/Desktop/node1/work/jetty/nifi-web-docs-1.2.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.2.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-docs-1.2.0.war}
> 2017-05-26 14:54:08,895 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=25ms
> 2017-05-26 14:54:08,896 INFO [main] org.eclipse.jetty.ContextHandler./ No Spring WebApplicationInitializer types detected on classpath
> 2017-05-26 14:54:08,922 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@255b84a9{/,file:///home/quinnl/Desktop/node1/work/jetty/nifi-web-error-1.2.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.2.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-error-1.2.0.war}
> 2017-05-26 14:54:08,932 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector@e280006{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
> 2017-05-26 14:54:08,933 INFO [main] org.eclipse.jetty.server.Server Started @25845ms
> 2017-05-26 14:54:09,480 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
> 2017-05-26 14:54:09,492 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
> 2017-05-26 14:54:09,495 ERROR [Leader Election Notification Thread-1] o.a.c.f.imps.CuratorFrameworkImpl Background exception was not retry-able or retry gave up
> java.lang.IllegalStateException: Client is not started
>         at com.google.common.base.Preconditions.checkState(Preconditions.java:173)
>         at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:114)
>         at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:835)
>         at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:507)
>         at org.apache.curator.framework.imps.FindAndDeleteProtectedNodeInBackground.execute(FindAndDeleteProtectedNodeInBackground.java:60)
>         at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:496)
>         at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:474)
>         at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:44)
>         at org.apache.curator.framework.recipes.locks.StandardLockInternalsDriver.createsTheLock(StandardLockInternalsDriver.java:50)
>         at org.apache.curator.framework.recipes.locks.LockInternals.attemptLock(LockInternals.java:217)
>         at org.apache.curator.framework.recipes.locks.InterProcessMutex.internalLock(InterProcessMutex.java:232)
>         at org.apache.curator.framework.recipes.locks.InterProcessMutex.acquire(InterProcessMutex.java:89)
>         at org.apache.curator.framework.recipes.leader.LeaderSelector.doWork(LeaderSelector.java:386)
>         at org.apache.curator.framework.recipes.leader.LeaderSelector.doWorkLoop(LeaderSelector.java:441)
>         at org.apache.curator.framework.recipes.leader.LeaderSelector.access$100(LeaderSelector.java:64)
>         at org.apache.curator.framework.recipes.leader.LeaderSelector$2.call(LeaderSelector.java:245)
>         at org.apache.curator.framework.recipes.leader.LeaderSelector$2.call(LeaderSelector.java:239)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:748)
> 2017-05-26 14:54:09,669 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=true] stopped and closed
> 2017-05-26 14:54:09,669 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor Heartbeat Monitor stopped
> 2017-05-26 14:54:09,670 INFO [main] o.apache.nifi.controller.FlowController Initiated immediate shutdown of flow controller...
> 2017-05-26 14:54:09,672 ERROR [/127.0.0.1:3888] o.a.z.server.quorum.QuorumCnxManager Exception while listening
> java.net.SocketException: Socket closed
>         at java.net.PlainSocketImpl.socketAccept(Native Method)
>         at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
>         at java.net.ServerSocket.implAccept(ServerSocket.java:545)
>         at java.net.ServerSocket.accept(ServerSocket.java:513)
>         at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:509)
> 2017-05-26 14:54:10,403 INFO [main] o.apache.nifi.controller.FlowController Controller has been terminated successfully.
> 2017-05-26 14:54:10,405 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
> java.lang.Exception: Unable to load flow due to: org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.BindException: Address already in use (Bind failed)
>         at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:799)
>         at org.apache.nifi.NiFi.<init>(NiFi.java:160)
>         at org.apache.nifi.NiFi.main(NiFi.java:267)
> Caused by: org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.BindException: Address already in use (Bind failed)
>         at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:312)
>         at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:789)
>         ... 2 common frames omitted
> Caused by: java.net.BindException: Address already in use (Bind failed)
>         at java.net.PlainSocketImpl.socketBind(Native Method)
>         at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
>         at java.net.ServerSocket.bind(ServerSocket.java:375)
>         at java.net.ServerSocket.<init>(ServerSocket.java:237)
>         at java.net.ServerSocket.<init>(ServerSocket.java:128)
>         at org.apache.nifi.io.socket.SocketUtils.createServerSocket(SocketUtils.java:108)
>         at org.apache.nifi.io.socket.SocketListener.start(SocketListener.java:85)
>         at org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.start(SocketProtocolListener.java:90)
>         at org.apache.nifi.cluster.protocol.impl.NodeProtocolSenderListener.start(NodeProtocolSenderListener.java:64)
>         at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:303)
>         ... 3 common frames omitted
> 2017-05-26 14:54:10,407 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
> 2017-05-26 14:54:10,410 INFO [Thread-1] o.eclipse.jetty.server.AbstractConnector Stopped ServerConnector@e280006{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
> 2017-05-26 14:54:10,410 INFO [Thread-1] org.eclipse.jetty.server.session Stopped scavenging
>
>
>
>
> --
> View this message in context: http://apache-nifi-developer-list.39713.n7.nabble.com/Setting-basic-NiFi-network-problem-tp15989p15998.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
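The decisive line in that log is the final "java.net.BindException: Address already in use": with all three nodes on one machine, every listening port must be unique per node, and the endless leader-election cycle is a downstream symptom of nodes failing to bind and shutting down. A sketch of per-node values follows; the exact port numbers are illustrative assumptions, not values taken from the poster's configs:

    nifi.web.http.port=8080 / 8081 / 8082                (node 1 / node 2 / node 3)
    nifi.cluster.node.protocol.port=9081 / 9082 / 9083
    nifi.remote.input.socket.port=8130 / 8131 / 8132

In conf/zookeeper.properties the server lines are shared by all nodes, but on a single host each server entry needs its own quorum and election ports (hostname:quorumPort:electionPort), rather than repeating 2888:3888 three times:

    server.1=localhost:2888:3888
    server.2=localhost:2889:3889
    server.3=localhost:2890:3890

Each node's embedded ZooKeeper also needs a distinct clientPort (e.g. 2181 / 2182 / 2183), and nifi.zookeeper.connect.string on every node should then list all three, e.g. localhost:2181,localhost:2182,localhost:2183.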