SwingWorker: Function get()

SwingWorker get() freezing interface

As far as I know, you should not call SwingWorker's get() method directly from the UI thread unless you are sure it has finished processing. Instead, call it inside the overridden MySwingWorker.done(), where you can be sure the background task has finished executing.

It does not matter that all JPanel creation is done before blocking: Swing still needs its main UI thread for repainting, updating the UI, and staying responsive. By calling get() you are blocking it.

From the UI thread you should just call execute(), and all processing of the results (productIds = snap.getProductIds(); oldLocationIds = snap.getOldLocationIds(); oldStatusIds = snap.getOldStatusIds(); in your case) should be done inside the done() callback.

A simple example is available in this answer.

In your case it should go something like this:

//inside MySwingWorker
@Override
protected void done() {
    try {
        System.out.println("My long running database process is done. Now I can update UI without blocking/freezing it.");
        GMSnapshotVO snap = get();
        loadingJPanel.updateStatus("Complete");
        loadingJFrame.dispose();
        productIds = snap.getProductIds();
        oldLocationIds = snap.getOldLocationIds();
        oldStatusIds = snap.getOldStatusIds();
        //..do other stuff with ids
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
}

Hope it helps.

SwingWorker done() method throws nullpointerexception with get()

The problem is line 830 of your LogFileAnalyzerGUI class: the itemStateChanged() method of your keywordJComboBox is throwing a NullPointerException.

This method is triggered by your ProcessDatabaseWorker at keywordJComboBox.removeAllItems().

By removing all items, the item state changes to DESELECTED.

The method itemStateChanged() then runs again, but there are no more items in the ComboBox, which causes the NPE.
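A defensive guard in the listener avoids the crash. Here is a minimal sketch; the combo box contents and the keyword handling are placeholders for illustration, not your actual LogFileAnalyzerGUI code:

```java
import java.awt.event.ItemEvent;
import java.awt.event.ItemListener;
import javax.swing.JComboBox;

class ComboGuard {
    static JComboBox<String> build() {
        final JComboBox<String> combo =
                new JComboBox<>(new String[] {"error", "warning"});
        combo.addItemListener(new ItemListener() {
            @Override
            public void itemStateChanged(ItemEvent e) {
                // removeAllItems() fires a DESELECTED event with nothing
                // selected, so bail out before dereferencing the selection.
                if (e.getStateChange() != ItemEvent.SELECTED
                        || combo.getSelectedItem() == null) {
                    return;
                }
                String keyword = combo.getSelectedItem().toString();
                // ... filter the log by keyword ...
            }
        });
        return combo;
    }
}
```

With the guard in place, removeAllItems() can fire the listener safely even though no selection remains.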

SwingWorker done method throws cancellationexception with get()

cancel doesn't just set the isCancelled flag for you to read at your leisure. That would be pretty much useless. It prevents the task from starting if it hasn't already and may actively interrupt the thread if it's already running. As such, getting a CancellationException is the natural consequence of cancelling a running task.

To further the point, the Javadoc on isCancelled states:

Returns true if this task was cancelled before it completed normally.

Hence if this returns true, then your task cannot complete normally. You cannot cancel a task and expect it to continue as per normal.
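In practice that means done() should treat CancellationException as an expected outcome. A minimal sketch, where the sleep loop is a stand-in for real work:

```java
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import javax.swing.SwingWorker;

class CancellableWorker extends SwingWorker<String, Void> {
    @Override
    protected String doInBackground() throws Exception {
        while (!isCancelled()) {
            Thread.sleep(50); // stand-in for real work; cancel(true) interrupts it
        }
        return "not reached once cancelled";
    }

    @Override
    protected void done() {
        try {
            String result = get(); // throws CancellationException if cancelled
            // ... update the GUI with result ...
        } catch (CancellationException e) {
            // Expected when cancel() won the race; clean up quietly.
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }
}
```

The point is that after cancel(true), get() never returns a value; the CancellationException branch is the normal path.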

The SwingWorker docs describe it as "an abstract class to perform lengthy GUI-interaction tasks in a background thread". However, the definition of "lengthy" differs between a GUI and an application lifetime. A 100 ms task is very long for a GUI and is best done by a SwingWorker. A 10-minute task is too long for a SwingWorker simply because SwingWorker uses a limited thread pool, which you may exhaust. Judging by your problem description, you have exactly that: a potentially very long-running task. As such, you should make a proper background thread rather than use a SwingWorker.

In that thread, you would have either an AtomicBoolean or simply a volatile boolean flag that you can manually set from the EDT. The thread can then post an event to the EDT with the result.

Code:

class PacketCaptureWorker implements Runnable {
    private volatile boolean cancelled = false;

    public void cancel() {
        cancelled = true;
    }

    public void run() {
        while (!cancelled) {
            //do work
        }
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                //Use the result of your computation on the EDT
            }
        });
    }
}

new Thread(new PacketCaptureWorker()).start();

SwingWorker, done() is executed before process() calls are finished

SHORT ANSWER:

This happens because publish() doesn't directly schedule process, it sets a timer which will fire the scheduling of a process() block in the EDT after DELAY. So when the worker is cancelled there is still a timer waiting to schedule a process() with the data of the last publish. The reason for using a timer is to implement the optimization where a single process may be executed with the combined data of several publishes.

LONG ANSWER:

Let's see how publish() and cancel interact with each other, for that, let us dive into some source code.

First the easy part, cancel(true):

public final boolean cancel(boolean mayInterruptIfRunning) {
    return future.cancel(mayInterruptIfRunning);
}

This cancel ends up calling the following code:

boolean innerCancel(boolean mayInterruptIfRunning) {
    for (;;) {
        int s = getState();
        if (ranOrCancelled(s))
            return false;
        if (compareAndSetState(s, CANCELLED)) // <-----
            break;
    }
    if (mayInterruptIfRunning) {
        Thread r = runner;
        if (r != null)
            r.interrupt(); // <-----
    }
    releaseShared(0);
    done(); // <-----
    return true;
}

The SwingWorker state is set to CANCELLED, the thread is interrupted, and done() is called. However, this is not SwingWorker's done() but the future's done(), which is specified when the variable is instantiated in the SwingWorker constructor:

future = new FutureTask<T>(callable) {
    @Override
    protected void done() {
        doneEDT(); // <-----
        setState(StateValue.DONE);
    }
};

And the doneEDT() code is:

private void doneEDT() {
    Runnable doDone = new Runnable() {
        public void run() {
            done(); // <-----
        }
    };
    if (SwingUtilities.isEventDispatchThread()) {
        doDone.run(); // <-----
    } else {
        doSubmit.add(doDone);
    }
}

This calls SwingWorker's done() directly if we are in the EDT, which is our case. At this point the SwingWorker should stop: no more publish() calls should happen, which is easy enough to demonstrate with the following modification:

while (!isCancelled()) {
    textArea.append("Calling publish\n");
    publish("Writing...\n");
}

However, we still get a "Writing..." message from process(). So let us see how process() is called. The source code for publish(...) is:

protected final void publish(V... chunks) {
    synchronized (this) {
        if (doProcess == null) {
            doProcess = new AccumulativeRunnable<V>() {
                @Override
                public void run(List<V> args) {
                    process(args); // <-----
                }
                @Override
                protected void submit() {
                    doSubmit.add(this); // <-----
                }
            };
        }
    }
    doProcess.add(chunks); // <-----
}

We see that the run() of the Runnable doProcess is what ends up calling process(args), but this code just calls doProcess.add(chunks), not doProcess.run(), and there's a doSubmit around too. Let's look at doProcess.add(chunks):

public final synchronized void add(T... args) {
    boolean isSubmitted = true;
    if (arguments == null) {
        isSubmitted = false;
        arguments = new ArrayList<T>();
    }
    Collections.addAll(arguments, args); // <-----
    if (!isSubmitted) { // This is what makes a single process() run for multiple publishes
        submit(); // <-----
    }
}

So what publish() actually does is add the chunks to an internal ArrayList called arguments and call submit(). We just saw that submit() merely calls doSubmit.add(this), which is this very same add() method, since both doProcess and doSubmit extend AccumulativeRunnable<V>; this time, however, V is Runnable instead of String as in doProcess. So here a chunk is the Runnable that calls process(args). But the submit() call is a completely different method, defined in the class of doSubmit:

private static class DoSubmitAccumulativeRunnable
        extends AccumulativeRunnable<Runnable> implements ActionListener {
    private final static int DELAY = (int) (1000 / 30);

    @Override
    protected void run(List<Runnable> args) {
        for (Runnable runnable : args) {
            runnable.run();
        }
    }

    @Override
    protected void submit() {
        Timer timer = new Timer(DELAY, this); // <-----
        timer.setRepeats(false);
        timer.start();
    }

    public void actionPerformed(ActionEvent event) {
        run(); // <-----
    }
}

It creates a Timer that fires the actionPerformed code once, after DELAY milliseconds. Once the event is fired, the code is enqueued in the EDT, which calls an internal run(), which ends up calling run(flush()) of doProcess and thus executing process(chunk), where chunk is the flushed data of the arguments ArrayList. I skipped some details; the chain of "run" calls is like this:

  • doSubmit.run()
  • doSubmit.run(flush()) //Actually a loop of runnables but will only have one (*)
  • doProcess.run()
  • doProcess.run(flush())
  • process(chunk)

(*) The boolean isSubmitted and flush() (which resets that boolean) ensure that additional calls to publish() don't add more doProcess runnables to be run in doSubmit.run(flush()); their data, however, is not ignored. Thus a single process() is executed for any number of publish() calls made during the life of a Timer.

All in all, what publish("Writing...") does is scheduling the call to process(chunk) in the EDT after a DELAY. This explains why even after we cancelled the thread and no more publishes are done, still one process execution appears, because the moment we cancel the worker there's (with high probability) a Timer that will schedule a process() after done() is already scheduled.

Why is this Timer used instead of just scheduling process() in the EDT with an invokeLater(doProcess)? To implement the performance optimization explained in the docs:

Because the process method is invoked asynchronously on the Event
Dispatch Thread multiple invocations to the publish method might occur
before the process method is executed. For performance purposes all
these invocations are coalesced into one invocation with concatenated
arguments.
For example:

publish("1");
publish("2", "3");
publish("4", "5", "6");

might result in:
process("1", "2", "3", "4", "5", "6")

We now know that this works because all the publishes that occur within a DELAY interval add their args to that internal arguments variable we saw, and process(chunk) executes with all that data in one go.

IS THIS A BUG? WORKAROUND?

It's hard to tell if this is a bug or not. It might make sense to process the data that the background thread has published, since the work is actually done and you might be interested in updating the GUI with as much information as you can (if that's what process() is doing, for example). Then again, it might not make sense if done() requires all the data to have been processed, and/or a call to process() after done() creates data/GUI inconsistencies.

There's an obvious workaround if you don't want any new process() to be executed after done(), simply check if the worker is cancelled in the process method too!

@Override
protected void process(List<String> chunks) {
    if (isCancelled()) return;
    String string = chunks.get(chunks.size() - 1);
    textArea.append(string);
}

It's trickier to make done() execute after that last process(); for example, done() could itself use a timer that schedules the actual completion work after more than DELAY. Although I can't see this being a common case: if you cancelled, it shouldn't matter to miss one more process() when we know we are in fact cancelling the execution of all the future ones.
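If you do want done()'s work to run after that last straggling process(), one possible sketch is to defer it with a one-shot Swing Timer whose delay exceeds the internal DELAY (about 33 ms). This is an illustration of the idea, not code from the JDK, and the delay value is an assumption:

```java
import javax.swing.SwingWorker;
import javax.swing.Timer;

class DeferredDoneWorker extends SwingWorker<Void, String> {
    @Override
    protected Void doInBackground() throws Exception {
        while (!isCancelled()) {
            publish("Writing...");
            Thread.sleep(10); // pace the publishes; cancel(true) interrupts this
        }
        return null;
    }

    @Override
    protected void done() {
        // Wait longer than publish()'s coalescing DELAY (~33 ms) so any
        // pending process() runs before the real completion work.
        Timer defer = new Timer(100, e -> reallyDone());
        defer.setRepeats(false);
        defer.start();
    }

    private void reallyDone() {
        // ... final GUI updates that must follow the last process() ...
    }
}
```

The one-shot Timer fires its action on the EDT, so reallyDone() is queued behind whatever process() call the internal timer had already scheduled.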

How to use SwingWorker?

First, I call execute() on the worker (MySQL.execute(); in my case), which runs doInBackground(). In doInBackground() I collect the counter values and use publish() to pass a flag denoting that the data was collected successfully:
publish(GraphLock);

After publish() is called, the process(List chunks) method gets invoked. There I check the condition and call the Graph class to generate the graph:

if (GraphLock == true)
    SwingUtilities.invokeLater(new Graph());

It works properly...
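Cleaned up, that flow might look like the following sketch. The class and method names (CounterWorker, collectCounterValues) are illustrative, not the original code:

```java
import java.util.List;
import javax.swing.SwingWorker;

class CounterWorker extends SwingWorker<Void, Boolean> {
    volatile boolean graphBuilt = false; // set once the graph code has run

    @Override
    protected Void doInBackground() throws Exception {
        boolean graphLock = collectCounterValues(); // flag: data collected OK
        publish(graphLock);
        return null;
    }

    @Override
    protected void process(List<Boolean> chunks) {
        // Runs on the EDT, so it is safe to touch Swing components here.
        if (chunks.get(chunks.size() - 1)) {
            graphBuilt = true;
            // ... build and show the graph here ...
        }
    }

    private boolean collectCounterValues() {
        // ... gather counter values from the database ...
        return true;
    }
}
```

Note that because process() already runs on the EDT, there is no need for an extra SwingUtilities.invokeLater there.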

How do I wait for a SwingWorker's doInBackground() method?

Typically anything that needs to be done after a SwingWorker completes its background work is done by overriding the done() method in it. This method is called on the Swing event thread after completion, allowing you to update the GUI or print something out or whatever. If you really do need to block until it completes, you can call get().

NB. Calling get() within the done() method will return with your result immediately, so you don't have to worry about that blocking any UI work.

How can I pass arguments into SwingWorker?

Why not give it a File field and fill that field via a constructor that takes a File parameter?

class ClassB extends SwingWorker<Void, Integer>
{
    private File cfgFile;

    public ClassB(File cfgFile) {
        this.cfgFile = cfgFile;
    }

    @Override
    protected Void doInBackground()
    {
        ClassC.runProgram(cfgFile);
        return null; // a Void doInBackground() must still return null
    }

    @Override
    protected void done()
    {
        try
        {
            tabs.setSelectedIndex(1);
        }
        catch (Exception ignore)
        {
            // *** ignoring exceptions is usually not a good idea. ***
        }
    }
}

And then run it like so:

public void actionPerformed(ActionEvent e)
{
    new ClassB(cfgFile).execute();
}

Update Swing Component using Swing Worker Thread

When the publish() method is invoked very frequently, the values will probably be accumulated before process() is invoked on the EDT.

That is why the process() method receives a List of published objects. It is the responsibility of your code to iterate through the List and update your GUI using all the data in it.

So, given that your doInBackground() logic uses a for loop, I would suggest multiple values are being accumulated and your process() method is only processing one of many.

When you use Thread.sleep(...) you limit the number of objects that will potentially be consolidated into a single process() call.

So the solution is to make sure your process() method iterates through all the objects in the List.
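For example, a process() that consumes every coalesced value, rather than just one, might look like this sketch; the JLabel target and the 1..1000 loop are assumptions for illustration:

```java
import java.util.List;
import javax.swing.JLabel;
import javax.swing.SwingWorker;

class ProgressWorker extends SwingWorker<Void, Integer> {
    private final JLabel status;

    ProgressWorker(JLabel status) {
        this.status = status;
    }

    @Override
    protected Void doInBackground() {
        for (int i = 1; i <= 1000; i++) {
            publish(i); // many of these get coalesced into one process() call
        }
        return null;
    }

    @Override
    protected void process(List<Integer> chunks) {
        // Handle EVERY delivered value, not just chunks.get(0).
        for (Integer value : chunks) {
            // ... update a model, append to a log, etc. ...
        }
        status.setText("Processed up to " + chunks.get(chunks.size() - 1));
    }
}
```

Even though process() may run far fewer times than publish() was called, iterating the whole List guarantees no published value is dropped.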


