Old 09-29-14, 12:38 AM   #10
nopoe
Quote:
Originally Posted by magicstix
USML is not designed to run "real-time," though in your case what this means depends on how you're using it.

If you're just using it to control how bright lines are on your sonar display, you should be OK with it producing results every few seconds.

If you're trying to use it to generate actual acoustics (sound), you'll definitely run into problems. For one, it's designed to give more of a qualitative overview of the ocean environment, so it doesn't necessarily provide the kind of detail you'll need to generate the actual sound you'd hear. The fact that it is also slow means it'll be very difficult to generate acoustics without extensive interpolation/extrapolation of the model's output.

Using the eigenrays will pose a big challenge to you as well. Expect the model to generate thousands of eigenrays for a single point-to-point run. How you use those eigenrays will dramatically affect the overall presentation of the ocean, and will be important especially for how multipath effects show up.
Yeah, I know it's not a real-time modeling engine, but I think I'll probably be OK. As far as I can tell from the unit test code, it gives you a set of propagation losses for the frequencies you propagate. My plan is to propagate a frequency vector and store the propagation losses and approximate travel times on a per-vessel basis; the USML worker thread then moves on to modeling the next vessel's sound. Meanwhile, the sonar display iterates through all vessels, takes the most recent propagation loss for each, and combines it with the amount of sound the contact was making in each frequency band n seconds ago (where n is the travel time) to find the "brightness" of the line on the broadband display. So if the other vessel makes a ton of noise for a split second, you'll still hear it even if no wavefront propagation happened during that instant.
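To make the bookkeeping concrete, here's a minimal C++ sketch of the per-vessel scheme I described. All the names (VesselAcoustics, received_level_db, etc.) are mine, not USML's, and the dB bookkeeping is simplified:

```cpp
#include <cassert>
#include <cmath>
#include <deque>
#include <vector>

// Each vessel keeps the most recent propagation losses (dB, one per
// frequency band) plus the approximate travel time to own ship.
struct VesselAcoustics {
    std::vector<double> prop_loss_db;   // latest USML result, per band
    double travel_time_s = 0.0;         // approximate one-way travel time
    // History of the contact's source levels (dB, per band), one entry per
    // display tick, newest first, so the display can look up what the
    // contact was emitting n seconds ago.
    std::deque<std::vector<double>> source_level_history;
    double tick_s = 1.0;                // seconds per history entry
};

// Received level: the source level from (now - travel_time) minus the most
// recent propagation loss for that band.
double received_level_db(const VesselAcoustics& v, std::size_t band) {
    std::size_t lag = static_cast<std::size_t>(v.travel_time_s / v.tick_s);
    if (lag >= v.source_level_history.size())
        return -INFINITY;               // sound hasn't arrived yet
    const std::vector<double>& past = v.source_level_history[lag];
    return past[band] - v.prop_loss_db[band];
}
```

The point is that the display only ever reads the latest stored loss, so a stale propagation run just means a slightly stale attenuation, not a missed transient.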

From my tests on Linux I think this is pretty doable in real time on a fast, modern quad-core, as long as you don't need to update propagation losses too often and you have a reasonable number of vessels nearby. I've been testing with the malta-movie unit test as my benchmark, which (I think) uses eigenrays, so I should be fine.

On Linux it took about 4 seconds to finish the malta-movie unit test, including the time to load the bathymetry and temperature data from disk (though I removed the saving of the netCDF data). The test propagates a wavefront with 90 individual rays for 60 simulated seconds. If the average contact is about 60 seconds of sound travel away, that means one core can update one contact's propagation losses every 4 seconds. Using multiple independent USML worker threads (assuming I can set that up) to work through the vessels round-robin, each vessel gets fresh losses roughly every 4·n/k seconds, where n is the number of vessels and k is the number of workers.
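The scheduling math is trivial, but here it is pinned down as a helper; the 4-second run time is from my benchmark, and the round-robin assumption is mine:

```cpp
#include <cassert>

// Back-of-the-envelope update period per vessel: if one propagation run
// takes run_s seconds on one core, and k worker threads share n vessels
// round-robin, each vessel gets fresh propagation losses roughly every
// run_s * n / k seconds.
double update_period_s(double run_s, int n_vessels, int n_workers) {
    return run_s * n_vessels / n_workers;
}
```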

With 20 vessels, which seems like a lot to me, that's one update every 20 seconds on a quad-core. Twenty seconds sounds like a very long time to go without updating propagation losses, but keep in mind the sonar display still tracks the sound the contacts have been making during that interval. Additionally, that figure is for contacts almost 50 nautical miles away. Even if two contacts are racing toward each other at 30 knots each, they'll only cover about 600 meters in that time; to put that in perspective, 600 m is roughly 0.7% of the total separation. That's not going to change their acoustic situation much. Sound from contacts closer to the sonar array takes (approximately) proportionally less time to propagate; from what I can tell, the complexity per timestep really depends on what the rays are doing in the ocean, but roughly speaking, closer contacts can be updated more frequently.
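A quick sanity check on those numbers, assuming a nominal sound speed of 1500 m/s (60 seconds of travel time is then about 90 km, i.e. just under 50 nautical miles):

```cpp
#include <cassert>

constexpr double kKnotToMps = 0.514444;   // 1 knot in meters per second

// Distance two contacts close in one update period, both moving
// head-on at the same speed.
double closing_distance_m(double knots_each, double period_s) {
    return 2.0 * knots_each * kKnotToMps * period_s;
}

// Separation implied by a one-way acoustic travel time.
double separation_m(double travel_time_s, double sound_speed_mps = 1500.0) {
    return travel_time_s * sound_speed_mps;
}
```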

I'm by no means an expert on submarines, computational acoustics or anything else that this project involves, but I do think I did the benchmarks and the math right.

Like I said, though, it's too slow on Windows. Way too slow. And I think OpenCL is probably the wrong choice because of the overhead associated with it, so I'm currently looking at Eigen as an alternative.

The current status is that most of the code in the "ublas" subdirectory has been rewritten to use Eigen, and I've started moving the rest of the library over. I'm currently puzzling over how best to handle the seq_vector class. I know it's a pretty big undertaking, but the project is dead on Windows otherwise, so it's worth a shot. If I do get it working, I think performance on Windows should be as good as, or better than, it was on Linux with uBLAS. That's not a small "if," though. It would be really nice if Visual Studio generated faster code...

Last edited by nopoe; 09-29-14 at 01:13 PM.