Link Posted: 7/4/2015 2:52:10 PM EDT
[#1]
Link Posted: 7/4/2015 2:56:46 PM EDT
[#2]
Quoted:
Quoted:
Got the example MPI hello, world working across all 4 boards in the cluster.

Spending way too much time playing 'linux system admin' though to get to this point, cutting into coding time

Hello World from MPI Process 3 on machine parallella1
Hello World from MPI Process 1 on machine parallella1
Hello World from MPI Process 2 on machine parallella1
Hello World from MPI Process 9 on machine parallella3
Hello World from MPI Process 0 on machine parallella1
Hello World from MPI Process 8 on machine parallella3
Hello World from MPI Process 10 on machine parallella3
Hello World from MPI Process 5 on machine parallella2
Hello World from MPI Process 4 on machine parallella2
Hello World from MPI Process 13 on machine parallella4
Hello World from MPI Process 6 on machine parallella2
Hello World from MPI Process 14 on machine parallella4
Hello World from MPI Process 12 on machine parallella4
Hello World from MPI Process 7 on machine parallella2
Hello World from MPI Process 15 on machine parallella4
Hello World from MPI Process 11 on machine parallella3


I'm honestly unsure how to code that with the Epiphany cores. Similar base code, same with the Epiphany code (I think, other than a common RAM pool), but getting the right work blocks to the correct Epiphany core and back to the display is where I'm hitting a speed choke point in my head. I've only administered clusters; I haven't coded across them since they were acting as a virtual server (code ready-made)...






Me too; I think I have a steep learning curve ahead of me.
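For reference, an MPI hello-world of the kind quoted above looks roughly like this in C. This is a sketch, not necessarily the exact example that was run; the hostfile name and process count in the launch line are assumptions.

/*
 * Minimal MPI hello world: each rank prints its rank and host name.
 * Build:  mpicc hello.c -o hello
 * Launch across the boards with something like:
 *   mpirun -np 16 -hostfile hosts ./hello        (hostfile name assumed)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int  rank, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(name, &len);

    printf("Hello World from MPI Process %d on machine %s\n", rank, name);

    MPI_Finalize();
    return 0;
}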
Link Posted: 7/4/2015 2:58:09 PM EDT
[#3]
Link Posted: 7/4/2015 3:13:24 PM EDT
[#4]
That's interesting; I might have to look at it to see whether it fits with anything I've been working on.

I do have Windows 10 Core currently running on a Raspberry Pi 2. Basically I've just been piddling with the settings, management, and deployment, and getting something on there that can browse to a web app. It may prove to be a very economical replacement for some of the thin-client stuff we currently have.
Link Posted: 7/4/2015 3:36:32 PM EDT
[#5]
I am thinking I will need controller programs running on each board, waiting for MPI messages to kick off processing of the current iteration's data.
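A hedged sketch of that controller idea: one MPI rank per board blocks on MPI_Recv, processes the work block for the current iteration (this is where the local Epiphany cores would be driven), and sends the result back to rank 0. The tags, block size, and single round of work are made up for illustration.

/* Per-board controller sketch: rank 0 hands out work, workers loop on
 * MPI_Recv until told to stop.  BLOCK_SIZE and the tags are hypothetical. */
#include <mpi.h>
#include <stdio.h>

#define BLOCK_SIZE 1024
#define TAG_WORK   1
#define TAG_STOP   2

int main(int argc, char **argv)
{
    int   rank, nranks;
    float block[BLOCK_SIZE] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    if (rank == 0) {
        /* Controller: one work block per worker, collect results, then stop. */
        for (int w = 1; w < nranks; w++)
            MPI_Send(block, BLOCK_SIZE, MPI_FLOAT, w, TAG_WORK, MPI_COMM_WORLD);
        for (int w = 1; w < nranks; w++)
            MPI_Recv(block, BLOCK_SIZE, MPI_FLOAT, w, TAG_WORK, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        for (int w = 1; w < nranks; w++)
            MPI_Send(block, 0, MPI_FLOAT, w, TAG_STOP, MPI_COMM_WORLD);
    } else {
        /* Worker: wait for a message, process it, reply, until told to stop. */
        MPI_Status st;
        for (;;) {
            MPI_Recv(block, BLOCK_SIZE, MPI_FLOAT, 0, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP)
                break;
            /* ... hand the block to the local Epiphany cores here ... */
            MPI_Send(block, BLOCK_SIZE, MPI_FLOAT, 0, TAG_WORK, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}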
Link Posted: 7/4/2015 4:58:33 PM EDT
[#6]
Link Posted: 7/5/2015 1:40:55 PM EDT
[#7]
It looks like I am going to go with OpenCL. They have a Parallella OpenCL example for matrix multiplication that uses the Epiphany cores; I'm not sure yet whether it spans across boards or not, though.
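For context, the core of such a matrix-multiplication example is a kernel along these lines. This is a generic OpenCL C sketch of the usual pattern, not the actual Parallella example; the kernel name and argument layout are assumptions, and on the Parallella the host code would target the Epiphany as the OpenCL device.

/* Generic OpenCL C kernel: each work-item computes one element of
 * C = A * B for n x n row-major matrices.  Names are made up. */
__kernel void matmul(const int n,
                     __global const float *A,
                     __global const float *B,
                     __global float *C)
{
    int row = get_global_id(1);
    int col = get_global_id(0);
    float acc = 0.0f;
    for (int k = 0; k < n; k++)
        acc += A[row * n + k] * B[k * n + col];
    C[row * n + col] = acc;
}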
Link Posted: 7/5/2015 1:55:53 PM EDT
[#8]
Link Posted: 7/5/2015 2:11:23 PM EDT
[#9]
Actually, now I've found an example N-body sim from Brown Deer Technology that uses MPI to spread the load across the Epiphany cores on a single board.

This is confusing.

I am having trouble getting their example built. I may go that route if I can get it working on one board, then scale to the other boards if possible.

Too much new stuff to learn.
Link Posted: 7/5/2015 2:42:30 PM EDT
[#10]
Link Posted: 7/5/2015 3:26:11 PM EDT
[#11]
Alright, I finally got the N-body MPI example built. The COPRTHR library that Brown Deer has prebuilt for download on their website is not the most current one, so I had to get the library off their GitHub and build it myself, which was another challenge, but I got it done.

Now I have something I can dig into and start hacking away at, which is what I do best.
Link Posted: 7/5/2015 4:13:09 PM EDT
[#12]
Link Posted: 7/5/2015 7:40:21 PM EDT
[#13]
4096 stars using MPI on one board.

My code is faster than the MPI version with smaller numbers of stars, but the MPI version is faster with bigger numbers.

Not as smooth as I would like yet, but it's a start.

Link Posted: 7/5/2015 8:07:38 PM EDT
[#14]
Link Posted: 7/5/2015 8:20:05 PM EDT
[#15]
Quoted:
Quoted:
4096 stars using MPI on one board.

My code is faster than the MPI version on smaller number of stars, but the MPI version is faster on bigger numbers.

Not as smooth as I would like yet, but it's a start.

http://youtu.be/DOi9BTTJTaY


Is that using the ARM cores MPI, or one board with the coprocessor lib?  I'm confused...    Looks good, though!



Using the 16-core Epiphany on one board.

I have the MPI Hello World example running across 4 boards, so I may be able to work some magic here and get the star sim spread over 4 boards using MPI.
Link Posted: 7/6/2015 12:22:25 PM EDT
[#16]
Link Posted: 7/6/2015 1:42:01 PM EDT
[#17]
Quoted:
Quoted:
Quoted:
Quoted:
4096 stars using MPI on one board.

My code is faster than the MPI version on smaller number of stars, but the MPI version is faster on bigger numbers.

Not as smooth as I would like yet, but it's a start.

http://youtu.be/DOi9BTTJTaY


Is that using the ARM cores MPI, or one board with the coprocessor lib?  I'm confused...    Looks good, though!



Using the 16 core epiphany on one board

I have the MPI Hello World example running across 4 boards, so I may be able to work some magic here and get the star sim spread over 4 boards using MPI


So the demo above with more stars is using both cores of the ARM plus the Copper Threads, or just both cores plus the same epiphany code you have been using?   Too many cores to clearly get which ones you are using MPI with...


The star calculations are all done on the 16 Epiphany cores; the ARM is just doing the actual display of the stars. The 16 cores do the heavy lifting.
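For a rough picture of that split, here is a hedged host-side sketch using the Epiphany e-hal API: the ARM loads a kernel onto the 4x4 workgroup and reads star positions back each frame for display. The kernel file name, the local-memory offset of the position buffer, and the fixed frame loop are hypothetical placeholders, not the poster's actual code.

/* Host-side (ARM) sketch: load a star kernel onto all 16 Epiphany cores,
 * then read each core's slice of positions back for display. */
#include <e-hal.h>
#include <stdio.h>

#define N_STARS   4096
#define LOCAL_BUF 0x2000   /* hypothetical offset of the position buffer in core-local memory */

int main(void)
{
    e_epiphany_t dev;
    float positions[2 * N_STARS];          /* x,y per star */

    e_init(NULL);
    e_reset_system();
    e_open(&dev, 0, 0, 4, 4);              /* the full 4x4 (16-core) workgroup */
    e_load_group("star_kernel.srec", &dev, 0, 0, 4, 4, E_TRUE);  /* file name assumed */

    for (int frame = 0; frame < 60; frame++) {
        /* Each core owns N_STARS/16 stars; read every core's slice back. */
        for (int core = 0; core < 16; core++) {
            e_read(&dev, core / 4, core % 4, LOCAL_BUF,
                   &positions[core * 2 * (N_STARS / 16)],
                   2 * (N_STARS / 16) * sizeof(float));
        }
        /* ... plot 'positions' to the display here ... */
    }

    e_close(&dev);
    e_finalize();
    return 0;
}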
Link Posted: 7/6/2015 2:28:41 PM EDT
[#18]
Link Posted: 7/6/2015 10:11:21 PM EDT
[#19]
Link Posted: 7/7/2015 5:53:01 AM EDT
[#20]
Quoted:
An idea to help, get the source to xstar

It's a *nix program that makes an X11 window and does an n-body simulation by a number of algorithms, which you can choose.

XStar is an X11 client that ''solves'' the n-body problem, and displays the results on the screen. It starts by putting a bunch of stars on the screen, and then it lets the inter-body gravitational forces move the stars around. The result is a lot of neat wandering paths, as the stars interact and collide. Try using the display mode options (-c, -C, -R, or -M) to make things more colorful.


Source Code


I'll check it out, thanks.

I am working on moving the MPI graphics to the Epiphany cores and DMA-copying the star positions into the frame buffer from there.

If I get that working, it should blow the doors off, speed-wise.
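A minimal device-side sketch of that DMA idea, using the Epiphany e-lib: each core DMA-copies the slice of pixel data it has just rendered into the shared frame buffer. The frame-buffer address, slice size, and pixel format here are placeholders; the post doesn't give those details.

/* Device-side (Epiphany e-lib) sketch.  FB_BASE and SLICE_PX are assumptions:
 * FB_BASE stands in for wherever the frame buffer is mapped in the Epiphany's
 * view of shared memory, and SLICE_PX for the pixels each core owns. */
#include <e_lib.h>

#define FB_BASE   ((char *)0x8E000000)   /* placeholder shared-memory address */
#define SLICE_PX  4096                   /* pixels per core, hypothetical */

void flush_slice(unsigned *local_pixels)
{
    unsigned row, col;
    e_coords_from_coreid(e_get_coreid(), &row, &col);

    unsigned core   = row * 4 + col;                     /* 0..15 on a 16-core chip */
    unsigned offset = core * SLICE_PX * sizeof(unsigned);

    /* Blocking DMA copy of this core's rendered slice into the frame buffer. */
    e_dma_copy(FB_BASE + offset, local_pixels, SLICE_PX * sizeof(unsigned));
}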
Link Posted: 7/7/2015 3:20:18 PM EDT
[#21]
I nailed it!

3000 stars using MPI, with the Epiphany cores DMA-copying the graphics data to the frame buffer. Mucho fast now!

Link Posted: 7/7/2015 4:05:51 PM EDT
[#22]
Playing with the timestep, I can hit warp speed.

Yes, that is my fat thumb in the vid.

I knew this would be fast if I could get the DMA graphics thing working.

Link Posted: 7/7/2015 4:39:25 PM EDT
[#23]
Link Posted: 7/7/2015 4:43:07 PM EDT
[#24]
Quoted:
Awesome!   Is that using all 64 epiphany cores, or just the single board yet?



Just one board, 16 cores.

It won't work on all 4 boards, since I can't DMA-copy from one board into another board's frame buffer.

I think I'm going to have them generate OpenGL commands and display on my iMac.

Kinda like this guy did with his Parallella and a Raspberry Pi.

Link Posted: 7/7/2015 5:00:04 PM EDT
[#25]
Great updates, awesome work.
Link Posted: 7/7/2015 5:07:32 PM EDT
[#26]
Quoted:
Great updates, awesome work.


Thanks!

Since it is all offloaded to the Epiphany coprocessor now, I can do other things while it runs.

Kind of like an active wallpaper

Link Posted: 7/7/2015 6:37:55 PM EDT
[#27]
Link Posted: 7/7/2015 6:43:23 PM EDT
[#28]
Yeah, this is fast enough now that I am not even going to bother trying to code a Barnes-Hut algorithm.
Link Posted: 7/7/2015 7:05:49 PM EDT
[#29]
Link Posted: 7/8/2015 2:33:38 PM EDT
[#30]
Quoted:
Now make it X11 enabled.

(same problem with memory/network speed, I know...)



You should get your Parallella today; tracking says 'out for delivery'.
Link Posted: 7/8/2015 3:57:05 PM EDT
[#31]
Link Posted: 7/8/2015 4:03:09 PM EDT
[#32]
Cool, glad you got it ok.

That is the server version; I am not as up to speed on those, so check parallella.org for more info.

They have a good forum, not as active as here (who is?), but tons of info. I got to where I am today with this from there.

Link Posted: 7/8/2015 4:10:10 PM EDT
[#33]
Link Posted: 7/8/2015 4:16:07 PM EDT
[#34]
Quoted:
Quoted:
Cool, glad you got it ok.

That is the server version, I am not as up to speed on those, check parallella.org for more info.

They have a good forum, not as active as here (who is?) but tons of info I got to where I am today with this from there.



Will do that.  An additional big hump for me is re-learning X11 coding, haven't started "raw" for a long time, and even then was with Motif objects.

I'll try to run it through a Raspberry Pi to OpenGL using that guy's code in the video if I can find it...  Off for more Pi...



Ok.

My first job on Wall St was doing X11 Motif coding for trading systems. That was too long ago (20 years) to remember that stuff, but I may be able to help locate what you need.
Link Posted: 7/8/2015 4:24:41 PM EDT
[#35]
Link Posted: 7/8/2015 4:40:35 PM EDT
[#36]
Go here for the N-body code from my latest videos.

https://github.com/USArmyResearchLab/mpi-epiphany

The path to get to where it all compiles and runs is kind of a pain, but I can walk you through it.

That example has no star output, just text, so it should run on that server version OK once you work through the .lib issues I did.
Link Posted: 7/8/2015 4:46:29 PM EDT
[#37]
Link to the Epiphany SDK document

http://adapteva.com/docs/epiphany_sdk_ref.pdf

Also, the Parallella Chronicles that ARFCOM member 'AD_UK' creates:


https://www.parallella.org/2014/11/25/parallella-chronicles-part-one-2/
Link Posted: 7/8/2015 5:20:46 PM EDT
[#38]
If I could do a 'redo' on my life, I would have skipped Wall St and done this stuff; it fascinates me like no other non-hot-female subject does.

Link Posted: 7/8/2015 5:39:09 PM EDT
[#39]
A gravity sim that Raspberry Pi gamers find interesting, lol.

Link Posted: 7/8/2015 5:46:11 PM EDT
[#40]
Link Posted: 7/8/2015 6:13:10 PM EDT
[#41]
Quoted:
Quoted:
If I could do a 'redo' on my life, I would have skipped Wall St and done this stuff, fascinates me like no other non-hot female subject does.

https://www.youtube.com/watch?v=-S-T_iTiAxQ


This one is pretty cool too:

https://youtu.be/MncUDWhPB_E


I have seen that one before.

Pretty cool, but it reminds me of a wet paintbrush slinging paint around.

I am sure the science behind it is spot on, but the rendering looks like cotton candy; someone could tune that up a bit, I think.
Link Posted: 7/8/2015 6:46:13 PM EDT
[#42]
Link Posted: 7/8/2015 6:53:41 PM EDT
[#43]
Quoted:
Quoted:
Quoted:
Quoted:
If I could do a 'redo' on my life, I would have skipped Wall St and done this stuff, fascinates me like no other non-hot female subject does.

https://www.youtube.com/watch?v=-S-T_iTiAxQ


This one is pretty cool too:

https://youtu.be/MncUDWhPB_E


I have seen that one before.

Pretty cool, but it reminds me of a wet paint brush slinging paint around

I am sure the science behind it is spot on, but the rendering looks like cotton candy, someone could tune that up a bit I think.


They spent 8 months waiting for it.  I guess I'd use what I had after that much time too.  They could probably re-render it in another color pretty quick today and re-post it.

That reminds me, how much torture would your code go through to run on that nVidia card you showed the smoke simulation on?   Be interesting to compare them on a same size problem.  



The Nvidia Jetson would blow the Parallella out of the water for this, but that is not a knock against the Parallella.

Parallella is very cool for what it is, and I am learning a ton playing with it. It just wasn't made to compete with the likes of a GPU; it's more of a low-wattage, embedded-system type of chip.

I believe Samsung invested $3 million in the company, so maybe they will make an appearance in cell phones or tablets one of these days.

It's tough to be a small company in this space these days. I wish them well; the founder of Adapteva seems like a great guy.

Also, there are 64-core chips out in the wild; not many, but some Kickstarter backers got them.

I think they could ramp up to 1024 cores pretty quickly if the right customer comes along to pay for it.
Link Posted: 7/8/2015 7:26:13 PM EDT
[#44]
Recent Adapteva vid



Hmm, that one cuts off for some reason. Here's another one on programming the Epiphany.

Link Posted: 7/8/2015 10:01:07 PM EDT
[#45]
Link Posted: 7/9/2015 12:50:58 PM EDT
[#46]
Link Posted: 7/9/2015 1:09:28 PM EDT
[#47]


Awesome, that must be a complicated orbital pattern
Link Posted: 7/9/2015 5:27:05 PM EDT
[#48]
I have been asked to scale this up to 4096 stars, yikes!
Link Posted: 7/9/2015 10:31:41 PM EDT
[#49]
Link Posted: 7/10/2015 6:34:12 AM EDT
[#50]
4096 stars
