Fortran: How to Get the Node Name of a Cluster


If your code is parallelised with MPI - which is fairly common for a code running on a cluster - then just call MPI_Get_processor_name(), which does exactly this.
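A minimal sketch of the MPI route (this assumes the mpi module is available; older setups may need include 'mpif.h' instead):

program node_name
   use mpi
   implicit none
   character( len = MPI_MAX_PROCESSOR_NAME ) :: pname
   integer :: plen, ierr

   call MPI_Init( ierr )
   ! fills pname with the node name and plen with its length
   call MPI_Get_processor_name( pname, plen, ierr )
   print *, 'running on: ', pname( 1: plen )
   call MPI_Finalize( ierr )
end program node_name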
If not, use the iso_c_binding module to call the C function gethostname(), which does the same job.

EDIT: here is an example of how to call gethostname() through the iso_c_binding module. I'm definitely not an expert with this, so it might not be the most efficient approach ever...

module unistd
   interface
      ! C prototype: int gethostname(char *name, size_t len)
      integer( kind = C_INT ) function gethostname( hname, len ) bind( C, name = 'gethostname' )
         use iso_c_binding
         implicit none
         character( kind = C_CHAR ) :: hname( * )
         integer( kind = C_SIZE_T ), value :: len
      end function gethostname
   end interface
end module unistd

program hostname
   use iso_c_binding
   use unistd
   implicit none
   integer( kind = C_INT ), parameter :: sl = 100
   character( kind = C_CHAR ) :: hn( sl )   ! raw buffer filled by gethostname()
   character( len = sl )     :: fn          ! Fortran string built from the buffer
   character :: c
   integer :: res, i, j

   res = gethostname( hn, int( sl, kind = C_SIZE_T ) )
   if ( res == 0 ) then
      ! copy characters up to the terminating NUL...
      do i = 1, sl
         c = hn( i )
         if ( c == char( 0 ) ) exit
         fn( i: i ) = c
      end do
      ! ...and blank-pad the rest of the Fortran string
      do j = i, sl
         fn( j: j ) = ' '
      end do
      print *, "->", trim( fn ), "<-"
   else
      print *, "call to gethostname() didn't work..."
   end if
end program hostname

Obtaining current host name from Cray Fortran

Cray Fortran is quite far ahead in its support for modern Fortran features.
You can call gethostname() using the C interoperability features of Fortran 2003; note that the name it returns is null-terminated.
You can also probably use the GET_ENVIRONMENT_VARIABLE intrinsic subroutine from Fortran 2003.
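For example, something along these lines (HOSTNAME is set by many shells but is not guaranteed to exist, so the status argument should be checked):

program hostname_env
   implicit none
   character( len = 255 ) :: hname
   integer :: length, status

   ! status == 0 means the variable exists and its value fitted into hname
   call get_environment_variable( 'HOSTNAME', hname, length, status )
   if ( status == 0 ) then
      print *, trim( hname )
   else
      print *, 'HOSTNAME is not set in this environment'
   end if
end program hostname_env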

How to orchestrate members in a cluster to read new input from a single file once the current job is done?

As much as it pains me to suggest it, this might be the one good use of MPI "shared file pointers". These work in Fortran, too, but I'm probably going to get the syntax wrong.

Each process can read a row from the file with MPI_File_read_shared. This independent I/O routine updates a global "shared file pointer" bit of state. Should B or C finish their work quickly, they can call MPI_File_read_shared again. If A is slow, whenever it calls MPI_File_read_shared it will read whatever has not been dealt with yet.
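A rough sketch of that pattern in Fortran (the file name work_items.txt and the fixed 80-character records are invented for this example; a real input file would need records all readers agree on):

program shared_reader
   use mpi
   implicit none
   integer :: ierr, rank, fh
   integer :: istat( MPI_STATUS_SIZE )
   character( len = 80 ) :: line

   call MPI_Init( ierr )
   call MPI_Comm_rank( MPI_COMM_WORLD, rank, ierr )

   ! every process opens the same input file collectively
   call MPI_File_open( MPI_COMM_WORLD, 'work_items.txt', MPI_MODE_RDONLY, &
                       MPI_INFO_NULL, fh, ierr )

   ! each call advances the shared file pointer, so whichever process
   ! asks next gets the next 80-character record, regardless of rank
   call MPI_File_read_shared( fh, line, 80, MPI_CHARACTER, istat, ierr )
   print *, 'rank', rank, 'got: ', trim( line )

   call MPI_File_close( fh, ierr )
   call MPI_Finalize( ierr )
end program shared_reader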

Some warnings:

  • Shared file pointers don't get a lot of attention.
  • The global bit of shared state is typically... a hidden file. So yeah, it might not scale terribly well. Should be fine for a few tens of processes, though.
  • The global bit of shared state is stored on a file system. Some file systems like PVFS do not support the locking required to ensure this shared state is always correct.

