Vendor import of LLVM openmp trunk r351319 (just before the release_80 branch point):
https://llvm.org/svn/llvm-project/openmp/trunk@351319
This commit is contained in:
Dimitry Andric 2019-03-14 20:09:10 +00:00
commit 4254a3821b
Notes: svn2git 2020-12-20 02:59:44 +00:00
svn path=/vendor/llvm-openmp/dist/; revision=345153
svn path=/vendor/llvm-openmp/openmp-trunk-r351319/; revision=345154; tag=vendor/llvm-openmp/openmp-trunk-r351319
110 changed files with 101454 additions and 0 deletions

CREDITS.txt (new file, 61 lines)
This file is a partial list of people who have contributed to the LLVM/openmp
project. If you have contributed a patch or made some other contribution to
LLVM/openmp, please submit a patch to this file to add yourself, and it will be
done!
The list is sorted by surname and formatted to allow easy grepping and
beautification by scripts. The fields are: name (N), email (E), web-address
(W), PGP key ID and fingerprint (P), description (D), and snail-mail address
(S).
N: Adam Azarchs
W: 10xgenomics.com
D: Bug fix for lock code
N: Carlo Bertolli
W: http://ibm.com
D: IBM contributor to PowerPC support in CMake files and elsewhere.
N: Diego Caballero
E: diego.l.caballero@gmail.com
D: Fork performance improvements
N: Sunita Chandrasekaran
D: Contributor to testsuite from OpenUH
N: Barbara Chapman
D: Contributor to testsuite from OpenUH
N: University of Houston
W: http://web.cs.uh.edu/~openuh/download/
D: OpenUH test suite
N: Intel Corporation OpenMP runtime team
W: http://openmprtl.org
D: Created the runtime.
N: John Mellor-Crummey and other members of the OpenMP Tools Working Group
E: johnmc@rice.edu
D: OpenMP Tools Interface (OMPT)
N: Matthias Muller
D: Contributor to testsuite from OpenUH
N: Tal Nevo
E: tal@scalemp.com
D: ScaleMP contributor to improve runtime performance there.
W: http://scalemp.com
N: Pavel Neytchev
D: Contributor to testsuite from OpenUH
N: Steven Noonan
E: steven@uplinklabs.net
D: Patches for the ARM architecture and removal of several inconsistencies.
N: Alp Toker
E: alp@nuanti.com
D: Making build work for FreeBSD.
N: Cheng Wang
D: Contributor to testsuite from OpenUH

LICENSE.txt (new file, 174 lines)
==============================================================================
The software contained in this directory tree is dual licensed under both the
University of Illinois "BSD-Like" license and the MIT license. As a user of
this code you may choose to use it under either license. As a contributor,
you agree to allow your code to be used under both. The full text of the
relevant licenses is included below.
In addition, a license agreement from the copyright/patent holders of the
software contained in this directory tree is included below.
==============================================================================
University of Illinois/NCSA
Open Source License
Copyright (c) 1997-2019 Intel Corporation
All rights reserved.
Developed by:
OpenMP Runtime Team
Intel Corporation
http://www.openmprtl.org
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal with
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimers.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimers in the
documentation and/or other materials provided with the distribution.
* Neither the names of Intel Corporation OpenMP Runtime Team nor the
names of its contributors may be used to endorse or promote products
derived from this Software without specific prior written permission.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH THE
SOFTWARE.
==============================================================================
Copyright (c) 1997-2019 Intel Corporation
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
==============================================================================
Intel Corporation
Software Grant License Agreement ("Agreement")
Except for the license granted herein to you, Intel Corporation ("Intel") reserves
all right, title, and interest in and to the Software (defined below).
Definition
"Software" means the code and documentation as well as any original work of
authorship, including any modifications or additions to an existing work, that
is intentionally submitted by Intel to llvm.org (http://llvm.org) ("LLVM") for
inclusion in, or documentation of, any of the products owned or managed by LLVM
(the "Work"). For the purposes of this definition, "submitted" means any form of
electronic, verbal, or written communication sent to LLVM or its
representatives, including but not limited to communication on electronic
mailing lists, source code control systems, and issue tracking systems that are
managed by, or on behalf of, LLVM for the purpose of discussing and improving
the Work, but excluding communication that is conspicuously marked otherwise.
1. Grant of Copyright License. Subject to the terms and conditions of this
Agreement, Intel hereby grants to you and to recipients of the Software
distributed by LLVM a perpetual, worldwide, non-exclusive, no-charge,
royalty-free, irrevocable copyright license to reproduce, prepare derivative
works of, publicly display, publicly perform, sublicense, and distribute the
Software and such derivative works.
2. Grant of Patent License. Subject to the terms and conditions of this
Agreement, Intel hereby grants you and to recipients of the Software
distributed by LLVM a perpetual, worldwide, non-exclusive, no-charge,
royalty-free, irrevocable (except as stated in this section) patent license
to make, have made, use, offer to sell, sell, import, and otherwise transfer
the Work, where such license applies only to those patent claims licensable
by Intel that are necessarily infringed by Intel's Software alone or by
combination of the Software with the Work to which such Software was
submitted. If any entity institutes patent litigation against Intel or any
other entity (including a cross-claim or counterclaim in a lawsuit) alleging
that Intel's Software, or the Work to which Intel has contributed constitutes
direct or contributory patent infringement, then any patent licenses granted
to that entity under this Agreement for the Software or Work shall terminate
as of the date such litigation is filed.
Unless required by applicable law or agreed to in writing, the software is
provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
either express or implied, including, without limitation, any warranties or
conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE.
==============================================================================
ARM Limited
Software Grant License Agreement ("Agreement")
Except for the license granted herein to you, ARM Limited ("ARM") reserves all
right, title, and interest in and to the Software (defined below).
Definition
"Software" means the code and documentation as well as any original work of
authorship, including any modifications or additions to an existing work, that
is intentionally submitted by ARM to llvm.org (http://llvm.org) ("LLVM") for
inclusion in, or documentation of, any of the products owned or managed by LLVM
(the "Work"). For the purposes of this definition, "submitted" means any form of
electronic, verbal, or written communication sent to LLVM or its
representatives, including but not limited to communication on electronic
mailing lists, source code control systems, and issue tracking systems that are
managed by, or on behalf of, LLVM for the purpose of discussing and improving
the Work, but excluding communication that is conspicuously marked otherwise.
1. Grant of Copyright License. Subject to the terms and conditions of this
Agreement, ARM hereby grants to you and to recipients of the Software
distributed by LLVM a perpetual, worldwide, non-exclusive, no-charge,
royalty-free, irrevocable copyright license to reproduce, prepare derivative
works of, publicly display, publicly perform, sublicense, and distribute the
Software and such derivative works.
2. Grant of Patent License. Subject to the terms and conditions of this
Agreement, ARM hereby grants you and to recipients of the Software
distributed by LLVM a perpetual, worldwide, non-exclusive, no-charge,
royalty-free, irrevocable (except as stated in this section) patent license
to make, have made, use, offer to sell, sell, import, and otherwise transfer
the Work, where such license applies only to those patent claims licensable
by ARM that are necessarily infringed by ARM's Software alone or by
combination of the Software with the Work to which such Software was
submitted. If any entity institutes patent litigation against ARM or any
other entity (including a cross-claim or counterclaim in a lawsuit) alleging
that ARM's Software, or the Work to which ARM has contributed constitutes
direct or contributory patent infringement, then any patent licenses granted
to that entity under this Agreement for the Software or Work shall terminate
as of the date such litigation is filed.
Unless required by applicable law or agreed to in writing, the software is
provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
either express or implied, including, without limitation, any warranties or
conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE.
==============================================================================

runtime/src/dllexports (new file, 1215 lines; diff omitted because it is too large)

runtime/src/exports_so.txt (new file, 126 lines)
# exports_so.txt #
#
#//===----------------------------------------------------------------------===//
#//
#// The LLVM Compiler Infrastructure
#//
#// This file is dual licensed under the MIT and the University of Illinois Open
#// Source Licenses. See LICENSE.txt for details.
#//
#//===----------------------------------------------------------------------===//
#
# This is the version script for the OMP RTL shared library (libomp*.so)
VERSION {
global: # Exported symbols.
#
# "Normal" symbols.
#
omp_*; # Standard OpenMP functions.
OMP_*; # Standard OpenMP symbols.
#
# OMPT API
#
ompt_start_tool; # OMPT start interface
# icc drops weak attribute at linking step without the following line:
Annotate*; # TSAN annotation
ompc_*; # omp.h renames some standard functions to ompc_*.
kmp_*; # Intel extensions.
kmpc_*; # Intel extensions.
__kmpc_*; # Functions called by compiler-generated code.
GOMP_*; # GNU C compatibility functions.
_You_must_link_with_*; # Mutual detection/MS compatibility symbols.
#
# Debugger support.
#
#if USE_DEBUGGER
__kmp_debugging;
__kmp_omp_debug_struct_info;
#endif /* USE_DEBUGGER */
#
# Internal functions exported for testing purposes.
#
__kmp_get_reduce_method;
___kmp_allocate;
___kmp_free;
__kmp_thread_pool;
__kmp_thread_pool_nth;
__kmp_reset_stats;
#if USE_ITT_BUILD
#
# ITT support.
#
# The following entry points are added so that the backtraces from
# the tools contain meaningful names for all the functions that might
# appear in a backtrace of a thread which is blocked in the RTL.
__kmp_acquire_drdpa_lock;
__kmp_acquire_nested_drdpa_lock;
__kmp_acquire_nested_queuing_lock;
__kmp_acquire_nested_tas_lock;
__kmp_acquire_nested_ticket_lock;
__kmp_acquire_queuing_lock;
__kmp_acquire_tas_lock;
__kmp_acquire_ticket_lock;
__kmp_fork_call;
__kmp_invoke_microtask;
#if KMP_USE_MONITOR
__kmp_launch_monitor;
__kmp_reap_monitor;
#endif
__kmp_launch_worker;
__kmp_reap_worker;
__kmp_release_64;
__kmp_wait_64;
__kmp_wait_yield_4;
# ittnotify symbols to be used by debugger
__kmp_itt_fini_ittlib;
__kmp_itt_init_ittlib;
#endif /* USE_ITT_BUILD */
local: # Non-exported symbols.
*; # All other symbols are not exported.
}; # VERSION
# sets up GCC OMP_ version dependency chain
OMP_1.0 {
};
OMP_2.0 {
} OMP_1.0;
OMP_3.0 {
} OMP_2.0;
OMP_3.1 {
} OMP_3.0;
OMP_4.0 {
} OMP_3.1;
OMP_4.5 {
} OMP_4.0;
# sets up GCC GOMP_ version dependency chain
GOMP_1.0 {
};
GOMP_2.0 {
} GOMP_1.0;
GOMP_3.0 {
} GOMP_2.0;
GOMP_4.0 {
} GOMP_3.0;
GOMP_4.5 {
} GOMP_4.0;
# end of file #

extractExternal.cpp (new file, 484 lines)
/*
* extractExternal.cpp
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include <fstream>
#include <iostream>
#include <map>
#include <set>
#include <stdio.h> // printf
#include <stdlib.h>
#include <string>
#include <strstream>
/* Given a set of n object files h ('external' object files) and a set of m
object files o ('internal' object files),
1. Determines r, the subset of h that o depends on, directly or indirectly
2. Removes the files in h - r from the file system
3. For each external symbol defined in some file in r, rename it in r U o
by prefixing it with "__kmp_external_"
Usage:
hide.exe <n> <filenames for h> <filenames for o>
Thus, the prefixed symbols become hidden in the sense that they now have a
special prefix.
*/
using namespace std;
void stop(char *errorMsg) {
printf("%s\n", errorMsg);
exit(1);
}
// an entry in the symbol table of a .OBJ file
class Symbol {
public:
__int64 name;
unsigned value;
unsigned short sectionNum, type;
char storageClass, nAux;
};
class _rstream : public istrstream {
private:
const char *buf;
protected:
_rstream(pair<const char *, streamsize> p)
: istrstream(p.first, p.second), buf(p.first) {}
~_rstream() { delete[] buf; }
};
// A stream encapsulating the content of a file or the content of a string,
// overriding the >> operator to read various integer types in binary form,
// as well as a symbol table entry.
class rstream : public _rstream {
private:
template <class T> inline rstream &doRead(T &x) {
read((char *)&x, sizeof(T));
return *this;
}
static pair<const char *, streamsize> getBuf(const char *fileName) {
ifstream raw(fileName, ios::binary | ios::in);
if (!raw.is_open())
stop("rstream.getBuf: Error opening file");
raw.seekg(0, ios::end);
streampos fileSize = raw.tellg();
if (fileSize < 0)
stop("rstream.getBuf: Error reading file");
char *buf = new char[fileSize];
raw.seekg(0, ios::beg);
raw.read(buf, fileSize);
return pair<const char *, streamsize>(buf, fileSize);
}
public:
// construct from a string
rstream(const char *buf, streamsize size)
: _rstream(pair<const char *, streamsize>(buf, size)) {}
// construct from a file whose content is fully read once to initialize the
// content of this stream
rstream(const char *fileName) : _rstream(getBuf(fileName)) {}
rstream &operator>>(int &x) { return doRead(x); }
rstream &operator>>(unsigned &x) { return doRead(x); }
rstream &operator>>(short &x) { return doRead(x); }
rstream &operator>>(unsigned short &x) { return doRead(x); }
rstream &operator>>(Symbol &e) {
read((char *)&e, 18);
return *this;
}
};
// string table in a .OBJ file
class StringTable {
private:
map<string, unsigned> directory;
size_t length;
char *data;
// make <directory> from <length> bytes in <data>
void makeDirectory(void) {
unsigned i = 4;
while (i < length) {
string s = string(data + i);
directory.insert(make_pair(s, i));
i += s.size() + 1;
}
}
// initialize <length> and <data> with contents specified by the arguments
void init(const char *_data) {
unsigned _length = *(unsigned *)_data;
if (_length < sizeof(unsigned) || _length != *(unsigned *)_data)
stop("StringTable.init: Invalid symbol table");
if (_data[_length - 1]) {
// to prevent runaway strings, make sure the data ends with a zero
data = new char[length = _length + 1];
data[_length] = 0;
} else {
data = new char[length = _length];
}
*(unsigned *)data = length;
KMP_MEMCPY(data + sizeof(unsigned), _data + sizeof(unsigned),
length - sizeof(unsigned));
makeDirectory();
}
public:
StringTable(rstream &f) {
// Construct string table by reading from f.
streampos s;
unsigned strSize;
char *strData;
s = f.tellg();
f >> strSize;
if (strSize < sizeof(unsigned))
stop("StringTable: Invalid string table");
strData = new char[strSize];
*(unsigned *)strData = strSize;
// read the raw data into <strData>
f.read(strData + sizeof(unsigned), strSize - sizeof(unsigned));
s = f.tellg() - s;
if (s < strSize)
stop("StringTable: Unexpected EOF");
init(strData);
delete[] strData;
}
StringTable(const set<string> &strings) {
// Construct string table from given strings.
char *p;
set<string>::const_iterator it;
size_t s;
// count required size for data
for (length = sizeof(unsigned), it = strings.begin(); it != strings.end();
++it) {
size_t l = (*it).size();
if (l > (unsigned)0xFFFFFFFF)
stop("StringTable: String too long");
if (l > 8) {
length += l + 1;
if (length > (unsigned)0xFFFFFFFF)
stop("StringTable: Symbol table too long");
}
}
data = new char[length];
*(unsigned *)data = length;
// populate data and directory
for (p = data + sizeof(unsigned), it = strings.begin(); it != strings.end();
++it) {
const string &str = *it;
size_t l = str.size();
if (l > 8) {
directory.insert(make_pair(str, p - data));
KMP_MEMCPY(p, str.c_str(), l);
p[l] = 0;
p += l + 1;
}
}
}
~StringTable() { delete[] data; }
// Returns encoding for given string based on this string table. Error if
// string length is greater than 8 but string is not in the string table
// -- returns 0.
__int64 encode(const string &str) {
__int64 r;
if (str.size() <= 8) {
// encoded directly
((char *)&r)[7] = 0;
KMP_STRNCPY_S((char *)&r, sizeof(r), str.c_str(), 8);
return r;
} else {
// represented as index into table
map<string, unsigned>::const_iterator it = directory.find(str);
if (it == directory.end())
stop("StringTable::encode: String not found in string table");
((unsigned *)&r)[0] = 0;
((unsigned *)&r)[1] = (*it).second;
return r;
}
}
// Returns string represented by x based on this string table. Error if x
// references an invalid position in the table--returns the empty string.
string decode(__int64 x) const {
if (*(unsigned *)&x == 0) {
// represented as index into table
unsigned &p = ((unsigned *)&x)[1];
if (p >= length)
stop("StringTable::decode: Invalid string table lookup");
return string(data + p);
} else {
// encoded directly
char *p = (char *)&x;
int i;
for (i = 0; i < 8 && p[i]; ++i)
;
return string(p, i);
}
}
void write(ostream &os) { os.write(data, length); }
};
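The 8-byte `name` field that `encode`/`decode` manipulate above follows the COFF convention: names of up to 8 bytes are stored inline (NUL-padded), and longer names are stored as a zero word followed by an offset into the string table. A rough Python restatement of that convention, for illustration only (helper names are made up, not part of the tool):

```python
import struct

def encode_name(name: bytes, table_offset: int = 0) -> bytes:
    """Produce a COFF-style 8-byte symbol name field."""
    if len(name) <= 8:
        # short names are stored inline, NUL-padded to 8 bytes
        return name.ljust(8, b"\x00")
    # long names: 4 zero bytes, then an offset into the string table
    return struct.pack("<II", 0, table_offset)

def decode_name(field: bytes, table: bytes) -> bytes:
    """Recover the symbol name from an 8-byte field and the string table."""
    zeros, offset = struct.unpack("<II", field)
    if zeros == 0:
        # long name: offset points at a NUL-terminated string in the table
        return table[offset:table.index(b"\x00", offset)]
    return field.rstrip(b"\x00")

# round-trip a short and a long name (4-byte length word stubbed with zeros)
table = b"\x00\x00\x00\x00very_long_symbol_name\x00"
print(decode_name(encode_name(b"foo"), table))
print(decode_name(encode_name(b"very_long_symbol_name", 4), table))
```

This is the same split `encode` makes on `str.size() <= 8` and that `decode` reverses by testing the first word for zero.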
// for the named object file, determines the set of defined symbols and the set
// of undefined external symbols and writes them to <defined> and <undefined>
// respectively
void computeExternalSymbols(const char *fileName, set<string> *defined,
set<string> *undefined) {
streampos fileSize;
size_t strTabStart;
unsigned symTabStart, symNEntries;
rstream f(fileName);
f.seekg(0, ios::end);
fileSize = f.tellg();
f.seekg(8);
f >> symTabStart >> symNEntries;
// seek to the string table
f.seekg(strTabStart = symTabStart + 18 * (size_t)symNEntries);
if (f.eof()) {
printf("computeExternalSymbols: fileName='%s', fileSize = %lu, symTabStart "
"= %u, symNEntries = %u\n",
fileName, (unsigned long)fileSize, symTabStart, symNEntries);
stop("computeExternalSymbols: Unexpected EOF 1");
}
StringTable stringTable(f); // read the string table
if (f.tellg() != fileSize)
stop("computeExternalSymbols: Unexpected data after string table");
f.clear();
f.seekg(symTabStart); // seek to the symbol table
defined->clear();
undefined->clear();
for (int i = 0; i < symNEntries; ++i) {
// process each entry
Symbol e;
if (f.eof())
stop("computeExternalSymbols: Unexpected EOF 2");
f >> e;
if (f.fail())
stop("computeExternalSymbols: File read error");
if (e.nAux) { // auxiliary entry: skip
f.seekg(e.nAux * 18, ios::cur);
i += e.nAux;
}
// if symbol is extern and defined in the current file, insert it
if (e.storageClass == 2)
if (e.sectionNum)
defined->insert(stringTable.decode(e.name));
else
undefined->insert(stringTable.decode(e.name));
}
}
// For each occurrence of an external symbol in the object file named by
// <fileName> that is a member of <hide>, renames it by prefixing it
// with "__kmp_external_", writing the file back in place.
void hideSymbols(char *fileName, const set<string> &hide) {
static const string prefix("__kmp_external_");
set<string> strings; // set of all occurring symbols, appropriately prefixed
streampos fileSize;
size_t strTabStart;
unsigned symTabStart, symNEntries;
int i;
rstream in(fileName);
in.seekg(0, ios::end);
fileSize = in.tellg();
in.seekg(8);
in >> symTabStart >> symNEntries;
in.seekg(strTabStart = symTabStart + 18 * (size_t)symNEntries);
if (in.eof())
stop("hideSymbols: Unexpected EOF");
StringTable stringTableOld(in); // read original string table
if (in.tellg() != fileSize)
stop("hideSymbols: Unexpected data after string table");
// compute set of occurring strings with prefix added
for (i = 0; i < symNEntries; ++i) {
Symbol e;
in.seekg(symTabStart + i * 18);
if (in.eof())
stop("hideSymbols: Unexpected EOF");
in >> e;
if (in.fail())
stop("hideSymbols: File read error");
if (e.nAux)
i += e.nAux;
const string &s = stringTableOld.decode(e.name);
// if symbol is extern and found in <hide>, prefix and insert into strings,
// otherwise, just insert into strings without prefix
strings.insert(
(e.storageClass == 2 && hide.find(s) != hide.end()) ? prefix + s : s);
}
ofstream out(fileName, ios::trunc | ios::out | ios::binary);
if (!out.is_open())
stop("hideSymbols: Error opening output file");
// make new string table from string set
StringTable stringTableNew = StringTable(strings);
// copy input file to output file up to just before the symbol table
in.seekg(0);
char *buf = new char[symTabStart];
in.read(buf, symTabStart);
out.write(buf, symTabStart);
delete[] buf;
// copy input symbol table to output symbol table with name translation
for (i = 0; i < symNEntries; ++i) {
Symbol e;
in.seekg(symTabStart + i * 18);
if (in.eof())
stop("hideSymbols: Unexpected EOF");
in >> e;
if (in.fail())
stop("hideSymbols: File read error");
const string &s = stringTableOld.decode(e.name);
out.seekp(symTabStart + i * 18);
e.name = stringTableNew.encode(
(e.storageClass == 2 && hide.find(s) != hide.end()) ? prefix + s : s);
out.write((char *)&e, 18);
if (out.fail())
stop("hideSymbols: File write error");
if (e.nAux) {
// copy auxiliary symbol table entries
int nAux = e.nAux;
for (int j = 1; j <= nAux; ++j) {
in >> e;
out.seekp(symTabStart + (i + j) * 18);
out.write((char *)&e, 18);
}
i += nAux;
}
}
// output string table
stringTableNew.write(out);
}
// returns true iff <a> and <b> have no common element
template <class T> bool isDisjoint(const set<T> &a, const set<T> &b) {
typename set<T>::const_iterator ita, itb;
for (ita = a.begin(), itb = b.begin(); ita != a.end() && itb != b.end();) {
const T &ta = *ita, &tb = *itb;
if (ta < tb)
++ita;
else if (tb < ta)
++itb;
else
return false;
}
return true;
}
// PRE: <defined> and <undefined> are arrays with <nTotal> elements where
// <nTotal> >= <nExternal>. The first <nExternal> elements correspond to the
// external object files and the rest correspond to the internal object files.
// POST: file x is said to depend on file y if undefined[x] and defined[y] are
// not disjoint. Returns the transitive closure of the set of internal object
// files, as a set of file indexes, under the 'depends on' relation, minus the
// set of internal object files.
set<int> *findRequiredExternal(int nExternal, int nTotal, set<string> *defined,
set<string> *undefined) {
set<int> *required = new set<int>;
set<int> fresh[2];
int i, cur = 0;
bool changed;
for (i = nTotal - 1; i >= nExternal; --i)
fresh[cur].insert(i);
do {
changed = false;
for (set<int>::iterator it = fresh[cur].begin(); it != fresh[cur].end();
++it) {
set<string> &s = undefined[*it];
for (i = 0; i < nExternal; ++i) {
if (required->find(i) == required->end()) {
if (!isDisjoint(defined[i], s)) {
// found a new qualifying element
required->insert(i);
fresh[1 - cur].insert(i);
changed = true;
}
}
}
}
fresh[cur].clear();
cur = 1 - cur;
} while (changed);
return required;
}
int main(int argc, char **argv) {
int nExternal, nInternal, i;
set<string> *defined, *undefined;
set<int>::iterator it;
if (argc < 3)
stop("Please specify a positive integer followed by a list of object "
"filenames");
nExternal = atoi(argv[1]);
if (nExternal <= 0)
stop("Please specify a positive integer followed by a list of object "
"filenames");
if (nExternal + 2 > argc)
stop("Too few external objects");
nInternal = argc - nExternal - 2;
defined = new set<string>[argc - 2];
undefined = new set<string>[argc - 2];
// determine the set of defined and undefined external symbols
for (i = 2; i < argc; ++i)
computeExternalSymbols(argv[i], defined + i - 2, undefined + i - 2);
// determine the set of required external files
set<int> *requiredExternal =
findRequiredExternal(nExternal, argc - 2, defined, undefined);
set<string> hide;
// determine the set of symbols to hide--namely defined external symbols of
// the required external files
for (it = requiredExternal->begin(); it != requiredExternal->end(); ++it) {
int idx = *it;
set<string>::iterator it2;
// We have to insert one element at a time instead of inserting a range
// because the insert member function taking a range doesn't exist on
// Windows* OS, at least at the time of this writing.
for (it2 = defined[idx].begin(); it2 != defined[idx].end(); ++it2)
hide.insert(*it2);
}
// process the external files--removing those that are not required and hiding
// the appropriate symbols in the others
for (i = 0; i < nExternal; ++i)
if (requiredExternal->find(i) != requiredExternal->end())
hideSymbols(argv[2 + i], hide);
else
remove(argv[2 + i]);
// hide the appropriate symbols in the internal files
for (i = nExternal + 2; i < argc; ++i)
hideSymbols(argv[i], hide);
return 0;
}

runtime/src/i18n/en_US.txt (new file, 493 lines)
# en_US.txt #
#
#//===----------------------------------------------------------------------===//
#//
#// The LLVM Compiler Infrastructure
#//
#// This file is dual licensed under the MIT and the University of Illinois Open
#// Source Licenses. See LICENSE.txt for details.
#//
#//===----------------------------------------------------------------------===//
#
# Default messages, embedded into the OpenMP RTL, and source for English catalog.
# Compatible changes (which do not require version bumping):
# * Editing message (number and type of placeholders must remain, relative order of
# placeholders may be changed, e.g. "File %1$s line %2$d" may be safely edited to
# "Line %2$d file %1$s").
# * Adding new message to the end of section.
# Incompatible changes (version must be bumped by 1):
# * Introducing new placeholders to existing messages.
# * Changing type of placeholders (e.g. "line %1$d" -> "line %1$s").
# * Rearranging order of messages.
# * Deleting messages.
# Use special "OBSOLETE" pseudoidentifier for obsolete entries, which is kept only for backward
# compatibility. When version is bumped, do not forget to delete all obsolete entries.
# --------------------------------------------------------------------------------------------------
-*- META -*-
# --------------------------------------------------------------------------------------------------
# Meta information about message catalog.
Language "English"
Country "USA"
LangId "1033"
Version "2"
Revision "20170523"
# --------------------------------------------------------------------------------------------------
-*- STRINGS -*-
# --------------------------------------------------------------------------------------------------
# Strings are not complete messages, just fragments. We need to work on this and reduce the
# number of strings (to zero?).
Error "Error"
UnknownFile "(unknown file)"
NotANumber "not a number"
BadUnit "bad unit"
IllegalCharacters "illegal characters"
ValueTooLarge "value too large"
ValueTooSmall "value too small"
NotMultiple4K "value is not a multiple of 4k"
UnknownTopology "Unknown processor topology"
CantOpenCpuinfo "Cannot open /proc/cpuinfo"
ProcCpuinfo "/proc/cpuinfo"
NoProcRecords "cpuinfo file invalid (No processor records)"
TooManyProcRecords "cpuinfo file invalid (Too many processor records)"
CantRewindCpuinfo "Cannot rewind cpuinfo file"
LongLineCpuinfo "cpuinfo file invalid (long line)"
TooManyEntries "cpuinfo file contains too many entries"
MissingProcField "cpuinfo file missing processor field"
MissingPhysicalIDField "cpuinfo file missing physical id field"
MissingValCpuinfo "cpuinfo file invalid (missing val)"
DuplicateFieldCpuinfo "cpuinfo file invalid (duplicate field)"
PhysicalIDsNotUnique "Physical node/pkg/core/thread ids not unique"
ApicNotPresent "APIC not present"
InvalidCpuidInfo "Invalid cpuid info"
OBSOLETE "APIC ids not unique"
InconsistentCpuidInfo "Inconsistent cpuid info"
OutOfHeapMemory "Out of heap memory"
MemoryAllocFailed "Memory allocation failed"
Core "core"
Thread "thread"
Package "package"
Node "node"
OBSOLETE "<undef>"
DecodingLegacyAPIC "decoding legacy APIC ids"
OBSOLETE "parsing /proc/cpuinfo"
NotDefined "value is not defined"
EffectiveSettings "Effective settings:"
UserSettings "User settings:"
StorageMapWarning "warning: pointers or size don't make sense"
OBSOLETE "CPU"
OBSOLETE "TPU"
OBSOLETE "TPUs per package"
OBSOLETE "HT enabled"
OBSOLETE "HT disabled"
Decodingx2APIC "decoding x2APIC ids"
NoLeaf11Support "cpuid leaf 11 not supported"
NoLeaf4Support "cpuid leaf 4 not supported"
ThreadIDsNotUnique "thread ids not unique"
UsingPthread "using pthread info"
LegacyApicIDsNotUnique "legacy APIC ids not unique"
x2ApicIDsNotUnique "x2APIC ids not unique"
DisplayEnvBegin "OPENMP DISPLAY ENVIRONMENT BEGIN"
DisplayEnvEnd "OPENMP DISPLAY ENVIRONMENT END"
Device "[device]"
Host "[host]"
Tile "tile"
# --------------------------------------------------------------------------------------------------
-*- FORMATS -*-
# --------------------------------------------------------------------------------------------------
Info "OMP: Info #%1$d: %2$s\n"
Warning "OMP: Warning #%1$d: %2$s\n"
Fatal "OMP: Error #%1$d: %2$s\n"
SysErr "OMP: System error #%1$d: %2$s\n"
Hint "OMP: Hint %1$s\n"
Pragma "%1$s pragma (at %2$s:%3$s():%4$s)"
# %1 is pragma name (like "parallel" or "master",
# %2 is file name,
# %3 is function (routine) name,
# %4 is the line number (as string, so "s" type specifier should be used).
# --------------------------------------------------------------------------------------------------
-*- MESSAGES -*-
# --------------------------------------------------------------------------------------------------
# Messages of any severity: informational, warning, or fatal.
# To maintain message numbers (they are visible to customers), add new messages to the end.
# Use following prefixes for messages and hints when appropriate:
# Aff -- Affinity messages.
# Cns -- Consistency check failures (KMP_CONSISTENCY_CHECK).
# Itt -- ITT Notify-related messages.
LibraryIsSerial "Library is \"serial\"."
CantOpenMessageCatalog "Cannot open message catalog \"%1$s\":"
WillUseDefaultMessages "Default messages will be used."
LockIsUninitialized "%1$s: Lock is uninitialized"
LockSimpleUsedAsNestable "%1$s: Lock was initialized as simple, but used as nestable"
LockNestableUsedAsSimple "%1$s: Lock was initialized as nestable, but used as simple"
LockIsAlreadyOwned "%1$s: Lock is already owned by requesting thread"
LockStillOwned "%1$s: Lock is still owned by a thread"
LockUnsettingFree "%1$s: Attempt to release a lock not owned by any thread"
LockUnsettingSetByAnother "%1$s: Attempt to release a lock owned by another thread"
StackOverflow "Stack overflow detected for OpenMP thread #%1$d"
StackOverlap "Stack overlap detected. "
AssertionFailure "Assertion failure at %1$s(%2$d)."
CantRegisterNewThread "Unable to register a new user thread."
DuplicateLibrary "Initializing %1$s, but found %2$s already initialized."
CantOpenFileForReading "Cannot open file \"%1$s\" for reading:"
CantGetEnvVar "Getting environment variable \"%1$s\" failed:"
CantSetEnvVar "Setting environment variable \"%1$s\" failed:"
CantGetEnvironment "Getting environment failed:"
BadBoolValue "%1$s=\"%2$s\": Wrong value, boolean expected."
SSPNotBuiltIn "No Helper Thread support built in this OMP library."
SPPSotfTerminateFailed "Helper thread failed to soft terminate."
BufferOverflow "Buffer overflow detected."
RealTimeSchedNotSupported "Real-time scheduling policy is not supported."
RunningAtMaxPriority "OMP application is running at maximum priority with real-time scheduling policy. "
CantChangeMonitorPriority "Changing priority of the monitor thread failed:"
MonitorWillStarve            "Deadlock is likely due to monitor thread starvation."
CantSetMonitorStackSize "Unable to set monitor thread stack size to %1$lu bytes:"
CantSetWorkerStackSize "Unable to set OMP thread stack size to %1$lu bytes:"
CantInitThreadAttrs "Thread attribute initialization failed:"
CantDestroyThreadAttrs "Thread attribute destroying failed:"
CantSetWorkerState "OMP thread joinable state setting failed:"
CantSetMonitorState "Monitor thread joinable state setting failed:"
NoResourcesForWorkerThread "System unable to allocate necessary resources for OMP thread:"
NoResourcesForMonitorThread "System unable to allocate necessary resources for the monitor thread:"
CantTerminateWorkerThread "Unable to terminate OMP thread:"
ScheduleKindOutOfRange "Wrong schedule type %1$d, see <omp.h> or <omp_lib.h> file for the list of values supported."
UnknownSchedulingType "Unknown scheduling type \"%1$d\"."
InvalidValue "%1$s value \"%2$s\" is invalid."
SmallValue "%1$s value \"%2$s\" is too small."
LargeValue "%1$s value \"%2$s\" is too large."
StgInvalidValue "%1$s: \"%2$s\" is an invalid value; ignored."
BarrReleaseValueInvalid "%1$s release value \"%2$s\" is invalid."
BarrGatherValueInvalid "%1$s gather value \"%2$s\" is invalid."
OBSOLETE "%1$s supported only on debug builds; ignored."
ParRangeSyntax "Syntax error: Usage: %1$s=[ routine=<func> | filename=<file> | range=<lb>:<ub> "
"| excl_range=<lb>:<ub> ],..."
UnbalancedQuotes "Unbalanced quotes in %1$s."
EmptyString "Empty string specified for %1$s; ignored."
LongValue "%1$s value is too long; ignored."
InvalidClause "%1$s: Invalid clause in \"%2$s\"."
EmptyClause "Empty clause in %1$s."
InvalidChunk                 "%1$s value \"%2$s\" is an invalid chunk size."
LargeChunk                   "%1$s value \"%2$s\" is too large a chunk size."
IgnoreChunk "%1$s value \"%2$s\" is ignored."
CantGetProcFreq "Cannot get processor frequency, using zero KMP_ITT_PREPARE_DELAY."
EnvParallelWarn "%1$s must be set prior to first parallel region; ignored."
AffParamDefined "%1$s: parameter has been specified already, ignoring \"%2$s\"."
AffInvalidParam "%1$s: parameter invalid, ignoring \"%2$s\"."
AffManyParams "%1$s: too many integer parameters specified, ignoring \"%2$s\"."
AffManyParamsForLogic "%1$s: too many integer parameters specified for logical or physical type, ignoring \"%2$d\"."
AffNoParam "%1$s: '%2$s' type does not take any integer parameters, ignoring them."
AffNoProcList "%1$s: proclist not specified with explicit affinity type, using \"none\"."
AffProcListNoType "%1$s: proclist specified, setting affinity type to \"explicit\"."
AffProcListNotExplicit "%1$s: proclist specified without \"explicit\" affinity type, proclist ignored."
AffSyntaxError "%1$s: syntax error, not using affinity."
AffZeroStride "%1$s: range error (zero stride), not using affinity."
AffStartGreaterEnd "%1$s: range error (%2$d > %3$d), not using affinity."
AffStrideLessZero "%1$s: range error (%2$d < %3$d & stride < 0), not using affinity."
AffRangeTooBig "%1$s: range error ((%2$d-%3$d)/%4$d too big), not using affinity."
OBSOLETE "%1$s: %2$s is defined. %3$s will be ignored."
AffNotSupported "%1$s: affinity not supported, using \"disabled\"."
OBSOLETE "%1$s: affinity only supported for Intel(R) Architecture Processors."
GetAffSysCallNotSupported "%1$s: getaffinity system call not supported."
SetAffSysCallNotSupported "%1$s: setaffinity system call not supported."
OBSOLETE "%1$s: pthread_aff_set_np call not found."
OBSOLETE "%1$s: pthread_get_num_resources_np call not found."
OBSOLETE "%1$s: the OS kernel does not support affinity."
OBSOLETE "%1$s: pthread_get_num_resources_np returned %2$d."
AffCantGetMaskSize "%1$s: cannot determine proper affinity mask size."
ParseSizeIntWarn "%1$s=\"%2$s\": %3$s."
ParseExtraCharsWarn "%1$s: extra trailing characters ignored: \"%2$s\"."
UnknownForceReduction "%1$s: unknown method \"%2$s\"."
TimerUseGettimeofday "KMP_STATS_TIMER: clock_gettime is undefined, using gettimeofday."
TimerNeedMoreParam "KMP_STATS_TIMER: \"%1$s\" needs additional parameter, e.g. 'clock_gettime,2'. Using gettimeofday."
TimerInvalidParam "KMP_STATS_TIMER: clock_gettime parameter \"%1$s\" is invalid, using gettimeofday."
TimerGettimeFailed "KMP_STATS_TIMER: clock_gettime failed, using gettimeofday."
TimerUnknownFunction "KMP_STATS_TIMER: clock function unknown (ignoring value \"%1$s\")."
UnknownSchedTypeDetected "Unknown scheduling type detected."
DispatchManyThreads "Too many threads to use analytical guided scheduling - switching to iterative guided scheduling."
IttLookupFailed "ittnotify: Lookup of \"%1$s\" function in \"%2$s\" library failed."
IttLoadLibFailed "ittnotify: Loading \"%1$s\" library failed."
IttAllNotifDisabled "ittnotify: All itt notifications disabled."
IttObjNotifDisabled "ittnotify: Object state itt notifications disabled."
IttMarkNotifDisabled "ittnotify: Mark itt notifications disabled."
IttUnloadLibFailed "ittnotify: Unloading \"%1$s\" library failed."
CantFormThrTeam "Cannot form a team with %1$d threads, using %2$d instead."
ActiveLevelsNegative "Requested number of active parallel levels \"%1$d\" is negative; ignored."
ActiveLevelsExceedLimit "Requested number of active parallel levels \"%1$d\" exceeds supported limit; "
"the following limit value will be used: \"%2$d\"."
SetLibraryIncorrectCall "kmp_set_library must only be called from the top level serial thread; ignored."
FatalSysError "Fatal system error detected."
OutOfHeapMemory "Out of heap memory."
OBSOLETE "Clearing __KMP_REGISTERED_LIB env var failed."
OBSOLETE "Registering library with env var failed."
Using_int_Value "%1$s value \"%2$d\" will be used."
Using_uint_Value "%1$s value \"%2$u\" will be used."
Using_uint64_Value "%1$s value \"%2$s\" will be used."
Using_str_Value "%1$s value \"%2$s\" will be used."
MaxValueUsing "%1$s maximum value \"%2$d\" will be used."
MinValueUsing "%1$s minimum value \"%2$d\" will be used."
MemoryAllocFailed "Memory allocation failed."
FileNameTooLong "File name too long."
OBSOLETE "Lock table overflow."
ManyThreadsForTPDirective "Too many threads to use threadprivate directive."
AffinityInvalidMask "%1$s: invalid mask."
WrongDefinition "Wrong definition."
TLSSetValueFailed "Windows* OS: TLS Set Value failed."
TLSOutOfIndexes "Windows* OS: TLS out of indexes."
OBSOLETE "PDONE directive must be nested within a DO directive."
CantGetNumAvailCPU "Cannot get number of available CPUs."
AssumedNumCPU "Assumed number of CPUs is 2."
ErrorInitializeAffinity "Error initializing affinity - not using affinity."
AffThreadsMayMigrate "Threads may migrate across all available OS procs (granularity setting too coarse)."
AffIgnoreInvalidProcID "Ignoring invalid OS proc ID %1$d."
AffNoValidProcID "No valid OS proc IDs specified - not using affinity."
UsingFlatOS "%1$s - using \"flat\" OS <-> physical proc mapping."
UsingFlatOSFile "%1$s: %2$s - using \"flat\" OS <-> physical proc mapping."
UsingFlatOSFileLine "%1$s, line %2$d: %3$s - using \"flat\" OS <-> physical proc mapping."
FileMsgExiting "%1$s: %2$s - exiting."
FileLineMsgExiting "%1$s, line %2$d: %3$s - exiting."
ConstructIdentInvalid "Construct identifier invalid."
ThreadIdentInvalid "Thread identifier invalid."
RTLNotInitialized            "Runtime library not initialized."
TPCommonBlocksInconsist "Inconsistent THREADPRIVATE common block declarations are non-conforming "
"and are unsupported. Either all threadprivate common blocks must be declared "
"identically, or the largest instance of each threadprivate common block "
"must be referenced first during the run."
CantSetThreadAffMask "Cannot set thread affinity mask."
CantSetThreadPriority "Cannot set thread priority."
CantCreateThread "Cannot create thread."
CantCreateEvent "Cannot create event."
CantSetEvent "Cannot set event."
CantCloseHandle "Cannot close handle."
UnknownLibraryType "Unknown library type: %1$d."
ReapMonitorError "Monitor did not reap properly."
ReapWorkerError "Worker thread failed to join."
ChangeThreadAffMaskError "Cannot change thread affinity mask."
ThreadsMigrate "%1$s: Threads may migrate across %2$d innermost levels of machine"
DecreaseToThreads "%1$s: decrease to %2$d threads"
IncreaseToThreads "%1$s: increase to %2$d threads"
OBSOLETE "%1$s: Internal thread %2$d bound to OS proc set %3$s"
AffCapableUseCpuinfo "%1$s: Affinity capable, using cpuinfo file"
AffUseGlobCpuid "%1$s: Affinity capable, using global cpuid info"
AffCapableUseFlat "%1$s: Affinity capable, using default \"flat\" topology"
AffNotCapableUseLocCpuid "%1$s: Affinity not capable, using local cpuid info"
AffNotCapableUseCpuinfo "%1$s: Affinity not capable, using cpuinfo file"
AffFlatTopology              "%1$s: Affinity not capable, assuming \"flat\" topology"
InitOSProcSetRespect "%1$s: Initial OS proc set respected: %2$s"
InitOSProcSetNotRespect "%1$s: Initial OS proc set not respected: %2$s"
AvailableOSProc "%1$s: %2$d available OS procs"
Uniform "%1$s: Uniform topology"
NonUniform "%1$s: Nonuniform topology"
Topology "%1$s: %2$d packages x %3$d cores/pkg x %4$d threads/core (%5$d total cores)"
OBSOLETE "%1$s: OS proc to physical thread map ([] => level not in map):"
OSProcToPackage "%1$s: OS proc <n> maps to <n>th package core 0"
OBSOLETE "%1$s: OS proc %2$d maps to package %3$d [core %4$d] [thread %5$d]"
OBSOLETE "%1$s: OS proc %2$d maps to [package %3$d] [core %4$d] [thread %5$d]"
OBSOLETE "%1$s: OS proc %2$d maps to [package %3$d] [core %4$d] thread %5$d"
OBSOLETE "%1$s: OS proc %2$d maps to [package %3$d] core %4$d [thread %5$d]"
OBSOLETE "%1$s: OS proc %2$d maps to package %3$d [core %4$d] [thread %5$d]"
OBSOLETE "%1$s: OS proc %2$d maps to [package %3$d] core %4$d thread %5$d"
OBSOLETE "%1$s: OS proc %2$d maps to package %3$d core %4$d [thread %5$d]"
OBSOLETE "%1$s: OS proc %2$d maps to package %3$d [core %4$d] thread %5$d"
OBSOLETE "%1$s: OS proc %2$d maps to package %3$d core %4$d thread %5$d"
OSProcMapToPack "%1$s: OS proc %2$d maps to %3$s"
OBSOLETE "%1$s: Internal thread %2$d changed affinity mask from %3$s to %4$s"
OBSOLETE "%1$s: OS proc %2$d maps to package %3$d, CPU %4$d, TPU %5$d"
OBSOLETE "%1$s: OS proc %2$d maps to package %3$d, CPU %4$d"
OBSOLETE "%1$s: HT enabled; %2$d packages; %3$d TPU; %4$d TPUs per package"
OBSOLETE "%1$s: HT disabled; %2$d packages"
BarriersInDifferentOrder "Threads encountered barriers in different order. "
FunctionError "Function %1$s failed:"
TopologyExtra "%1$s: %2$s packages x %3$d cores/pkg x %4$d threads/core (%5$d total cores)"
WrongMessageCatalog "Incompatible message catalog \"%1$s\": Version \"%2$s\" found, version \"%3$s\" expected."
StgIgnored "%1$s: ignored because %2$s has been defined"
# %1, -- name of ignored variable, %2 -- name of variable with higher priority.
OBSOLETE "%1$s: overrides %3$s specified before"
# %1, %2 -- name and value of the overriding variable, %3 -- name of overridden variable.
AffTilesNoHWLOC "%1$s: Tiles are only supported if KMP_TOPOLOGY_METHOD=hwloc, using granularity=package instead"
AffTilesNoTiles "%1$s: Tiles requested but were not detected on this HW, using granularity=package instead"
TopologyExtraTile "%1$s: %2$d packages x %3$d tiles/pkg x %4$d cores/tile x %5$d threads/core (%6$d total cores)"
TopologyExtraNode "%1$s: %2$d packages x %3$d nodes/pkg x %4$d cores/node x %5$d threads/core (%6$d total cores)"
TopologyExtraNoTi "%1$s: %2$d packages x %3$d nodes/pkg x %4$d tiles/node x %5$d cores/tile x %6$d threads/core (%7$d total cores)"
OmptOutdatedWorkshare "OMPT: Cannot determine workshare type; using the default (loop) instead. "
"This issue is fixed in an up-to-date compiler."
OmpNoAllocator "Allocator %1$s is not available, will use default allocator."
# --- OpenMP errors detected at runtime ---
#
# %1 is the name of OpenMP construct (formatted with "Pragma" format).
#
CnsBoundToWorksharing "%1$s must be bound to a work-sharing or work-queuing construct with an \"ordered\" clause"
CnsDetectedEnd "Detected end of %1$s without first executing a corresponding beginning."
CnsIterationRangeTooLarge "Iteration range too large in %1$s."
CnsLoopIncrZeroProhibited "%1$s must not have a loop increment that evaluates to zero."
#
# %1 is the name of the first OpenMP construct, %2 -- the name of the second one (both formatted with "Pragma" format).
#
CnsExpectedEnd "Expected end of %1$s; %2$s, however, has most recently begun execution."
CnsInvalidNesting "%1$s is incorrectly nested within %2$s"
CnsMultipleNesting "%1$s cannot be executed multiple times during execution of one parallel iteration/section of %2$s"
CnsNestingSameName "%1$s is incorrectly nested within %2$s of the same name"
CnsNoOrderedClause "%1$s is incorrectly nested within %2$s that does not have an \"ordered\" clause"
CnsNotInTaskConstruct "%1$s is incorrectly nested within %2$s but not within any of its \"task\" constructs"
CnsThreadsAtBarrier "One thread at %1$s while another thread is at %2$s."
# New errors
CantConnect "Cannot connect to %1$s"
CantConnectUsing "Cannot connect to %1$s - Using %2$s"
LibNotSupport "%1$s does not support %2$s. Continuing without using %2$s."
LibNotSupportFor "%1$s does not support %2$s for %3$s. Continuing without using %2$s."
StaticLibNotSupport "Static %1$s does not support %2$s. Continuing without using %2$s."
OBSOLETE "KMP_DYNAMIC_MODE=irml cannot be used with KMP_USE_IRML=0"
IttUnknownGroup "ittnotify: Unknown group \"%2$s\" specified in environment variable \"%1$s\"."
IttEnvVarTooLong             "ittnotify: Environment variable \"%1$s\" too long: Actual length is %2$lu, max allowed length is %3$lu."
AffUseGlobCpuidL11 "%1$s: Affinity capable, using global cpuid leaf 11 info"
AffNotCapableUseLocCpuidL11 "%1$s: Affinity not capable, using local cpuid leaf 11 info"
AffInfoStr "%1$s: %2$s."
AffInfoStrStr "%1$s: %2$s - %3$s."
OSProcToPhysicalThreadMap "%1$s: OS proc to physical thread map:"
AffUsingFlatOS "%1$s: using \"flat\" OS <-> physical proc mapping."
AffParseFilename "%1$s: parsing %2$s."
MsgExiting "%1$s - exiting."
IncompatibleLibrary "Incompatible %1$s library with version %2$s found."
IttFunctionError "ittnotify: Function %1$s failed:"
IttUnknownError              "ittnotify: Error #%1$d."
EnvMiddleWarn "%1$s must be set prior to first parallel region or certain API calls; ignored."
CnsLockNotDestroyed "Lock initialized at %1$s(%2$d) was not destroyed"
# %1, %2, %3, %4 -- file, line, func, col
CantLoadBalUsing "Cannot determine machine load balance - Using %1$s"
AffNotCapableUsePthread "%1$s: Affinity not capable, using pthread info"
AffUsePthread "%1$s: Affinity capable, using pthread info"
OBSOLETE "Loading \"%1$s\" library failed:"
OBSOLETE "Lookup of \"%1$s\" function failed:"
OBSOLETE "Buffer too small."
OBSOLETE "Error #%1$d."
NthSyntaxError "%1$s: Invalid symbols found. Check the value \"%2$s\"."
NthSpacesNotAllowed "%1$s: Spaces between digits are not allowed \"%2$s\"."
AffStrParseFilename "%1$s: %2$s - parsing %3$s."
OBSOLETE "%1$s cannot be specified via kmp_set_defaults() on this machine because it has more than one processor group."
AffTypeCantUseMultGroups "Cannot use affinity type \"%1$s\" with multiple Windows* OS processor groups, using \"%2$s\"."
AffGranCantUseMultGroups "Cannot use affinity granularity \"%1$s\" with multiple Windows* OS processor groups, using \"%2$s\"."
AffWindowsProcGroupMap "%1$s: Mapping Windows* OS processor group <i> proc <j> to OS proc 64*<i>+<j>."
AffOSProcToGroup "%1$s: OS proc %2$d maps to Windows* OS processor group %3$d proc %4$d"
AffBalancedNotAvail "%1$s: Affinity balanced is not available."
OBSOLETE "%1$s: granularity=core will be used."
EnvLockWarn "%1$s must be set prior to first OMP lock call or critical section; ignored."
FutexNotSupported "futex system call not supported; %1$s=%2$s ignored."
AffGranUsing "%1$s: granularity=%2$s will be used."
AffHWSubsetInvalid "%1$s: invalid value \"%2$s\", valid format is \"N<item>[@N][,...][,Nt] "
"(<item> can be S, N, L2, C, T for Socket, NUMA Node, L2 Cache, Core, Thread)\"."
AffHWSubsetUnsupported "KMP_HW_SUBSET ignored: unsupported architecture."
AffHWSubsetManyCores "KMP_HW_SUBSET ignored: too many cores requested."
SyntaxErrorUsing "%1$s: syntax error, using %2$s."
AdaptiveNotSupported "%1$s: Adaptive locks are not supported; using queuing."
EnvSyntaxError "%1$s: Invalid symbols found. Check the value \"%2$s\"."
EnvSpacesNotAllowed "%1$s: Spaces between digits are not allowed \"%2$s\"."
BoundToOSProcSet "%1$s: pid %2$d tid %3$d thread %4$d bound to OS proc set %5$s"
CnsLoopIncrIllegal "%1$s error: parallel loop increment and condition are inconsistent."
NoGompCancellation "libgomp cancellation is not currently supported."
AffHWSubsetNonUniform "KMP_HW_SUBSET ignored: non-uniform topology."
AffHWSubsetNonThreeLevel "KMP_HW_SUBSET ignored: only three-level topology is supported."
AffGranTopGroup "%1$s: granularity=%2$s is not supported with KMP_TOPOLOGY_METHOD=group. Using \"granularity=fine\"."
AffGranGroupType "%1$s: granularity=group is not supported with KMP_AFFINITY=%2$s. Using \"granularity=core\"."
AffHWSubsetManySockets "KMP_HW_SUBSET ignored: too many sockets requested."
AffHWSubsetDeprecated "KMP_HW_SUBSET \"o\" offset designator deprecated, please use @ prefix for offset value."
AffUsingHwloc "%1$s: Affinity capable, using hwloc."
AffIgnoringHwloc "%1$s: Ignoring hwloc mechanism."
AffHwlocErrorOccurred "%1$s: Hwloc failed in %2$s. Relying on internal affinity mechanisms."
EnvSerialWarn "%1$s must be set prior to OpenMP runtime library initialization; ignored."
EnvVarDeprecated "%1$s variable deprecated, please use %2$s instead."
RedMethodNotSupported "KMP_FORCE_REDUCTION: %1$s method is not supported; using critical."
AffHWSubsetNoHWLOC           "KMP_HW_SUBSET ignored: unsupported item requested for non-HWLOC topology method (KMP_TOPOLOGY_METHOD)."
AffHWSubsetManyNodes "KMP_HW_SUBSET ignored: too many NUMA Nodes requested."
AffHWSubsetManyTiles "KMP_HW_SUBSET ignored: too many L2 Caches requested."
AffHWSubsetManyProcs "KMP_HW_SUBSET ignored: too many Procs requested."
HierSchedInvalid "Hierarchy ignored: unsupported level: %1$s."
AffFormatDefault "OMP: pid %1$s tid %2$s thread %3$s bound to OS proc set {%4$s}"
# --------------------------------------------------------------------------------------------------
-*- HINTS -*-
# --------------------------------------------------------------------------------------------------
# Hints. Hint may be printed after a message. Usually it is longer explanation text or suggestion.
# To maintain hint numbers (they are visible to customers), add new hints to the end.
SubmitBugReport "Please submit a bug report with this message, compile and run "
"commands used, and machine configuration info including native "
"compiler and operating system versions. Faster response will be "
"obtained by including all program sources. For information on "
"submitting this issue, please see "
"https://bugs.llvm.org/."
OBSOLETE "Check NLSPATH environment variable, its value is \"%1$s\"."
ChangeStackLimit "Please try changing the shell stack limit or adjusting the "
"OMP_STACKSIZE environment variable."
Unset_ALL_THREADS "Consider unsetting KMP_DEVICE_THREAD_LIMIT (KMP_ALL_THREADS), KMP_TEAMS_THREAD_LIMIT, and OMP_THREAD_LIMIT (if any are set)."
Set_ALL_THREADPRIVATE "Consider setting KMP_ALL_THREADPRIVATE to a value larger than %1$d."
PossibleSystemLimitOnThreads "This could also be due to a system-related limit on the number of threads."
DuplicateLibrary "This means that multiple copies of the OpenMP runtime have been "
"linked into the program. That is dangerous, since it can degrade "
"performance or cause incorrect results. "
"The best thing to do is to ensure that only a single OpenMP runtime is "
"linked into the process, e.g. by avoiding static linking of the OpenMP "
"runtime in any library. As an unsafe, unsupported, undocumented workaround "
"you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow "
"the program to continue to execute, but that may cause crashes or "
"silently produce incorrect results. "
"For more information, please see http://openmp.llvm.org/"
NameComesFrom_CPUINFO_FILE "This name is specified in environment variable KMP_CPUINFO_FILE."
NotEnoughMemory              "The application seems to require too much memory."
ValidBoolValues              "Use \"0\", \"FALSE\", \".F.\", \"off\", \"no\" as false values, "
"\"1\", \"TRUE\", \".T.\", \"on\", \"yes\" as true values."
BufferOverflow "Perhaps too many threads."
RunningAtMaxPriority "Decrease priority of application. "
"This will allow the monitor thread to run at a higher priority than other threads."
ChangeMonitorStackSize "Try changing KMP_MONITOR_STACKSIZE or the shell stack limit."
ChangeWorkerStackSize "Try changing OMP_STACKSIZE and/or the shell stack limit."
IncreaseWorkerStackSize "Try increasing OMP_STACKSIZE or the shell stack limit."
DecreaseWorkerStackSize "Try decreasing OMP_STACKSIZE."
Decrease_NUM_THREADS "Try decreasing the value of OMP_NUM_THREADS."
IncreaseMonitorStackSize "Try increasing KMP_MONITOR_STACKSIZE."
DecreaseMonitorStackSize "Try decreasing KMP_MONITOR_STACKSIZE."
DecreaseNumberOfThreadsInUse "Try decreasing the number of threads in use simultaneously."
DefaultScheduleKindUsed "Will use default schedule type (%1$s)."
GetNewerLibrary              "This may result from using an older OMP library with a newer "
                             "compiler, or from memory corruption. Check that the proper "
                             "OMP library is linked to the application."
CheckEnvVar "Check %1$s environment variable, its value is \"%2$s\"."
OBSOLETE "You may want to use an %1$s library that supports %2$s interface with version %3$s."
OBSOLETE "You may want to use an %1$s library with version %2$s."
BadExeFormat "System error #193 is \"Bad format of EXE or DLL file\". "
"Usually it means the file was found, but it is corrupted or "
"is a file for another architecture. "
"Check whether \"%1$s\" is a file for %2$s architecture."
SystemLimitOnThreads "System-related limit on the number of threads."
# --------------------------------------------------------------------------------------------------
# end of file #
# --------------------------------------------------------------------------------------------------

/*
* include/30/omp.h.var
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef __OMP_H
# define __OMP_H
# define KMP_VERSION_MAJOR @LIBOMP_VERSION_MAJOR@
# define KMP_VERSION_MINOR @LIBOMP_VERSION_MINOR@
# define KMP_VERSION_BUILD @LIBOMP_VERSION_BUILD@
# define KMP_BUILD_DATE "@LIBOMP_BUILD_DATE@"
# ifdef __cplusplus
extern "C" {
# endif
# define omp_set_num_threads ompc_set_num_threads
# define omp_set_dynamic ompc_set_dynamic
# define omp_set_nested ompc_set_nested
# define omp_set_max_active_levels ompc_set_max_active_levels
# define omp_set_schedule ompc_set_schedule
# define omp_get_ancestor_thread_num ompc_get_ancestor_thread_num
# define omp_get_team_size ompc_get_team_size
# define kmp_set_stacksize kmpc_set_stacksize
# define kmp_set_stacksize_s kmpc_set_stacksize_s
# define kmp_set_blocktime kmpc_set_blocktime
# define kmp_set_library kmpc_set_library
# define kmp_set_defaults kmpc_set_defaults
# define kmp_set_affinity_mask_proc kmpc_set_affinity_mask_proc
# define kmp_unset_affinity_mask_proc kmpc_unset_affinity_mask_proc
# define kmp_get_affinity_mask_proc kmpc_get_affinity_mask_proc
# define kmp_malloc kmpc_malloc
# define kmp_calloc kmpc_calloc
# define kmp_realloc kmpc_realloc
# define kmp_free kmpc_free
# if defined(_WIN32)
# define __KAI_KMPC_CONVENTION __cdecl
# else
# define __KAI_KMPC_CONVENTION
# endif
/* schedule kind constants */
typedef enum omp_sched_t {
omp_sched_static = 1,
omp_sched_dynamic = 2,
omp_sched_guided = 3,
omp_sched_auto = 4
} omp_sched_t;
/* set API functions */
extern void __KAI_KMPC_CONVENTION omp_set_num_threads (int);
extern void __KAI_KMPC_CONVENTION omp_set_dynamic (int);
extern void __KAI_KMPC_CONVENTION omp_set_nested (int);
extern void __KAI_KMPC_CONVENTION omp_set_max_active_levels (int);
extern void __KAI_KMPC_CONVENTION omp_set_schedule (omp_sched_t, int);
/* query API functions */
extern int __KAI_KMPC_CONVENTION omp_get_num_threads (void);
extern int __KAI_KMPC_CONVENTION omp_get_dynamic (void);
extern int __KAI_KMPC_CONVENTION omp_get_nested (void);
extern int __KAI_KMPC_CONVENTION omp_get_max_threads (void);
extern int __KAI_KMPC_CONVENTION omp_get_thread_num (void);
extern int __KAI_KMPC_CONVENTION omp_get_num_procs (void);
extern int __KAI_KMPC_CONVENTION omp_in_parallel (void);
extern int __KAI_KMPC_CONVENTION omp_in_final (void);
extern int __KAI_KMPC_CONVENTION omp_get_active_level (void);
extern int __KAI_KMPC_CONVENTION omp_get_level (void);
extern int __KAI_KMPC_CONVENTION omp_get_ancestor_thread_num (int);
extern int __KAI_KMPC_CONVENTION omp_get_team_size (int);
extern int __KAI_KMPC_CONVENTION omp_get_thread_limit (void);
extern int __KAI_KMPC_CONVENTION omp_get_max_active_levels (void);
extern void __KAI_KMPC_CONVENTION omp_get_schedule (omp_sched_t *, int *);
/* lock API functions */
typedef struct omp_lock_t {
void * _lk;
} omp_lock_t;
extern void __KAI_KMPC_CONVENTION omp_init_lock (omp_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_set_lock (omp_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_unset_lock (omp_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_destroy_lock (omp_lock_t *);
extern int __KAI_KMPC_CONVENTION omp_test_lock (omp_lock_t *);
/* nested lock API functions */
typedef struct omp_nest_lock_t {
void * _lk;
} omp_nest_lock_t;
extern void __KAI_KMPC_CONVENTION omp_init_nest_lock (omp_nest_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_set_nest_lock (omp_nest_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_unset_nest_lock (omp_nest_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_destroy_nest_lock (omp_nest_lock_t *);
extern int __KAI_KMPC_CONVENTION omp_test_nest_lock (omp_nest_lock_t *);
/* time API functions */
extern double __KAI_KMPC_CONVENTION omp_get_wtime (void);
extern double __KAI_KMPC_CONVENTION omp_get_wtick (void);
# include <stdlib.h>
/* kmp API functions */
extern int __KAI_KMPC_CONVENTION kmp_get_stacksize (void);
extern void __KAI_KMPC_CONVENTION kmp_set_stacksize (int);
extern size_t __KAI_KMPC_CONVENTION kmp_get_stacksize_s (void);
extern void __KAI_KMPC_CONVENTION kmp_set_stacksize_s (size_t);
extern int __KAI_KMPC_CONVENTION kmp_get_blocktime (void);
extern int __KAI_KMPC_CONVENTION kmp_get_library (void);
extern void __KAI_KMPC_CONVENTION kmp_set_blocktime (int);
extern void __KAI_KMPC_CONVENTION kmp_set_library (int);
extern void __KAI_KMPC_CONVENTION kmp_set_library_serial (void);
extern void __KAI_KMPC_CONVENTION kmp_set_library_turnaround (void);
extern void __KAI_KMPC_CONVENTION kmp_set_library_throughput (void);
extern void __KAI_KMPC_CONVENTION kmp_set_defaults (char const *);
/* affinity API functions */
typedef void * kmp_affinity_mask_t;
extern int __KAI_KMPC_CONVENTION kmp_set_affinity (kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_get_affinity (kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_get_affinity_max_proc (void);
extern void __KAI_KMPC_CONVENTION kmp_create_affinity_mask (kmp_affinity_mask_t *);
extern void __KAI_KMPC_CONVENTION kmp_destroy_affinity_mask (kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_set_affinity_mask_proc (int, kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_unset_affinity_mask_proc (int, kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_get_affinity_mask_proc (int, kmp_affinity_mask_t *);
extern void * __KAI_KMPC_CONVENTION kmp_malloc (size_t);
extern void * __KAI_KMPC_CONVENTION kmp_aligned_malloc (size_t, size_t);
extern void * __KAI_KMPC_CONVENTION kmp_calloc (size_t, size_t);
extern void * __KAI_KMPC_CONVENTION kmp_realloc (void *, size_t);
extern void __KAI_KMPC_CONVENTION kmp_free (void *);
extern void __KAI_KMPC_CONVENTION kmp_set_warnings_on(void);
extern void __KAI_KMPC_CONVENTION kmp_set_warnings_off(void);
# undef __KAI_KMPC_CONVENTION
/* Warning:
    The following typedefs are not standard; they are deprecated and will be removed in a future release.
*/
typedef int omp_int_t;
typedef double omp_wtime_t;
# ifdef __cplusplus
}
# endif
#endif /* __OMP_H */

! include/30/omp_lib.f.var
!
!//===----------------------------------------------------------------------===//
!//
!// The LLVM Compiler Infrastructure
!//
!// This file is dual licensed under the MIT and the University of Illinois Open
!// Source Licenses. See LICENSE.txt for details.
!//
!//===----------------------------------------------------------------------===//
!
!***
!*** Some of the directives for the following routine extend past column 72,
!*** so process this file in 132-column mode.
!***
!dec$ fixedformlinesize:132
module omp_lib_kinds
integer, parameter :: omp_integer_kind = 4
integer, parameter :: omp_logical_kind = 4
integer, parameter :: omp_real_kind = 4
integer, parameter :: omp_lock_kind = int_ptr_kind()
integer, parameter :: omp_nest_lock_kind = int_ptr_kind()
integer, parameter :: omp_sched_kind = omp_integer_kind
integer, parameter :: kmp_pointer_kind = int_ptr_kind()
integer, parameter :: kmp_size_t_kind = int_ptr_kind()
integer, parameter :: kmp_affinity_mask_kind = int_ptr_kind()
end module omp_lib_kinds
module omp_lib
use omp_lib_kinds
integer (kind=omp_integer_kind), parameter :: kmp_version_major = @LIBOMP_VERSION_MAJOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_minor = @LIBOMP_VERSION_MINOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_build = @LIBOMP_VERSION_BUILD@
character(*), parameter :: kmp_build_date = '@LIBOMP_BUILD_DATE@'
integer (kind=omp_integer_kind), parameter :: openmp_version = @LIBOMP_OMP_YEAR_MONTH@
integer(kind=omp_sched_kind), parameter :: omp_sched_static = 1
integer(kind=omp_sched_kind), parameter :: omp_sched_dynamic = 2
integer(kind=omp_sched_kind), parameter :: omp_sched_guided = 3
integer(kind=omp_sched_kind), parameter :: omp_sched_auto = 4
interface
! ***
! *** omp_* entry points
! ***
subroutine omp_set_num_threads(nthreads)
use omp_lib_kinds
integer (kind=omp_integer_kind) nthreads
end subroutine omp_set_num_threads
subroutine omp_set_dynamic(enable)
use omp_lib_kinds
logical (kind=omp_logical_kind) enable
end subroutine omp_set_dynamic
subroutine omp_set_nested(enable)
use omp_lib_kinds
logical (kind=omp_logical_kind) enable
end subroutine omp_set_nested
function omp_get_num_threads()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_threads
end function omp_get_num_threads
function omp_get_max_threads()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_threads
end function omp_get_max_threads
function omp_get_thread_num()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_num
end function omp_get_thread_num
function omp_get_num_procs()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_procs
end function omp_get_num_procs
function omp_in_parallel()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_parallel
end function omp_in_parallel
function omp_get_dynamic()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_dynamic
end function omp_get_dynamic
function omp_get_nested()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_nested
end function omp_get_nested
function omp_get_thread_limit()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_limit
end function omp_get_thread_limit
subroutine omp_set_max_active_levels(max_levels)
use omp_lib_kinds
integer (kind=omp_integer_kind) max_levels
end subroutine omp_set_max_active_levels
function omp_get_max_active_levels()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_active_levels
end function omp_get_max_active_levels
function omp_get_level()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_level
end function omp_get_level
function omp_get_active_level()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_active_level
end function omp_get_active_level
function omp_get_ancestor_thread_num(level)
use omp_lib_kinds
integer (kind=omp_integer_kind) level
integer (kind=omp_integer_kind) omp_get_ancestor_thread_num
end function omp_get_ancestor_thread_num
function omp_get_team_size(level)
use omp_lib_kinds
integer (kind=omp_integer_kind) level
integer (kind=omp_integer_kind) omp_get_team_size
end function omp_get_team_size
subroutine omp_set_schedule(kind, modifier)
use omp_lib_kinds
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) modifier
end subroutine omp_set_schedule
subroutine omp_get_schedule(kind, modifier)
use omp_lib_kinds
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) modifier
end subroutine omp_get_schedule
function omp_get_wtime()
double precision omp_get_wtime
end function omp_get_wtime
function omp_get_wtick ()
double precision omp_get_wtick
end function omp_get_wtick
subroutine omp_init_lock(lockvar)
use omp_lib_kinds
integer (kind=omp_lock_kind) lockvar
end subroutine omp_init_lock
subroutine omp_destroy_lock(lockvar)
use omp_lib_kinds
integer (kind=omp_lock_kind) lockvar
end subroutine omp_destroy_lock
subroutine omp_set_lock(lockvar)
use omp_lib_kinds
integer (kind=omp_lock_kind) lockvar
end subroutine omp_set_lock
subroutine omp_unset_lock(lockvar)
use omp_lib_kinds
integer (kind=omp_lock_kind) lockvar
end subroutine omp_unset_lock
function omp_test_lock(lockvar)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_test_lock
integer (kind=omp_lock_kind) lockvar
end function omp_test_lock
subroutine omp_init_nest_lock(lockvar)
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) lockvar
end subroutine omp_init_nest_lock
subroutine omp_destroy_nest_lock(lockvar)
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) lockvar
end subroutine omp_destroy_nest_lock
subroutine omp_set_nest_lock(lockvar)
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) lockvar
end subroutine omp_set_nest_lock
subroutine omp_unset_nest_lock(lockvar)
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) lockvar
end subroutine omp_unset_nest_lock
function omp_test_nest_lock(lockvar)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_test_nest_lock
integer (kind=omp_nest_lock_kind) lockvar
end function omp_test_nest_lock
! ***
! *** kmp_* entry points
! ***
subroutine kmp_set_stacksize(size)
use omp_lib_kinds
integer (kind=omp_integer_kind) size
end subroutine kmp_set_stacksize
subroutine kmp_set_stacksize_s(size)
use omp_lib_kinds
integer (kind=kmp_size_t_kind) size
end subroutine kmp_set_stacksize_s
subroutine kmp_set_blocktime(msec)
use omp_lib_kinds
integer (kind=omp_integer_kind) msec
end subroutine kmp_set_blocktime
subroutine kmp_set_library_serial()
end subroutine kmp_set_library_serial
subroutine kmp_set_library_turnaround()
end subroutine kmp_set_library_turnaround
subroutine kmp_set_library_throughput()
end subroutine kmp_set_library_throughput
subroutine kmp_set_library(libnum)
use omp_lib_kinds
integer (kind=omp_integer_kind) libnum
end subroutine kmp_set_library
subroutine kmp_set_defaults(string)
character*(*) string
end subroutine kmp_set_defaults
function kmp_get_stacksize()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_stacksize
end function kmp_get_stacksize
function kmp_get_stacksize_s()
use omp_lib_kinds
integer (kind=kmp_size_t_kind) kmp_get_stacksize_s
end function kmp_get_stacksize_s
function kmp_get_blocktime()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_blocktime
end function kmp_get_blocktime
function kmp_get_library()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_library
end function kmp_get_library
function kmp_set_affinity(mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity
function kmp_get_affinity(mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity
function kmp_get_affinity_max_proc()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_max_proc
end function kmp_get_affinity_max_proc
subroutine kmp_create_affinity_mask(mask)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_create_affinity_mask
subroutine kmp_destroy_affinity_mask(mask)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_destroy_affinity_mask
function kmp_set_affinity_mask_proc(proc, mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity_mask_proc
function kmp_unset_affinity_mask_proc(proc, mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_unset_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_unset_affinity_mask_proc
function kmp_get_affinity_mask_proc(proc, mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity_mask_proc
function kmp_malloc(size)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_malloc
integer (kind=kmp_size_t_kind) size
end function kmp_malloc
function kmp_aligned_malloc(size, alignment)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_aligned_malloc
integer (kind=kmp_size_t_kind) size
integer (kind=kmp_size_t_kind) alignment
end function kmp_aligned_malloc
function kmp_calloc(nelem, elsize)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_calloc
integer (kind=kmp_size_t_kind) nelem
integer (kind=kmp_size_t_kind) elsize
end function kmp_calloc
function kmp_realloc(ptr, size)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_realloc
integer (kind=kmp_pointer_kind) ptr
integer (kind=kmp_size_t_kind) size
end function kmp_realloc
subroutine kmp_free(ptr)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) ptr
end subroutine kmp_free
subroutine kmp_set_warnings_on()
end subroutine kmp_set_warnings_on
subroutine kmp_set_warnings_off()
end subroutine kmp_set_warnings_off
end interface
!dec$ if defined(_WIN32)
!dec$ if defined(_WIN64) .or. defined(_M_AMD64)
!***
!*** The Fortran entry points must be in uppercase, even if the /Qlowercase
!*** option is specified. The alias attribute ensures that the specified
!*** string is used as the entry point.
!***
!*** On the Windows* OS IA-32 architecture, the Fortran entry points have an
!*** underscore prepended. On the Windows* OS Intel(R) 64
!*** architecture, no underscore is prepended.
!***
!dec$ attributes alias:'OMP_SET_NUM_THREADS' :: omp_set_num_threads
!dec$ attributes alias:'OMP_SET_DYNAMIC' :: omp_set_dynamic
!dec$ attributes alias:'OMP_SET_NESTED' :: omp_set_nested
!dec$ attributes alias:'OMP_GET_NUM_THREADS' :: omp_get_num_threads
!dec$ attributes alias:'OMP_GET_MAX_THREADS' :: omp_get_max_threads
!dec$ attributes alias:'OMP_GET_THREAD_NUM' :: omp_get_thread_num
!dec$ attributes alias:'OMP_GET_NUM_PROCS' :: omp_get_num_procs
!dec$ attributes alias:'OMP_IN_PARALLEL' :: omp_in_parallel
!dec$ attributes alias:'OMP_GET_DYNAMIC' :: omp_get_dynamic
!dec$ attributes alias:'OMP_GET_NESTED' :: omp_get_nested
!dec$ attributes alias:'OMP_GET_THREAD_LIMIT' :: omp_get_thread_limit
!dec$ attributes alias:'OMP_SET_MAX_ACTIVE_LEVELS' :: omp_set_max_active_levels
!dec$ attributes alias:'OMP_GET_MAX_ACTIVE_LEVELS' :: omp_get_max_active_levels
!dec$ attributes alias:'OMP_GET_LEVEL' :: omp_get_level
!dec$ attributes alias:'OMP_GET_ACTIVE_LEVEL' :: omp_get_active_level
!dec$ attributes alias:'OMP_GET_ANCESTOR_THREAD_NUM' :: omp_get_ancestor_thread_num
!dec$ attributes alias:'OMP_GET_TEAM_SIZE' :: omp_get_team_size
!dec$ attributes alias:'OMP_SET_SCHEDULE' :: omp_set_schedule
!dec$ attributes alias:'OMP_GET_SCHEDULE' :: omp_get_schedule
!dec$ attributes alias:'OMP_GET_WTIME' :: omp_get_wtime
!dec$ attributes alias:'OMP_GET_WTICK' :: omp_get_wtick
!dec$ attributes alias:'omp_init_lock' :: omp_init_lock
!dec$ attributes alias:'omp_destroy_lock' :: omp_destroy_lock
!dec$ attributes alias:'omp_set_lock' :: omp_set_lock
!dec$ attributes alias:'omp_unset_lock' :: omp_unset_lock
!dec$ attributes alias:'omp_test_lock' :: omp_test_lock
!dec$ attributes alias:'omp_init_nest_lock' :: omp_init_nest_lock
!dec$ attributes alias:'omp_destroy_nest_lock' :: omp_destroy_nest_lock
!dec$ attributes alias:'omp_set_nest_lock' :: omp_set_nest_lock
!dec$ attributes alias:'omp_unset_nest_lock' :: omp_unset_nest_lock
!dec$ attributes alias:'omp_test_nest_lock' :: omp_test_nest_lock
!dec$ attributes alias:'KMP_SET_STACKSIZE'::kmp_set_stacksize
!dec$ attributes alias:'KMP_SET_STACKSIZE_S'::kmp_set_stacksize_s
!dec$ attributes alias:'KMP_SET_BLOCKTIME'::kmp_set_blocktime
!dec$ attributes alias:'KMP_SET_LIBRARY_SERIAL'::kmp_set_library_serial
!dec$ attributes alias:'KMP_SET_LIBRARY_TURNAROUND'::kmp_set_library_turnaround
!dec$ attributes alias:'KMP_SET_LIBRARY_THROUGHPUT'::kmp_set_library_throughput
!dec$ attributes alias:'KMP_SET_LIBRARY'::kmp_set_library
!dec$ attributes alias:'KMP_GET_STACKSIZE'::kmp_get_stacksize
!dec$ attributes alias:'KMP_GET_STACKSIZE_S'::kmp_get_stacksize_s
!dec$ attributes alias:'KMP_GET_BLOCKTIME'::kmp_get_blocktime
!dec$ attributes alias:'KMP_GET_LIBRARY'::kmp_get_library
!dec$ attributes alias:'KMP_SET_AFFINITY'::kmp_set_affinity
!dec$ attributes alias:'KMP_GET_AFFINITY'::kmp_get_affinity
!dec$ attributes alias:'KMP_GET_AFFINITY_MAX_PROC'::kmp_get_affinity_max_proc
!dec$ attributes alias:'KMP_CREATE_AFFINITY_MASK'::kmp_create_affinity_mask
!dec$ attributes alias:'KMP_DESTROY_AFFINITY_MASK'::kmp_destroy_affinity_mask
!dec$ attributes alias:'KMP_SET_AFFINITY_MASK_PROC'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'KMP_UNSET_AFFINITY_MASK_PROC'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'KMP_GET_AFFINITY_MASK_PROC'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'KMP_MALLOC'::kmp_malloc
!dec$ attributes alias:'KMP_ALIGNED_MALLOC'::kmp_aligned_malloc
!dec$ attributes alias:'KMP_CALLOC'::kmp_calloc
!dec$ attributes alias:'KMP_REALLOC'::kmp_realloc
!dec$ attributes alias:'KMP_FREE'::kmp_free
!dec$ attributes alias:'KMP_SET_WARNINGS_ON'::kmp_set_warnings_on
!dec$ attributes alias:'KMP_SET_WARNINGS_OFF'::kmp_set_warnings_off
!dec$ else
!***
!*** On Windows* OS IA-32 architecture, the Fortran entry points have an underscore prepended.
!***
!dec$ attributes alias:'_OMP_SET_NUM_THREADS' :: omp_set_num_threads
!dec$ attributes alias:'_OMP_SET_DYNAMIC' :: omp_set_dynamic
!dec$ attributes alias:'_OMP_SET_NESTED' :: omp_set_nested
!dec$ attributes alias:'_OMP_GET_NUM_THREADS' :: omp_get_num_threads
!dec$ attributes alias:'_OMP_GET_MAX_THREADS' :: omp_get_max_threads
!dec$ attributes alias:'_OMP_GET_THREAD_NUM' :: omp_get_thread_num
!dec$ attributes alias:'_OMP_GET_NUM_PROCS' :: omp_get_num_procs
!dec$ attributes alias:'_OMP_IN_PARALLEL' :: omp_in_parallel
!dec$ attributes alias:'_OMP_GET_DYNAMIC' :: omp_get_dynamic
!dec$ attributes alias:'_OMP_GET_NESTED' :: omp_get_nested
!dec$ attributes alias:'_OMP_GET_THREAD_LIMIT' :: omp_get_thread_limit
!dec$ attributes alias:'_OMP_SET_MAX_ACTIVE_LEVELS' :: omp_set_max_active_levels
!dec$ attributes alias:'_OMP_GET_MAX_ACTIVE_LEVELS' :: omp_get_max_active_levels
!dec$ attributes alias:'_OMP_GET_LEVEL' :: omp_get_level
!dec$ attributes alias:'_OMP_GET_ACTIVE_LEVEL' :: omp_get_active_level
!dec$ attributes alias:'_OMP_GET_ANCESTOR_THREAD_NUM' :: omp_get_ancestor_thread_num
!dec$ attributes alias:'_OMP_GET_TEAM_SIZE' :: omp_get_team_size
!dec$ attributes alias:'_OMP_SET_SCHEDULE' :: omp_set_schedule
!dec$ attributes alias:'_OMP_GET_SCHEDULE' :: omp_get_schedule
!dec$ attributes alias:'_OMP_GET_WTIME' :: omp_get_wtime
!dec$ attributes alias:'_OMP_GET_WTICK' :: omp_get_wtick
!dec$ attributes alias:'_omp_init_lock' :: omp_init_lock
!dec$ attributes alias:'_omp_destroy_lock' :: omp_destroy_lock
!dec$ attributes alias:'_omp_set_lock' :: omp_set_lock
!dec$ attributes alias:'_omp_unset_lock' :: omp_unset_lock
!dec$ attributes alias:'_omp_test_lock' :: omp_test_lock
!dec$ attributes alias:'_omp_init_nest_lock' :: omp_init_nest_lock
!dec$ attributes alias:'_omp_destroy_nest_lock' :: omp_destroy_nest_lock
!dec$ attributes alias:'_omp_set_nest_lock' :: omp_set_nest_lock
!dec$ attributes alias:'_omp_unset_nest_lock' :: omp_unset_nest_lock
!dec$ attributes alias:'_omp_test_nest_lock' :: omp_test_nest_lock
!dec$ attributes alias:'_KMP_SET_STACKSIZE'::kmp_set_stacksize
!dec$ attributes alias:'_KMP_SET_STACKSIZE_S'::kmp_set_stacksize_s
!dec$ attributes alias:'_KMP_SET_BLOCKTIME'::kmp_set_blocktime
!dec$ attributes alias:'_KMP_SET_LIBRARY_SERIAL'::kmp_set_library_serial
!dec$ attributes alias:'_KMP_SET_LIBRARY_TURNAROUND'::kmp_set_library_turnaround
!dec$ attributes alias:'_KMP_SET_LIBRARY_THROUGHPUT'::kmp_set_library_throughput
!dec$ attributes alias:'_KMP_SET_LIBRARY'::kmp_set_library
!dec$ attributes alias:'_KMP_GET_STACKSIZE'::kmp_get_stacksize
!dec$ attributes alias:'_KMP_GET_STACKSIZE_S'::kmp_get_stacksize_s
!dec$ attributes alias:'_KMP_GET_BLOCKTIME'::kmp_get_blocktime
!dec$ attributes alias:'_KMP_GET_LIBRARY'::kmp_get_library
!dec$ attributes alias:'_KMP_SET_AFFINITY'::kmp_set_affinity
!dec$ attributes alias:'_KMP_GET_AFFINITY'::kmp_get_affinity
!dec$ attributes alias:'_KMP_GET_AFFINITY_MAX_PROC'::kmp_get_affinity_max_proc
!dec$ attributes alias:'_KMP_CREATE_AFFINITY_MASK'::kmp_create_affinity_mask
!dec$ attributes alias:'_KMP_DESTROY_AFFINITY_MASK'::kmp_destroy_affinity_mask
!dec$ attributes alias:'_KMP_SET_AFFINITY_MASK_PROC'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'_KMP_UNSET_AFFINITY_MASK_PROC'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'_KMP_GET_AFFINITY_MASK_PROC'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'_KMP_MALLOC'::kmp_malloc
!dec$ attributes alias:'_KMP_ALIGNED_MALLOC'::kmp_aligned_malloc
!dec$ attributes alias:'_KMP_CALLOC'::kmp_calloc
!dec$ attributes alias:'_KMP_REALLOC'::kmp_realloc
!dec$ attributes alias:'_KMP_FREE'::kmp_free
!dec$ attributes alias:'_KMP_SET_WARNINGS_ON'::kmp_set_warnings_on
!dec$ attributes alias:'_KMP_SET_WARNINGS_OFF'::kmp_set_warnings_off
!dec$ endif
!dec$ endif
!dec$ if defined(__linux)
!***
!*** The Linux* OS entry points are in lowercase, with an underscore appended.
!***
!dec$ attributes alias:'omp_set_num_threads_'::omp_set_num_threads
!dec$ attributes alias:'omp_set_dynamic_'::omp_set_dynamic
!dec$ attributes alias:'omp_set_nested_'::omp_set_nested
!dec$ attributes alias:'omp_get_num_threads_'::omp_get_num_threads
!dec$ attributes alias:'omp_get_max_threads_'::omp_get_max_threads
!dec$ attributes alias:'omp_get_thread_num_'::omp_get_thread_num
!dec$ attributes alias:'omp_get_num_procs_'::omp_get_num_procs
!dec$ attributes alias:'omp_in_parallel_'::omp_in_parallel
!dec$ attributes alias:'omp_get_dynamic_'::omp_get_dynamic
!dec$ attributes alias:'omp_get_nested_'::omp_get_nested
!dec$ attributes alias:'omp_get_thread_limit_'::omp_get_thread_limit
!dec$ attributes alias:'omp_set_max_active_levels_'::omp_set_max_active_levels
!dec$ attributes alias:'omp_get_max_active_levels_'::omp_get_max_active_levels
!dec$ attributes alias:'omp_get_level_'::omp_get_level
!dec$ attributes alias:'omp_get_active_level_'::omp_get_active_level
!dec$ attributes alias:'omp_get_ancestor_thread_num_'::omp_get_ancestor_thread_num
!dec$ attributes alias:'omp_get_team_size_'::omp_get_team_size
!dec$ attributes alias:'omp_set_schedule_'::omp_set_schedule
!dec$ attributes alias:'omp_get_schedule_'::omp_get_schedule
!dec$ attributes alias:'omp_get_wtime_'::omp_get_wtime
!dec$ attributes alias:'omp_get_wtick_'::omp_get_wtick
!dec$ attributes alias:'omp_init_lock_'::omp_init_lock
!dec$ attributes alias:'omp_destroy_lock_'::omp_destroy_lock
!dec$ attributes alias:'omp_set_lock_'::omp_set_lock
!dec$ attributes alias:'omp_unset_lock_'::omp_unset_lock
!dec$ attributes alias:'omp_test_lock_'::omp_test_lock
!dec$ attributes alias:'omp_init_nest_lock_'::omp_init_nest_lock
!dec$ attributes alias:'omp_destroy_nest_lock_'::omp_destroy_nest_lock
!dec$ attributes alias:'omp_set_nest_lock_'::omp_set_nest_lock
!dec$ attributes alias:'omp_unset_nest_lock_'::omp_unset_nest_lock
!dec$ attributes alias:'omp_test_nest_lock_'::omp_test_nest_lock
!dec$ attributes alias:'kmp_set_stacksize_'::kmp_set_stacksize
!dec$ attributes alias:'kmp_set_stacksize_s_'::kmp_set_stacksize_s
!dec$ attributes alias:'kmp_set_blocktime_'::kmp_set_blocktime
!dec$ attributes alias:'kmp_set_library_serial_'::kmp_set_library_serial
!dec$ attributes alias:'kmp_set_library_turnaround_'::kmp_set_library_turnaround
!dec$ attributes alias:'kmp_set_library_throughput_'::kmp_set_library_throughput
!dec$ attributes alias:'kmp_set_library_'::kmp_set_library
!dec$ attributes alias:'kmp_get_stacksize_'::kmp_get_stacksize
!dec$ attributes alias:'kmp_get_stacksize_s_'::kmp_get_stacksize_s
!dec$ attributes alias:'kmp_get_blocktime_'::kmp_get_blocktime
!dec$ attributes alias:'kmp_get_library_'::kmp_get_library
!dec$ attributes alias:'kmp_set_affinity_'::kmp_set_affinity
!dec$ attributes alias:'kmp_get_affinity_'::kmp_get_affinity
!dec$ attributes alias:'kmp_get_affinity_max_proc_'::kmp_get_affinity_max_proc
!dec$ attributes alias:'kmp_create_affinity_mask_'::kmp_create_affinity_mask
!dec$ attributes alias:'kmp_destroy_affinity_mask_'::kmp_destroy_affinity_mask
!dec$ attributes alias:'kmp_set_affinity_mask_proc_'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'kmp_unset_affinity_mask_proc_'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'kmp_get_affinity_mask_proc_'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'kmp_malloc_'::kmp_malloc
!dec$ attributes alias:'kmp_aligned_malloc_'::kmp_aligned_malloc
!dec$ attributes alias:'kmp_calloc_'::kmp_calloc
!dec$ attributes alias:'kmp_realloc_'::kmp_realloc
!dec$ attributes alias:'kmp_free_'::kmp_free
!dec$ attributes alias:'kmp_set_warnings_on_'::kmp_set_warnings_on
!dec$ attributes alias:'kmp_set_warnings_off_'::kmp_set_warnings_off
!dec$ endif
!dec$ if defined(__APPLE__)
!***
!*** The Mac entry points are in lowercase, with both an underscore
!*** prepended and an underscore appended.
!***
!dec$ attributes alias:'_omp_set_num_threads_'::omp_set_num_threads
!dec$ attributes alias:'_omp_set_dynamic_'::omp_set_dynamic
!dec$ attributes alias:'_omp_set_nested_'::omp_set_nested
!dec$ attributes alias:'_omp_get_num_threads_'::omp_get_num_threads
!dec$ attributes alias:'_omp_get_max_threads_'::omp_get_max_threads
!dec$ attributes alias:'_omp_get_thread_num_'::omp_get_thread_num
!dec$ attributes alias:'_omp_get_num_procs_'::omp_get_num_procs
!dec$ attributes alias:'_omp_in_parallel_'::omp_in_parallel
!dec$ attributes alias:'_omp_get_dynamic_'::omp_get_dynamic
!dec$ attributes alias:'_omp_get_nested_'::omp_get_nested
!dec$ attributes alias:'_omp_get_thread_limit_'::omp_get_thread_limit
!dec$ attributes alias:'_omp_set_max_active_levels_'::omp_set_max_active_levels
!dec$ attributes alias:'_omp_get_max_active_levels_'::omp_get_max_active_levels
!dec$ attributes alias:'_omp_get_level_'::omp_get_level
!dec$ attributes alias:'_omp_get_active_level_'::omp_get_active_level
!dec$ attributes alias:'_omp_get_ancestor_thread_num_'::omp_get_ancestor_thread_num
!dec$ attributes alias:'_omp_get_team_size_'::omp_get_team_size
!dec$ attributes alias:'_omp_set_schedule_'::omp_set_schedule
!dec$ attributes alias:'_omp_get_schedule_'::omp_get_schedule
!dec$ attributes alias:'_omp_get_wtime_'::omp_get_wtime
!dec$ attributes alias:'_omp_get_wtick_'::omp_get_wtick
!dec$ attributes alias:'_omp_init_lock_'::omp_init_lock
!dec$ attributes alias:'_omp_destroy_lock_'::omp_destroy_lock
!dec$ attributes alias:'_omp_set_lock_'::omp_set_lock
!dec$ attributes alias:'_omp_unset_lock_'::omp_unset_lock
!dec$ attributes alias:'_omp_test_lock_'::omp_test_lock
!dec$ attributes alias:'_omp_init_nest_lock_'::omp_init_nest_lock
!dec$ attributes alias:'_omp_destroy_nest_lock_'::omp_destroy_nest_lock
!dec$ attributes alias:'_omp_set_nest_lock_'::omp_set_nest_lock
!dec$ attributes alias:'_omp_unset_nest_lock_'::omp_unset_nest_lock
!dec$ attributes alias:'_omp_test_nest_lock_'::omp_test_nest_lock
!dec$ attributes alias:'_kmp_set_stacksize_'::kmp_set_stacksize
!dec$ attributes alias:'_kmp_set_stacksize_s_'::kmp_set_stacksize_s
!dec$ attributes alias:'_kmp_set_blocktime_'::kmp_set_blocktime
!dec$ attributes alias:'_kmp_set_library_serial_'::kmp_set_library_serial
!dec$ attributes alias:'_kmp_set_library_turnaround_'::kmp_set_library_turnaround
!dec$ attributes alias:'_kmp_set_library_throughput_'::kmp_set_library_throughput
!dec$ attributes alias:'_kmp_set_library_'::kmp_set_library
!dec$ attributes alias:'_kmp_get_stacksize_'::kmp_get_stacksize
!dec$ attributes alias:'_kmp_get_stacksize_s_'::kmp_get_stacksize_s
!dec$ attributes alias:'_kmp_get_blocktime_'::kmp_get_blocktime
!dec$ attributes alias:'_kmp_get_library_'::kmp_get_library
!dec$ attributes alias:'_kmp_set_affinity_'::kmp_set_affinity
!dec$ attributes alias:'_kmp_get_affinity_'::kmp_get_affinity
!dec$ attributes alias:'_kmp_get_affinity_max_proc_'::kmp_get_affinity_max_proc
!dec$ attributes alias:'_kmp_create_affinity_mask_'::kmp_create_affinity_mask
!dec$ attributes alias:'_kmp_destroy_affinity_mask_'::kmp_destroy_affinity_mask
!dec$ attributes alias:'_kmp_set_affinity_mask_proc_'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'_kmp_unset_affinity_mask_proc_'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'_kmp_get_affinity_mask_proc_'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'_kmp_malloc_'::kmp_malloc
!dec$ attributes alias:'_kmp_aligned_malloc_'::kmp_aligned_malloc
!dec$ attributes alias:'_kmp_calloc_'::kmp_calloc
!dec$ attributes alias:'_kmp_realloc_'::kmp_realloc
!dec$ attributes alias:'_kmp_free_'::kmp_free
!dec$ attributes alias:'_kmp_set_warnings_on_'::kmp_set_warnings_on
!dec$ attributes alias:'_kmp_set_warnings_off_'::kmp_set_warnings_off
!dec$ endif
end module omp_lib


@ -0,0 +1,365 @@
! include/30/omp_lib.f90.var
!
!//===----------------------------------------------------------------------===//
!//
!// The LLVM Compiler Infrastructure
!//
!// This file is dual licensed under the MIT and the University of Illinois Open
!// Source Licenses. See LICENSE.txt for details.
!//
!//===----------------------------------------------------------------------===//
!
module omp_lib_kinds
use, intrinsic :: iso_c_binding
integer, parameter :: omp_integer_kind = c_int
integer, parameter :: omp_logical_kind = 4
integer, parameter :: omp_real_kind = c_float
integer, parameter :: kmp_double_kind = c_double
integer, parameter :: omp_lock_kind = c_intptr_t
integer, parameter :: omp_nest_lock_kind = c_intptr_t
integer, parameter :: omp_sched_kind = omp_integer_kind
integer, parameter :: kmp_pointer_kind = c_intptr_t
integer, parameter :: kmp_size_t_kind = c_size_t
integer, parameter :: kmp_affinity_mask_kind = c_intptr_t
end module omp_lib_kinds
module omp_lib
use omp_lib_kinds
integer (kind=omp_integer_kind), parameter :: openmp_version = @LIBOMP_OMP_YEAR_MONTH@
integer (kind=omp_integer_kind), parameter :: kmp_version_major = @LIBOMP_VERSION_MAJOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_minor = @LIBOMP_VERSION_MINOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_build = @LIBOMP_VERSION_BUILD@
character(*) kmp_build_date
parameter( kmp_build_date = '@LIBOMP_BUILD_DATE@' )
integer(kind=omp_sched_kind), parameter :: omp_sched_static = 1
integer(kind=omp_sched_kind), parameter :: omp_sched_dynamic = 2
integer(kind=omp_sched_kind), parameter :: omp_sched_guided = 3
integer(kind=omp_sched_kind), parameter :: omp_sched_auto = 4
interface
! ***
! *** omp_* entry points
! ***
subroutine omp_set_num_threads(nthreads) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: nthreads
end subroutine omp_set_num_threads
subroutine omp_set_dynamic(enable) bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind), value :: enable
end subroutine omp_set_dynamic
subroutine omp_set_nested(enable) bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind), value :: enable
end subroutine omp_set_nested
function omp_get_num_threads() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_threads
end function omp_get_num_threads
function omp_get_max_threads() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_threads
end function omp_get_max_threads
function omp_get_thread_num() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_num
end function omp_get_thread_num
function omp_get_num_procs() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_procs
end function omp_get_num_procs
function omp_in_parallel() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_parallel
end function omp_in_parallel
function omp_in_final() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_final
end function omp_in_final
function omp_get_dynamic() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_dynamic
end function omp_get_dynamic
function omp_get_nested() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_nested
end function omp_get_nested
function omp_get_thread_limit() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_limit
end function omp_get_thread_limit
subroutine omp_set_max_active_levels(max_levels) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: max_levels
end subroutine omp_set_max_active_levels
function omp_get_max_active_levels() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_active_levels
end function omp_get_max_active_levels
function omp_get_level() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) :: omp_get_level
end function omp_get_level
function omp_get_active_level() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) :: omp_get_active_level
end function omp_get_active_level
function omp_get_ancestor_thread_num(level) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_ancestor_thread_num
integer (kind=omp_integer_kind), value :: level
end function omp_get_ancestor_thread_num
function omp_get_team_size(level) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_team_size
integer (kind=omp_integer_kind), value :: level
end function omp_get_team_size
subroutine omp_set_schedule(kind, modifier) bind(c)
use omp_lib_kinds
integer (kind=omp_sched_kind), value :: kind
integer (kind=omp_integer_kind), value :: modifier
end subroutine omp_set_schedule
subroutine omp_get_schedule(kind, modifier) bind(c)
use omp_lib_kinds
integer (kind=omp_sched_kind) :: kind
integer (kind=omp_integer_kind) :: modifier
end subroutine omp_get_schedule
function omp_get_wtime() bind(c)
use omp_lib_kinds
real (kind=kmp_double_kind) omp_get_wtime
end function omp_get_wtime
function omp_get_wtick() bind(c)
use omp_lib_kinds
real (kind=kmp_double_kind) omp_get_wtick
end function omp_get_wtick
subroutine omp_init_lock(lockvar) bind(c)
use omp_lib_kinds
integer (kind=omp_lock_kind) lockvar
end subroutine omp_init_lock
subroutine omp_destroy_lock(lockvar) bind(c)
use omp_lib_kinds
integer (kind=omp_lock_kind) lockvar
end subroutine omp_destroy_lock
subroutine omp_set_lock(lockvar) bind(c)
use omp_lib_kinds
integer (kind=omp_lock_kind) lockvar
end subroutine omp_set_lock
subroutine omp_unset_lock(lockvar) bind(c)
use omp_lib_kinds
integer (kind=omp_lock_kind) lockvar
end subroutine omp_unset_lock
function omp_test_lock(lockvar) bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_test_lock
integer (kind=omp_lock_kind) lockvar
end function omp_test_lock
subroutine omp_init_nest_lock(lockvar) bind(c)
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) lockvar
end subroutine omp_init_nest_lock
subroutine omp_destroy_nest_lock(lockvar) bind(c)
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) lockvar
end subroutine omp_destroy_nest_lock
subroutine omp_set_nest_lock(lockvar) bind(c)
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) lockvar
end subroutine omp_set_nest_lock
subroutine omp_unset_nest_lock(lockvar) bind(c)
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) lockvar
end subroutine omp_unset_nest_lock
function omp_test_nest_lock(lockvar) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_test_nest_lock
integer (kind=omp_nest_lock_kind) lockvar
end function omp_test_nest_lock
! ***
! *** kmp_* entry points
! ***
subroutine kmp_set_stacksize(size) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: size
end subroutine kmp_set_stacksize
subroutine kmp_set_stacksize_s(size) bind(c)
use omp_lib_kinds
integer (kind=kmp_size_t_kind), value :: size
end subroutine kmp_set_stacksize_s
subroutine kmp_set_blocktime(msec) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: msec
end subroutine kmp_set_blocktime
subroutine kmp_set_library_serial() bind(c)
end subroutine kmp_set_library_serial
subroutine kmp_set_library_turnaround() bind(c)
end subroutine kmp_set_library_turnaround
subroutine kmp_set_library_throughput() bind(c)
end subroutine kmp_set_library_throughput
subroutine kmp_set_library(libnum) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: libnum
end subroutine kmp_set_library
subroutine kmp_set_defaults(string) bind(c)
use, intrinsic :: iso_c_binding
character (kind=c_char) :: string(*)
end subroutine kmp_set_defaults
function kmp_get_stacksize() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_stacksize
end function kmp_get_stacksize
function kmp_get_stacksize_s() bind(c)
use omp_lib_kinds
integer (kind=kmp_size_t_kind) kmp_get_stacksize_s
end function kmp_get_stacksize_s
function kmp_get_blocktime() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_blocktime
end function kmp_get_blocktime
function kmp_get_library() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_library
end function kmp_get_library
function kmp_set_affinity(mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity
function kmp_get_affinity(mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity
function kmp_get_affinity_max_proc() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_max_proc
end function kmp_get_affinity_max_proc
subroutine kmp_create_affinity_mask(mask) bind(c)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_create_affinity_mask
subroutine kmp_destroy_affinity_mask(mask) bind(c)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_destroy_affinity_mask
function kmp_set_affinity_mask_proc(proc, mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity_mask_proc
function kmp_unset_affinity_mask_proc(proc, mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_unset_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_unset_affinity_mask_proc
function kmp_get_affinity_mask_proc(proc, mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity_mask_proc
function kmp_malloc(size) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_malloc
integer (kind=kmp_size_t_kind), value :: size
end function kmp_malloc
function kmp_aligned_malloc(size, alignment) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_aligned_malloc
integer (kind=kmp_size_t_kind), value :: size
integer (kind=kmp_size_t_kind), value :: alignment
end function kmp_aligned_malloc
function kmp_calloc(nelem, elsize) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_calloc
integer (kind=kmp_size_t_kind), value :: nelem
integer (kind=kmp_size_t_kind), value :: elsize
end function kmp_calloc
function kmp_realloc(ptr, size) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_realloc
integer (kind=kmp_pointer_kind), value :: ptr
integer (kind=kmp_size_t_kind), value :: size
end function kmp_realloc
subroutine kmp_free(ptr) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind), value :: ptr
end subroutine kmp_free
subroutine kmp_set_warnings_on() bind(c)
end subroutine kmp_set_warnings_on
subroutine kmp_set_warnings_off() bind(c)
end subroutine kmp_set_warnings_off
end interface
end module omp_lib


@@ -0,0 +1,649 @@
! include/30/omp_lib.h.var
!
!//===----------------------------------------------------------------------===//
!//
!// The LLVM Compiler Infrastructure
!//
!// This file is dual licensed under the MIT and the University of Illinois Open
!// Source Licenses. See LICENSE.txt for details.
!//
!//===----------------------------------------------------------------------===//
!
!***
!*** Some of the directives for the following routine extend past column 72,
!*** so process this file in 132-column mode.
!***
!dec$ fixedformlinesize:132
integer, parameter :: omp_integer_kind = 4
integer, parameter :: omp_logical_kind = 4
integer, parameter :: omp_real_kind = 4
integer, parameter :: omp_lock_kind = int_ptr_kind()
integer, parameter :: omp_nest_lock_kind = int_ptr_kind()
integer, parameter :: omp_sched_kind = omp_integer_kind
integer, parameter :: kmp_pointer_kind = int_ptr_kind()
integer, parameter :: kmp_size_t_kind = int_ptr_kind()
integer, parameter :: kmp_affinity_mask_kind = int_ptr_kind()
integer(kind=omp_sched_kind), parameter :: omp_sched_static = 1
integer(kind=omp_sched_kind), parameter :: omp_sched_dynamic = 2
integer(kind=omp_sched_kind), parameter :: omp_sched_guided = 3
integer(kind=omp_sched_kind), parameter :: omp_sched_auto = 4
integer (kind=omp_integer_kind), parameter :: kmp_version_major = @LIBOMP_VERSION_MAJOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_minor = @LIBOMP_VERSION_MINOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_build = @LIBOMP_VERSION_BUILD@
character(*) kmp_build_date
parameter( kmp_build_date = '@LIBOMP_BUILD_DATE@' )
integer (kind=omp_integer_kind), parameter :: openmp_version = @LIBOMP_OMP_YEAR_MONTH@
interface
! ***
! *** omp_* entry points
! ***
subroutine omp_set_num_threads(nthreads)
import
integer (kind=omp_integer_kind) nthreads
end subroutine omp_set_num_threads
subroutine omp_set_dynamic(enable)
import
logical (kind=omp_logical_kind) enable
end subroutine omp_set_dynamic
subroutine omp_set_nested(enable)
import
logical (kind=omp_logical_kind) enable
end subroutine omp_set_nested
function omp_get_num_threads()
import
integer (kind=omp_integer_kind) omp_get_num_threads
end function omp_get_num_threads
function omp_get_max_threads()
import
integer (kind=omp_integer_kind) omp_get_max_threads
end function omp_get_max_threads
function omp_get_thread_num()
import
integer (kind=omp_integer_kind) omp_get_thread_num
end function omp_get_thread_num
function omp_get_num_procs()
import
integer (kind=omp_integer_kind) omp_get_num_procs
end function omp_get_num_procs
function omp_in_parallel()
import
logical (kind=omp_logical_kind) omp_in_parallel
end function omp_in_parallel
function omp_in_final()
import
logical (kind=omp_logical_kind) omp_in_final
end function omp_in_final
function omp_get_dynamic()
import
logical (kind=omp_logical_kind) omp_get_dynamic
end function omp_get_dynamic
function omp_get_nested()
import
logical (kind=omp_logical_kind) omp_get_nested
end function omp_get_nested
function omp_get_thread_limit()
import
integer (kind=omp_integer_kind) omp_get_thread_limit
end function omp_get_thread_limit
subroutine omp_set_max_active_levels(max_levels)
import
integer (kind=omp_integer_kind) max_levels
end subroutine omp_set_max_active_levels
function omp_get_max_active_levels()
import
integer (kind=omp_integer_kind) omp_get_max_active_levels
end function omp_get_max_active_levels
function omp_get_level()
import
integer (kind=omp_integer_kind) omp_get_level
end function omp_get_level
function omp_get_active_level()
import
integer (kind=omp_integer_kind) omp_get_active_level
end function omp_get_active_level
function omp_get_ancestor_thread_num(level)
import
integer (kind=omp_integer_kind) level
integer (kind=omp_integer_kind) omp_get_ancestor_thread_num
end function omp_get_ancestor_thread_num
function omp_get_team_size(level)
import
integer (kind=omp_integer_kind) level
integer (kind=omp_integer_kind) omp_get_team_size
end function omp_get_team_size
subroutine omp_set_schedule(kind, modifier)
import
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) modifier
end subroutine omp_set_schedule
subroutine omp_get_schedule(kind, modifier)
import
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) modifier
end subroutine omp_get_schedule
function omp_get_wtime()
double precision omp_get_wtime
end function omp_get_wtime
function omp_get_wtick ()
double precision omp_get_wtick
end function omp_get_wtick
subroutine omp_init_lock(lockvar)
import
integer (kind=omp_lock_kind) lockvar
end subroutine omp_init_lock
subroutine omp_destroy_lock(lockvar)
import
integer (kind=omp_lock_kind) lockvar
end subroutine omp_destroy_lock
subroutine omp_set_lock(lockvar)
import
integer (kind=omp_lock_kind) lockvar
end subroutine omp_set_lock
subroutine omp_unset_lock(lockvar)
import
integer (kind=omp_lock_kind) lockvar
end subroutine omp_unset_lock
function omp_test_lock(lockvar)
import
logical (kind=omp_logical_kind) omp_test_lock
integer (kind=omp_lock_kind) lockvar
end function omp_test_lock
subroutine omp_init_nest_lock(lockvar)
import
integer (kind=omp_nest_lock_kind) lockvar
end subroutine omp_init_nest_lock
subroutine omp_destroy_nest_lock(lockvar)
import
integer (kind=omp_nest_lock_kind) lockvar
end subroutine omp_destroy_nest_lock
subroutine omp_set_nest_lock(lockvar)
import
integer (kind=omp_nest_lock_kind) lockvar
end subroutine omp_set_nest_lock
subroutine omp_unset_nest_lock(lockvar)
import
integer (kind=omp_nest_lock_kind) lockvar
end subroutine omp_unset_nest_lock
function omp_test_nest_lock(lockvar)
import
integer (kind=omp_integer_kind) omp_test_nest_lock
integer (kind=omp_nest_lock_kind) lockvar
end function omp_test_nest_lock
! ***
! *** kmp_* entry points
! ***
subroutine kmp_set_stacksize(size)
import
integer (kind=omp_integer_kind) size
end subroutine kmp_set_stacksize
subroutine kmp_set_stacksize_s(size)
import
integer (kind=kmp_size_t_kind) size
end subroutine kmp_set_stacksize_s
subroutine kmp_set_blocktime(msec)
import
integer (kind=omp_integer_kind) msec
end subroutine kmp_set_blocktime
subroutine kmp_set_library_serial()
end subroutine kmp_set_library_serial
subroutine kmp_set_library_turnaround()
end subroutine kmp_set_library_turnaround
subroutine kmp_set_library_throughput()
end subroutine kmp_set_library_throughput
subroutine kmp_set_library(libnum)
import
integer (kind=omp_integer_kind) libnum
end subroutine kmp_set_library
subroutine kmp_set_defaults(string)
character*(*) string
end subroutine kmp_set_defaults
function kmp_get_stacksize()
import
integer (kind=omp_integer_kind) kmp_get_stacksize
end function kmp_get_stacksize
function kmp_get_stacksize_s()
import
integer (kind=kmp_size_t_kind) kmp_get_stacksize_s
end function kmp_get_stacksize_s
function kmp_get_blocktime()
import
integer (kind=omp_integer_kind) kmp_get_blocktime
end function kmp_get_blocktime
function kmp_get_library()
import
integer (kind=omp_integer_kind) kmp_get_library
end function kmp_get_library
function kmp_set_affinity(mask)
import
integer (kind=omp_integer_kind) kmp_set_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity
function kmp_get_affinity(mask)
import
integer (kind=omp_integer_kind) kmp_get_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity
function kmp_get_affinity_max_proc()
import
integer (kind=omp_integer_kind) kmp_get_affinity_max_proc
end function kmp_get_affinity_max_proc
subroutine kmp_create_affinity_mask(mask)
import
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_create_affinity_mask
subroutine kmp_destroy_affinity_mask(mask)
import
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_destroy_affinity_mask
function kmp_set_affinity_mask_proc(proc, mask)
import
integer (kind=omp_integer_kind) kmp_set_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity_mask_proc
function kmp_unset_affinity_mask_proc(proc, mask)
import
integer (kind=omp_integer_kind) kmp_unset_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_unset_affinity_mask_proc
function kmp_get_affinity_mask_proc(proc, mask)
import
integer (kind=omp_integer_kind) kmp_get_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity_mask_proc
function kmp_malloc(size)
import
integer (kind=kmp_pointer_kind) kmp_malloc
integer (kind=kmp_size_t_kind) size
end function kmp_malloc
function kmp_aligned_malloc(size, alignment)
import
integer (kind=kmp_pointer_kind) kmp_aligned_malloc
integer (kind=kmp_size_t_kind) size
integer (kind=kmp_size_t_kind) alignment
end function kmp_aligned_malloc
function kmp_calloc(nelem, elsize)
import
integer (kind=kmp_pointer_kind) kmp_calloc
integer (kind=kmp_size_t_kind) nelem
integer (kind=kmp_size_t_kind) elsize
end function kmp_calloc
function kmp_realloc(ptr, size)
import
integer (kind=kmp_pointer_kind) kmp_realloc
integer (kind=kmp_pointer_kind) ptr
integer (kind=kmp_size_t_kind) size
end function kmp_realloc
subroutine kmp_free(ptr)
import
integer (kind=kmp_pointer_kind) ptr
end subroutine kmp_free
subroutine kmp_set_warnings_on()
end subroutine kmp_set_warnings_on
subroutine kmp_set_warnings_off()
end subroutine kmp_set_warnings_off
end interface
!dec$ if defined(_WIN32)
!dec$ if defined(_WIN64) .or. defined(_M_AMD64)
!***
!*** The Fortran entry points must be in uppercase, even if the /Qlowercase
!*** option is specified. The alias attribute ensures that the specified
!*** string is used as the entry point.
!***
!*** On the Windows* OS IA-32 architecture, the Fortran entry points have an
!*** underscore prepended. On the Windows* OS Intel(R) 64
!*** architecture, no underscore is prepended.
!***
!dec$ attributes alias:'OMP_SET_NUM_THREADS'::omp_set_num_threads
!dec$ attributes alias:'OMP_SET_DYNAMIC'::omp_set_dynamic
!dec$ attributes alias:'OMP_SET_NESTED'::omp_set_nested
!dec$ attributes alias:'OMP_GET_NUM_THREADS'::omp_get_num_threads
!dec$ attributes alias:'OMP_GET_MAX_THREADS'::omp_get_max_threads
!dec$ attributes alias:'OMP_GET_THREAD_NUM'::omp_get_thread_num
!dec$ attributes alias:'OMP_GET_NUM_PROCS'::omp_get_num_procs
!dec$ attributes alias:'OMP_IN_PARALLEL'::omp_in_parallel
!dec$ attributes alias:'OMP_IN_FINAL'::omp_in_final
!dec$ attributes alias:'OMP_GET_DYNAMIC'::omp_get_dynamic
!dec$ attributes alias:'OMP_GET_NESTED'::omp_get_nested
!dec$ attributes alias:'OMP_GET_THREAD_LIMIT'::omp_get_thread_limit
!dec$ attributes alias:'OMP_SET_MAX_ACTIVE_LEVELS'::omp_set_max_active_levels
!dec$ attributes alias:'OMP_GET_MAX_ACTIVE_LEVELS'::omp_get_max_active_levels
!dec$ attributes alias:'OMP_GET_LEVEL'::omp_get_level
!dec$ attributes alias:'OMP_GET_ACTIVE_LEVEL'::omp_get_active_level
!dec$ attributes alias:'OMP_GET_ANCESTOR_THREAD_NUM'::omp_get_ancestor_thread_num
!dec$ attributes alias:'OMP_GET_TEAM_SIZE'::omp_get_team_size
!dec$ attributes alias:'OMP_SET_SCHEDULE'::omp_set_schedule
!dec$ attributes alias:'OMP_GET_SCHEDULE'::omp_get_schedule
!dec$ attributes alias:'OMP_GET_WTIME'::omp_get_wtime
!dec$ attributes alias:'OMP_GET_WTICK'::omp_get_wtick
!dec$ attributes alias:'omp_init_lock'::omp_init_lock
!dec$ attributes alias:'omp_destroy_lock'::omp_destroy_lock
!dec$ attributes alias:'omp_set_lock'::omp_set_lock
!dec$ attributes alias:'omp_unset_lock'::omp_unset_lock
!dec$ attributes alias:'omp_test_lock'::omp_test_lock
!dec$ attributes alias:'omp_init_nest_lock'::omp_init_nest_lock
!dec$ attributes alias:'omp_destroy_nest_lock'::omp_destroy_nest_lock
!dec$ attributes alias:'omp_set_nest_lock'::omp_set_nest_lock
!dec$ attributes alias:'omp_unset_nest_lock'::omp_unset_nest_lock
!dec$ attributes alias:'omp_test_nest_lock'::omp_test_nest_lock
!dec$ attributes alias:'KMP_SET_STACKSIZE'::kmp_set_stacksize
!dec$ attributes alias:'KMP_SET_STACKSIZE_S'::kmp_set_stacksize_s
!dec$ attributes alias:'KMP_SET_BLOCKTIME'::kmp_set_blocktime
!dec$ attributes alias:'KMP_SET_LIBRARY_SERIAL'::kmp_set_library_serial
!dec$ attributes alias:'KMP_SET_LIBRARY_TURNAROUND'::kmp_set_library_turnaround
!dec$ attributes alias:'KMP_SET_LIBRARY_THROUGHPUT'::kmp_set_library_throughput
!dec$ attributes alias:'KMP_SET_LIBRARY'::kmp_set_library
!dec$ attributes alias:'KMP_SET_DEFAULTS'::kmp_set_defaults
!dec$ attributes alias:'KMP_GET_STACKSIZE'::kmp_get_stacksize
!dec$ attributes alias:'KMP_GET_STACKSIZE_S'::kmp_get_stacksize_s
!dec$ attributes alias:'KMP_GET_BLOCKTIME'::kmp_get_blocktime
!dec$ attributes alias:'KMP_GET_LIBRARY'::kmp_get_library
!dec$ attributes alias:'KMP_SET_AFFINITY'::kmp_set_affinity
!dec$ attributes alias:'KMP_GET_AFFINITY'::kmp_get_affinity
!dec$ attributes alias:'KMP_GET_AFFINITY_MAX_PROC'::kmp_get_affinity_max_proc
!dec$ attributes alias:'KMP_CREATE_AFFINITY_MASK'::kmp_create_affinity_mask
!dec$ attributes alias:'KMP_DESTROY_AFFINITY_MASK'::kmp_destroy_affinity_mask
!dec$ attributes alias:'KMP_SET_AFFINITY_MASK_PROC'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'KMP_UNSET_AFFINITY_MASK_PROC'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'KMP_GET_AFFINITY_MASK_PROC'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'KMP_MALLOC'::kmp_malloc
!dec$ attributes alias:'KMP_ALIGNED_MALLOC'::kmp_aligned_malloc
!dec$ attributes alias:'KMP_CALLOC'::kmp_calloc
!dec$ attributes alias:'KMP_REALLOC'::kmp_realloc
!dec$ attributes alias:'KMP_FREE'::kmp_free
!dec$ attributes alias:'KMP_SET_WARNINGS_ON'::kmp_set_warnings_on
!dec$ attributes alias:'KMP_SET_WARNINGS_OFF'::kmp_set_warnings_off
!dec$ else
!***
!*** On Windows* OS IA-32 architecture, the Fortran entry points have an underscore prepended.
!***
!dec$ attributes alias:'_OMP_SET_NUM_THREADS'::omp_set_num_threads
!dec$ attributes alias:'_OMP_SET_DYNAMIC'::omp_set_dynamic
!dec$ attributes alias:'_OMP_SET_NESTED'::omp_set_nested
!dec$ attributes alias:'_OMP_GET_NUM_THREADS'::omp_get_num_threads
!dec$ attributes alias:'_OMP_GET_MAX_THREADS'::omp_get_max_threads
!dec$ attributes alias:'_OMP_GET_THREAD_NUM'::omp_get_thread_num
!dec$ attributes alias:'_OMP_GET_NUM_PROCS'::omp_get_num_procs
!dec$ attributes alias:'_OMP_IN_PARALLEL'::omp_in_parallel
!dec$ attributes alias:'_OMP_IN_FINAL'::omp_in_final
!dec$ attributes alias:'_OMP_GET_DYNAMIC'::omp_get_dynamic
!dec$ attributes alias:'_OMP_GET_NESTED'::omp_get_nested
!dec$ attributes alias:'_OMP_GET_THREAD_LIMIT'::omp_get_thread_limit
!dec$ attributes alias:'_OMP_SET_MAX_ACTIVE_LEVELS'::omp_set_max_active_levels
!dec$ attributes alias:'_OMP_GET_MAX_ACTIVE_LEVELS'::omp_get_max_active_levels
!dec$ attributes alias:'_OMP_GET_LEVEL'::omp_get_level
!dec$ attributes alias:'_OMP_GET_ACTIVE_LEVEL'::omp_get_active_level
!dec$ attributes alias:'_OMP_GET_ANCESTOR_THREAD_NUM'::omp_get_ancestor_thread_num
!dec$ attributes alias:'_OMP_GET_TEAM_SIZE'::omp_get_team_size
!dec$ attributes alias:'_OMP_SET_SCHEDULE'::omp_set_schedule
!dec$ attributes alias:'_OMP_GET_SCHEDULE'::omp_get_schedule
!dec$ attributes alias:'_OMP_GET_WTIME'::omp_get_wtime
!dec$ attributes alias:'_OMP_GET_WTICK'::omp_get_wtick
!dec$ attributes alias:'_omp_init_lock'::omp_init_lock
!dec$ attributes alias:'_omp_destroy_lock'::omp_destroy_lock
!dec$ attributes alias:'_omp_set_lock'::omp_set_lock
!dec$ attributes alias:'_omp_unset_lock'::omp_unset_lock
!dec$ attributes alias:'_omp_test_lock'::omp_test_lock
!dec$ attributes alias:'_omp_init_nest_lock'::omp_init_nest_lock
!dec$ attributes alias:'_omp_destroy_nest_lock'::omp_destroy_nest_lock
!dec$ attributes alias:'_omp_set_nest_lock'::omp_set_nest_lock
!dec$ attributes alias:'_omp_unset_nest_lock'::omp_unset_nest_lock
!dec$ attributes alias:'_omp_test_nest_lock'::omp_test_nest_lock
!dec$ attributes alias:'_KMP_SET_STACKSIZE'::kmp_set_stacksize
!dec$ attributes alias:'_KMP_SET_STACKSIZE_S'::kmp_set_stacksize_s
!dec$ attributes alias:'_KMP_SET_BLOCKTIME'::kmp_set_blocktime
!dec$ attributes alias:'_KMP_SET_LIBRARY_SERIAL'::kmp_set_library_serial
!dec$ attributes alias:'_KMP_SET_LIBRARY_TURNAROUND'::kmp_set_library_turnaround
!dec$ attributes alias:'_KMP_SET_LIBRARY_THROUGHPUT'::kmp_set_library_throughput
!dec$ attributes alias:'_KMP_SET_LIBRARY'::kmp_set_library
!dec$ attributes alias:'_KMP_SET_DEFAULTS'::kmp_set_defaults
!dec$ attributes alias:'_KMP_GET_STACKSIZE'::kmp_get_stacksize
!dec$ attributes alias:'_KMP_GET_STACKSIZE_S'::kmp_get_stacksize_s
!dec$ attributes alias:'_KMP_GET_BLOCKTIME'::kmp_get_blocktime
!dec$ attributes alias:'_KMP_GET_LIBRARY'::kmp_get_library
!dec$ attributes alias:'_KMP_SET_AFFINITY'::kmp_set_affinity
!dec$ attributes alias:'_KMP_GET_AFFINITY'::kmp_get_affinity
!dec$ attributes alias:'_KMP_GET_AFFINITY_MAX_PROC'::kmp_get_affinity_max_proc
!dec$ attributes alias:'_KMP_CREATE_AFFINITY_MASK'::kmp_create_affinity_mask
!dec$ attributes alias:'_KMP_DESTROY_AFFINITY_MASK'::kmp_destroy_affinity_mask
!dec$ attributes alias:'_KMP_SET_AFFINITY_MASK_PROC'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'_KMP_UNSET_AFFINITY_MASK_PROC'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'_KMP_GET_AFFINITY_MASK_PROC'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'_KMP_MALLOC'::kmp_malloc
!dec$ attributes alias:'_KMP_ALIGNED_MALLOC'::kmp_aligned_malloc
!dec$ attributes alias:'_KMP_CALLOC'::kmp_calloc
!dec$ attributes alias:'_KMP_REALLOC'::kmp_realloc
!dec$ attributes alias:'_KMP_FREE'::kmp_free
!dec$ attributes alias:'_KMP_SET_WARNINGS_ON'::kmp_set_warnings_on
!dec$ attributes alias:'_KMP_SET_WARNINGS_OFF'::kmp_set_warnings_off
!dec$ endif
!dec$ endif
!dec$ if defined(__linux)
!***
!*** The Linux* OS entry points are in lowercase, with an underscore appended.
!***
!dec$ attributes alias:'omp_set_num_threads_'::omp_set_num_threads
!dec$ attributes alias:'omp_set_dynamic_'::omp_set_dynamic
!dec$ attributes alias:'omp_set_nested_'::omp_set_nested
!dec$ attributes alias:'omp_get_num_threads_'::omp_get_num_threads
!dec$ attributes alias:'omp_get_max_threads_'::omp_get_max_threads
!dec$ attributes alias:'omp_get_thread_num_'::omp_get_thread_num
!dec$ attributes alias:'omp_get_num_procs_'::omp_get_num_procs
!dec$ attributes alias:'omp_in_parallel_'::omp_in_parallel
!dec$ attributes alias:'omp_in_final_'::omp_in_final
!dec$ attributes alias:'omp_get_dynamic_'::omp_get_dynamic
!dec$ attributes alias:'omp_get_nested_'::omp_get_nested
!dec$ attributes alias:'omp_get_thread_limit_'::omp_get_thread_limit
!dec$ attributes alias:'omp_set_max_active_levels_'::omp_set_max_active_levels
!dec$ attributes alias:'omp_get_max_active_levels_'::omp_get_max_active_levels
!dec$ attributes alias:'omp_get_level_'::omp_get_level
!dec$ attributes alias:'omp_get_active_level_'::omp_get_active_level
!dec$ attributes alias:'omp_get_ancestor_thread_num_'::omp_get_ancestor_thread_num
!dec$ attributes alias:'omp_get_team_size_'::omp_get_team_size
!dec$ attributes alias:'omp_set_schedule_'::omp_set_schedule
!dec$ attributes alias:'omp_get_schedule_'::omp_get_schedule
!dec$ attributes alias:'omp_get_wtime_'::omp_get_wtime
!dec$ attributes alias:'omp_get_wtick_'::omp_get_wtick
!dec$ attributes alias:'omp_init_lock_'::omp_init_lock
!dec$ attributes alias:'omp_destroy_lock_'::omp_destroy_lock
!dec$ attributes alias:'omp_set_lock_'::omp_set_lock
!dec$ attributes alias:'omp_unset_lock_'::omp_unset_lock
!dec$ attributes alias:'omp_test_lock_'::omp_test_lock
!dec$ attributes alias:'omp_init_nest_lock_'::omp_init_nest_lock
!dec$ attributes alias:'omp_destroy_nest_lock_'::omp_destroy_nest_lock
!dec$ attributes alias:'omp_set_nest_lock_'::omp_set_nest_lock
!dec$ attributes alias:'omp_unset_nest_lock_'::omp_unset_nest_lock
!dec$ attributes alias:'omp_test_nest_lock_'::omp_test_nest_lock
!dec$ attributes alias:'kmp_set_stacksize_'::kmp_set_stacksize
!dec$ attributes alias:'kmp_set_stacksize_s_'::kmp_set_stacksize_s
!dec$ attributes alias:'kmp_set_blocktime_'::kmp_set_blocktime
!dec$ attributes alias:'kmp_set_library_serial_'::kmp_set_library_serial
!dec$ attributes alias:'kmp_set_library_turnaround_'::kmp_set_library_turnaround
!dec$ attributes alias:'kmp_set_library_throughput_'::kmp_set_library_throughput
!dec$ attributes alias:'kmp_set_library_'::kmp_set_library
!dec$ attributes alias:'kmp_set_defaults_'::kmp_set_defaults
!dec$ attributes alias:'kmp_get_stacksize_'::kmp_get_stacksize
!dec$ attributes alias:'kmp_get_stacksize_s_'::kmp_get_stacksize_s
!dec$ attributes alias:'kmp_get_blocktime_'::kmp_get_blocktime
!dec$ attributes alias:'kmp_get_library_'::kmp_get_library
!dec$ attributes alias:'kmp_set_affinity_'::kmp_set_affinity
!dec$ attributes alias:'kmp_get_affinity_'::kmp_get_affinity
!dec$ attributes alias:'kmp_get_affinity_max_proc_'::kmp_get_affinity_max_proc
!dec$ attributes alias:'kmp_create_affinity_mask_'::kmp_create_affinity_mask
!dec$ attributes alias:'kmp_destroy_affinity_mask_'::kmp_destroy_affinity_mask
!dec$ attributes alias:'kmp_set_affinity_mask_proc_'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'kmp_unset_affinity_mask_proc_'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'kmp_get_affinity_mask_proc_'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'kmp_malloc_'::kmp_malloc
!dec$ attributes alias:'kmp_aligned_malloc_'::kmp_aligned_malloc
!dec$ attributes alias:'kmp_calloc_'::kmp_calloc
!dec$ attributes alias:'kmp_realloc_'::kmp_realloc
!dec$ attributes alias:'kmp_free_'::kmp_free
!dec$ attributes alias:'kmp_set_warnings_on_'::kmp_set_warnings_on
!dec$ attributes alias:'kmp_set_warnings_off_'::kmp_set_warnings_off
!dec$ endif
!dec$ if defined(__APPLE__)
!***
!*** The Mac entry points are in lowercase, with both an underscore
!*** appended and an underscore prepended.
!***
!dec$ attributes alias:'_omp_set_num_threads_'::omp_set_num_threads
!dec$ attributes alias:'_omp_set_dynamic_'::omp_set_dynamic
!dec$ attributes alias:'_omp_set_nested_'::omp_set_nested
!dec$ attributes alias:'_omp_get_num_threads_'::omp_get_num_threads
!dec$ attributes alias:'_omp_get_max_threads_'::omp_get_max_threads
!dec$ attributes alias:'_omp_get_thread_num_'::omp_get_thread_num
!dec$ attributes alias:'_omp_get_num_procs_'::omp_get_num_procs
!dec$ attributes alias:'_omp_in_parallel_'::omp_in_parallel
!dec$ attributes alias:'_omp_in_final_'::omp_in_final
!dec$ attributes alias:'_omp_get_dynamic_'::omp_get_dynamic
!dec$ attributes alias:'_omp_get_nested_'::omp_get_nested
!dec$ attributes alias:'_omp_get_thread_limit_'::omp_get_thread_limit
!dec$ attributes alias:'_omp_set_max_active_levels_'::omp_set_max_active_levels
!dec$ attributes alias:'_omp_get_max_active_levels_'::omp_get_max_active_levels
!dec$ attributes alias:'_omp_get_level_'::omp_get_level
!dec$ attributes alias:'_omp_get_active_level_'::omp_get_active_level
!dec$ attributes alias:'_omp_get_ancestor_thread_num_'::omp_get_ancestor_thread_num
!dec$ attributes alias:'_omp_get_team_size_'::omp_get_team_size
!dec$ attributes alias:'_omp_set_schedule_'::omp_set_schedule
!dec$ attributes alias:'_omp_get_schedule_'::omp_get_schedule
!dec$ attributes alias:'_omp_get_wtime_'::omp_get_wtime
!dec$ attributes alias:'_omp_get_wtick_'::omp_get_wtick
!dec$ attributes alias:'_omp_init_lock_'::omp_init_lock
!dec$ attributes alias:'_omp_destroy_lock_'::omp_destroy_lock
!dec$ attributes alias:'_omp_set_lock_'::omp_set_lock
!dec$ attributes alias:'_omp_unset_lock_'::omp_unset_lock
!dec$ attributes alias:'_omp_test_lock_'::omp_test_lock
!dec$ attributes alias:'_omp_init_nest_lock_'::omp_init_nest_lock
!dec$ attributes alias:'_omp_destroy_nest_lock_'::omp_destroy_nest_lock
!dec$ attributes alias:'_omp_set_nest_lock_'::omp_set_nest_lock
!dec$ attributes alias:'_omp_unset_nest_lock_'::omp_unset_nest_lock
!dec$ attributes alias:'_omp_test_nest_lock_'::omp_test_nest_lock
!dec$ attributes alias:'_kmp_set_stacksize_'::kmp_set_stacksize
!dec$ attributes alias:'_kmp_set_stacksize_s_'::kmp_set_stacksize_s
!dec$ attributes alias:'_kmp_set_blocktime_'::kmp_set_blocktime
!dec$ attributes alias:'_kmp_set_library_serial_'::kmp_set_library_serial
!dec$ attributes alias:'_kmp_set_library_turnaround_'::kmp_set_library_turnaround
!dec$ attributes alias:'_kmp_set_library_throughput_'::kmp_set_library_throughput
!dec$ attributes alias:'_kmp_set_library_'::kmp_set_library
!dec$ attributes alias:'_kmp_set_defaults_'::kmp_set_defaults
!dec$ attributes alias:'_kmp_get_stacksize_'::kmp_get_stacksize
!dec$ attributes alias:'_kmp_get_stacksize_s_'::kmp_get_stacksize_s
!dec$ attributes alias:'_kmp_get_blocktime_'::kmp_get_blocktime
!dec$ attributes alias:'_kmp_get_library_'::kmp_get_library
!dec$ attributes alias:'_kmp_set_affinity_'::kmp_set_affinity
!dec$ attributes alias:'_kmp_get_affinity_'::kmp_get_affinity
!dec$ attributes alias:'_kmp_get_affinity_max_proc_'::kmp_get_affinity_max_proc
!dec$ attributes alias:'_kmp_create_affinity_mask_'::kmp_create_affinity_mask
!dec$ attributes alias:'_kmp_destroy_affinity_mask_'::kmp_destroy_affinity_mask
!dec$ attributes alias:'_kmp_set_affinity_mask_proc_'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'_kmp_unset_affinity_mask_proc_'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'_kmp_get_affinity_mask_proc_'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'_kmp_malloc_'::kmp_malloc
!dec$ attributes alias:'_kmp_aligned_malloc_'::kmp_aligned_malloc
!dec$ attributes alias:'_kmp_calloc_'::kmp_calloc
!dec$ attributes alias:'_kmp_realloc_'::kmp_realloc
!dec$ attributes alias:'_kmp_free_'::kmp_free
!dec$ attributes alias:'_kmp_set_warnings_on_'::kmp_set_warnings_on
!dec$ attributes alias:'_kmp_set_warnings_off_'::kmp_set_warnings_off
!dec$ endif


@@ -0,0 +1,161 @@
/*
* include/40/omp.h.var
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef __OMP_H
# define __OMP_H
# define KMP_VERSION_MAJOR @LIBOMP_VERSION_MAJOR@
# define KMP_VERSION_MINOR @LIBOMP_VERSION_MINOR@
# define KMP_VERSION_BUILD @LIBOMP_VERSION_BUILD@
# define KMP_BUILD_DATE "@LIBOMP_BUILD_DATE@"
# ifdef __cplusplus
extern "C" {
# endif
# if defined(_WIN32)
# define __KAI_KMPC_CONVENTION __cdecl
# else
# define __KAI_KMPC_CONVENTION
# endif
/* schedule kind constants */
typedef enum omp_sched_t {
omp_sched_static = 1,
omp_sched_dynamic = 2,
omp_sched_guided = 3,
omp_sched_auto = 4
} omp_sched_t;
/* set API functions */
extern void __KAI_KMPC_CONVENTION omp_set_num_threads (int);
extern void __KAI_KMPC_CONVENTION omp_set_dynamic (int);
extern void __KAI_KMPC_CONVENTION omp_set_nested (int);
extern void __KAI_KMPC_CONVENTION omp_set_max_active_levels (int);
extern void __KAI_KMPC_CONVENTION omp_set_schedule (omp_sched_t, int);
/* query API functions */
extern int __KAI_KMPC_CONVENTION omp_get_num_threads (void);
extern int __KAI_KMPC_CONVENTION omp_get_dynamic (void);
extern int __KAI_KMPC_CONVENTION omp_get_nested (void);
extern int __KAI_KMPC_CONVENTION omp_get_max_threads (void);
extern int __KAI_KMPC_CONVENTION omp_get_thread_num (void);
extern int __KAI_KMPC_CONVENTION omp_get_num_procs (void);
extern int __KAI_KMPC_CONVENTION omp_in_parallel (void);
extern int __KAI_KMPC_CONVENTION omp_in_final (void);
extern int __KAI_KMPC_CONVENTION omp_get_active_level (void);
extern int __KAI_KMPC_CONVENTION omp_get_level (void);
extern int __KAI_KMPC_CONVENTION omp_get_ancestor_thread_num (int);
extern int __KAI_KMPC_CONVENTION omp_get_team_size (int);
extern int __KAI_KMPC_CONVENTION omp_get_thread_limit (void);
extern int __KAI_KMPC_CONVENTION omp_get_max_active_levels (void);
extern void __KAI_KMPC_CONVENTION omp_get_schedule (omp_sched_t *, int *);
/* lock API functions */
typedef struct omp_lock_t {
void * _lk;
} omp_lock_t;
extern void __KAI_KMPC_CONVENTION omp_init_lock (omp_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_set_lock (omp_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_unset_lock (omp_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_destroy_lock (omp_lock_t *);
extern int __KAI_KMPC_CONVENTION omp_test_lock (omp_lock_t *);
/* nested lock API functions */
typedef struct omp_nest_lock_t {
void * _lk;
} omp_nest_lock_t;
extern void __KAI_KMPC_CONVENTION omp_init_nest_lock (omp_nest_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_set_nest_lock (omp_nest_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_unset_nest_lock (omp_nest_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_destroy_nest_lock (omp_nest_lock_t *);
extern int __KAI_KMPC_CONVENTION omp_test_nest_lock (omp_nest_lock_t *);
/* time API functions */
extern double __KAI_KMPC_CONVENTION omp_get_wtime (void);
extern double __KAI_KMPC_CONVENTION omp_get_wtick (void);
/* OpenMP 4.0 */
extern int __KAI_KMPC_CONVENTION omp_get_default_device (void);
extern void __KAI_KMPC_CONVENTION omp_set_default_device (int);
extern int __KAI_KMPC_CONVENTION omp_is_initial_device (void);
extern int __KAI_KMPC_CONVENTION omp_get_num_devices (void);
extern int __KAI_KMPC_CONVENTION omp_get_num_teams (void);
extern int __KAI_KMPC_CONVENTION omp_get_team_num (void);
extern int __KAI_KMPC_CONVENTION omp_get_cancellation (void);
# include <stdlib.h>
/* kmp API functions */
extern int __KAI_KMPC_CONVENTION kmp_get_stacksize (void);
extern void __KAI_KMPC_CONVENTION kmp_set_stacksize (int);
extern size_t __KAI_KMPC_CONVENTION kmp_get_stacksize_s (void);
extern void __KAI_KMPC_CONVENTION kmp_set_stacksize_s (size_t);
extern int __KAI_KMPC_CONVENTION kmp_get_blocktime (void);
extern int __KAI_KMPC_CONVENTION kmp_get_library (void);
extern void __KAI_KMPC_CONVENTION kmp_set_blocktime (int);
extern void __KAI_KMPC_CONVENTION kmp_set_library (int);
extern void __KAI_KMPC_CONVENTION kmp_set_library_serial (void);
extern void __KAI_KMPC_CONVENTION kmp_set_library_turnaround (void);
extern void __KAI_KMPC_CONVENTION kmp_set_library_throughput (void);
extern void __KAI_KMPC_CONVENTION kmp_set_defaults (char const *);
/* Intel affinity API */
typedef void * kmp_affinity_mask_t;
extern int __KAI_KMPC_CONVENTION kmp_set_affinity (kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_get_affinity (kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_get_affinity_max_proc (void);
extern void __KAI_KMPC_CONVENTION kmp_create_affinity_mask (kmp_affinity_mask_t *);
extern void __KAI_KMPC_CONVENTION kmp_destroy_affinity_mask (kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_set_affinity_mask_proc (int, kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_unset_affinity_mask_proc (int, kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_get_affinity_mask_proc (int, kmp_affinity_mask_t *);
/* OpenMP 4.0 affinity API */
typedef enum omp_proc_bind_t {
omp_proc_bind_false = 0,
omp_proc_bind_true = 1,
omp_proc_bind_master = 2,
omp_proc_bind_close = 3,
omp_proc_bind_spread = 4
} omp_proc_bind_t;
extern omp_proc_bind_t __KAI_KMPC_CONVENTION omp_get_proc_bind (void);
extern void * __KAI_KMPC_CONVENTION kmp_malloc (size_t);
extern void * __KAI_KMPC_CONVENTION kmp_aligned_malloc (size_t, size_t);
extern void * __KAI_KMPC_CONVENTION kmp_calloc (size_t, size_t);
extern void * __KAI_KMPC_CONVENTION kmp_realloc (void *, size_t);
extern void __KAI_KMPC_CONVENTION kmp_free (void *);
extern void __KAI_KMPC_CONVENTION kmp_set_warnings_on(void);
extern void __KAI_KMPC_CONVENTION kmp_set_warnings_off(void);
# undef __KAI_KMPC_CONVENTION
/* Warning:
   The following typedefs are not standard; they are deprecated and will be removed in a future release.
*/
typedef int omp_int_t;
typedef double omp_wtime_t;
# ifdef __cplusplus
}
# endif
#endif /* __OMP_H */
@@ -0,0 +1,774 @@
! include/40/omp_lib.f.var
!
!//===----------------------------------------------------------------------===//
!//
!// The LLVM Compiler Infrastructure
!//
!// This file is dual licensed under the MIT and the University of Illinois Open
!// Source Licenses. See LICENSE.txt for details.
!//
!//===----------------------------------------------------------------------===//
!
!***
!*** Some of the directives for the following routine extend past column 72,
!*** so process this file in 132-column mode.
!***
!dec$ fixedformlinesize:132
module omp_lib_kinds
integer, parameter :: omp_integer_kind = 4
integer, parameter :: omp_logical_kind = 4
integer, parameter :: omp_real_kind = 4
integer, parameter :: omp_lock_kind = int_ptr_kind()
integer, parameter :: omp_nest_lock_kind = int_ptr_kind()
integer, parameter :: omp_sched_kind = omp_integer_kind
integer, parameter :: omp_proc_bind_kind = omp_integer_kind
integer, parameter :: kmp_pointer_kind = int_ptr_kind()
integer, parameter :: kmp_size_t_kind = int_ptr_kind()
integer, parameter :: kmp_affinity_mask_kind = int_ptr_kind()
integer, parameter :: kmp_cancel_kind = omp_integer_kind
end module omp_lib_kinds
module omp_lib
use omp_lib_kinds
integer (kind=omp_integer_kind), parameter :: kmp_version_major = @LIBOMP_VERSION_MAJOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_minor = @LIBOMP_VERSION_MINOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_build = @LIBOMP_VERSION_BUILD@
character(*), parameter :: kmp_build_date = '@LIBOMP_BUILD_DATE@'
integer (kind=omp_integer_kind), parameter :: openmp_version = @LIBOMP_OMP_YEAR_MONTH@
integer(kind=omp_sched_kind), parameter :: omp_sched_static = 1
integer(kind=omp_sched_kind), parameter :: omp_sched_dynamic = 2
integer(kind=omp_sched_kind), parameter :: omp_sched_guided = 3
integer(kind=omp_sched_kind), parameter :: omp_sched_auto = 4
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_false = 0
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_true = 1
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_master = 2
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_close = 3
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_spread = 4
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_parallel = 1
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_loop = 2
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_sections = 3
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_taskgroup = 4
interface
! ***
! *** omp_* entry points
! ***
subroutine omp_set_num_threads(num_threads)
use omp_lib_kinds
integer (kind=omp_integer_kind) num_threads
end subroutine omp_set_num_threads
subroutine omp_set_dynamic(dynamic_threads)
use omp_lib_kinds
logical (kind=omp_logical_kind) dynamic_threads
end subroutine omp_set_dynamic
subroutine omp_set_nested(nested)
use omp_lib_kinds
logical (kind=omp_logical_kind) nested
end subroutine omp_set_nested
function omp_get_num_threads()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_threads
end function omp_get_num_threads
function omp_get_max_threads()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_threads
end function omp_get_max_threads
function omp_get_thread_num()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_num
end function omp_get_thread_num
function omp_get_num_procs()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_procs
end function omp_get_num_procs
function omp_in_parallel()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_parallel
end function omp_in_parallel
function omp_in_final()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_final
end function omp_in_final
function omp_get_dynamic()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_dynamic
end function omp_get_dynamic
function omp_get_nested()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_nested
end function omp_get_nested
function omp_get_thread_limit()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_limit
end function omp_get_thread_limit
subroutine omp_set_max_active_levels(max_levels)
use omp_lib_kinds
integer (kind=omp_integer_kind) max_levels
end subroutine omp_set_max_active_levels
function omp_get_max_active_levels()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_active_levels
end function omp_get_max_active_levels
function omp_get_level()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_level
end function omp_get_level
function omp_get_active_level()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_active_level
end function omp_get_active_level
function omp_get_ancestor_thread_num(level)
use omp_lib_kinds
integer (kind=omp_integer_kind) level
integer (kind=omp_integer_kind) omp_get_ancestor_thread_num
end function omp_get_ancestor_thread_num
function omp_get_team_size(level)
use omp_lib_kinds
integer (kind=omp_integer_kind) level
integer (kind=omp_integer_kind) omp_get_team_size
end function omp_get_team_size
subroutine omp_set_schedule(kind, chunk_size)
use omp_lib_kinds
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) chunk_size
end subroutine omp_set_schedule
subroutine omp_get_schedule(kind, chunk_size)
use omp_lib_kinds
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) chunk_size
end subroutine omp_get_schedule
function omp_get_proc_bind()
use omp_lib_kinds
integer (kind=omp_proc_bind_kind) omp_get_proc_bind
end function omp_get_proc_bind
function omp_get_wtime()
double precision omp_get_wtime
end function omp_get_wtime
function omp_get_wtick ()
double precision omp_get_wtick
end function omp_get_wtick
function omp_get_default_device()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_default_device
end function omp_get_default_device
subroutine omp_set_default_device(device_num)
use omp_lib_kinds
integer (kind=omp_integer_kind) device_num
end subroutine omp_set_default_device
function omp_get_num_devices()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_devices
end function omp_get_num_devices
function omp_get_num_teams()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_teams
end function omp_get_num_teams
function omp_get_team_num()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_team_num
end function omp_get_team_num
function omp_get_cancellation()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_cancellation
end function omp_get_cancellation
function omp_is_initial_device()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_is_initial_device
end function omp_is_initial_device
subroutine omp_init_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_init_lock
subroutine omp_destroy_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_destroy_lock
subroutine omp_set_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_set_lock
subroutine omp_unset_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_unset_lock
function omp_test_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_lock
!DIR$ ENDIF
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_test_lock
integer (kind=omp_lock_kind) svar
end function omp_test_lock
subroutine omp_init_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_init_nest_lock
subroutine omp_destroy_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_destroy_nest_lock
subroutine omp_set_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_set_nest_lock
subroutine omp_unset_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_unset_nest_lock
function omp_test_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_test_nest_lock
integer (kind=omp_nest_lock_kind) nvar
end function omp_test_nest_lock
! ***
! *** kmp_* entry points
! ***
subroutine kmp_set_stacksize(size)
use omp_lib_kinds
integer (kind=omp_integer_kind) size
end subroutine kmp_set_stacksize
subroutine kmp_set_stacksize_s(size)
use omp_lib_kinds
integer (kind=kmp_size_t_kind) size
end subroutine kmp_set_stacksize_s
subroutine kmp_set_blocktime(msec)
use omp_lib_kinds
integer (kind=omp_integer_kind) msec
end subroutine kmp_set_blocktime
subroutine kmp_set_library_serial()
end subroutine kmp_set_library_serial
subroutine kmp_set_library_turnaround()
end subroutine kmp_set_library_turnaround
subroutine kmp_set_library_throughput()
end subroutine kmp_set_library_throughput
subroutine kmp_set_library(libnum)
use omp_lib_kinds
integer (kind=omp_integer_kind) libnum
end subroutine kmp_set_library
subroutine kmp_set_defaults(string)
character*(*) string
end subroutine kmp_set_defaults
function kmp_get_stacksize()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_stacksize
end function kmp_get_stacksize
function kmp_get_stacksize_s()
use omp_lib_kinds
integer (kind=kmp_size_t_kind) kmp_get_stacksize_s
end function kmp_get_stacksize_s
function kmp_get_blocktime()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_blocktime
end function kmp_get_blocktime
function kmp_get_library()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_library
end function kmp_get_library
function kmp_set_affinity(mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity
function kmp_get_affinity(mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity
function kmp_get_affinity_max_proc()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_max_proc
end function kmp_get_affinity_max_proc
subroutine kmp_create_affinity_mask(mask)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_create_affinity_mask
subroutine kmp_destroy_affinity_mask(mask)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_destroy_affinity_mask
function kmp_set_affinity_mask_proc(proc, mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity_mask_proc
function kmp_unset_affinity_mask_proc(proc, mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_unset_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_unset_affinity_mask_proc
function kmp_get_affinity_mask_proc(proc, mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity_mask_proc
function kmp_malloc(size)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_malloc
integer (kind=kmp_size_t_kind) size
end function kmp_malloc
function kmp_aligned_malloc(size, alignment)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_aligned_malloc
integer (kind=kmp_size_t_kind) size
integer (kind=kmp_size_t_kind) alignment
end function kmp_aligned_malloc
function kmp_calloc(nelem, elsize)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_calloc
integer (kind=kmp_size_t_kind) nelem
integer (kind=kmp_size_t_kind) elsize
end function kmp_calloc
function kmp_realloc(ptr, size)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_realloc
integer (kind=kmp_pointer_kind) ptr
integer (kind=kmp_size_t_kind) size
end function kmp_realloc
subroutine kmp_free(ptr)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) ptr
end subroutine kmp_free
subroutine kmp_set_warnings_on()
end subroutine kmp_set_warnings_on
subroutine kmp_set_warnings_off()
end subroutine kmp_set_warnings_off
function kmp_get_cancellation_status(cancelkind)
use omp_lib_kinds
integer (kind=kmp_cancel_kind) cancelkind
logical (kind=omp_logical_kind) kmp_get_cancellation_status
end function kmp_get_cancellation_status
end interface
!dec$ if defined(_WIN32)
!dec$ if defined(_WIN64) .or. defined(_M_AMD64)
!***
!*** The Fortran entry points must be in uppercase, even if the /Qlowercase
!*** option is specified. The alias attribute ensures that the specified
!*** string is used as the entry point.
!***
!*** On the Windows* OS IA-32 architecture, the Fortran entry points have an
!*** underscore prepended. On the Windows* OS Intel(R) 64
!*** architecture, no underscore is prepended.
!***
!dec$ attributes alias:'OMP_SET_NUM_THREADS' :: omp_set_num_threads
!dec$ attributes alias:'OMP_SET_DYNAMIC' :: omp_set_dynamic
!dec$ attributes alias:'OMP_SET_NESTED' :: omp_set_nested
!dec$ attributes alias:'OMP_GET_NUM_THREADS' :: omp_get_num_threads
!dec$ attributes alias:'OMP_GET_MAX_THREADS' :: omp_get_max_threads
!dec$ attributes alias:'OMP_GET_THREAD_NUM' :: omp_get_thread_num
!dec$ attributes alias:'OMP_GET_NUM_PROCS' :: omp_get_num_procs
!dec$ attributes alias:'OMP_IN_PARALLEL' :: omp_in_parallel
!dec$ attributes alias:'OMP_GET_DYNAMIC' :: omp_get_dynamic
!dec$ attributes alias:'OMP_GET_NESTED' :: omp_get_nested
!dec$ attributes alias:'OMP_GET_THREAD_LIMIT' :: omp_get_thread_limit
!dec$ attributes alias:'OMP_SET_MAX_ACTIVE_LEVELS' :: omp_set_max_active_levels
!dec$ attributes alias:'OMP_GET_MAX_ACTIVE_LEVELS' :: omp_get_max_active_levels
!dec$ attributes alias:'OMP_GET_LEVEL' :: omp_get_level
!dec$ attributes alias:'OMP_GET_ACTIVE_LEVEL' :: omp_get_active_level
!dec$ attributes alias:'OMP_GET_ANCESTOR_THREAD_NUM' :: omp_get_ancestor_thread_num
!dec$ attributes alias:'OMP_GET_TEAM_SIZE' :: omp_get_team_size
!dec$ attributes alias:'OMP_SET_SCHEDULE' :: omp_set_schedule
!dec$ attributes alias:'OMP_GET_SCHEDULE' :: omp_get_schedule
!dec$ attributes alias:'OMP_GET_PROC_BIND' :: omp_get_proc_bind
!dec$ attributes alias:'OMP_GET_WTIME' :: omp_get_wtime
!dec$ attributes alias:'OMP_GET_WTICK' :: omp_get_wtick
!dec$ attributes alias:'OMP_GET_DEFAULT_DEVICE' :: omp_get_default_device
!dec$ attributes alias:'OMP_SET_DEFAULT_DEVICE' :: omp_set_default_device
!dec$ attributes alias:'OMP_GET_NUM_DEVICES' :: omp_get_num_devices
!dec$ attributes alias:'OMP_GET_NUM_TEAMS' :: omp_get_num_teams
!dec$ attributes alias:'OMP_GET_TEAM_NUM' :: omp_get_team_num
!dec$ attributes alias:'OMP_GET_CANCELLATION' :: omp_get_cancellation
!dec$ attributes alias:'OMP_IS_INITIAL_DEVICE' :: omp_is_initial_device
!dec$ attributes alias:'omp_init_lock' :: omp_init_lock
!dec$ attributes alias:'omp_destroy_lock' :: omp_destroy_lock
!dec$ attributes alias:'omp_set_lock' :: omp_set_lock
!dec$ attributes alias:'omp_unset_lock' :: omp_unset_lock
!dec$ attributes alias:'omp_test_lock' :: omp_test_lock
!dec$ attributes alias:'omp_init_nest_lock' :: omp_init_nest_lock
!dec$ attributes alias:'omp_destroy_nest_lock' :: omp_destroy_nest_lock
!dec$ attributes alias:'omp_set_nest_lock' :: omp_set_nest_lock
!dec$ attributes alias:'omp_unset_nest_lock' :: omp_unset_nest_lock
!dec$ attributes alias:'omp_test_nest_lock' :: omp_test_nest_lock
!dec$ attributes alias:'KMP_SET_STACKSIZE'::kmp_set_stacksize
!dec$ attributes alias:'KMP_SET_STACKSIZE_S'::kmp_set_stacksize_s
!dec$ attributes alias:'KMP_SET_BLOCKTIME'::kmp_set_blocktime
!dec$ attributes alias:'KMP_SET_LIBRARY_SERIAL'::kmp_set_library_serial
!dec$ attributes alias:'KMP_SET_LIBRARY_TURNAROUND'::kmp_set_library_turnaround
!dec$ attributes alias:'KMP_SET_LIBRARY_THROUGHPUT'::kmp_set_library_throughput
!dec$ attributes alias:'KMP_SET_LIBRARY'::kmp_set_library
!dec$ attributes alias:'KMP_GET_STACKSIZE'::kmp_get_stacksize
!dec$ attributes alias:'KMP_GET_STACKSIZE_S'::kmp_get_stacksize_s
!dec$ attributes alias:'KMP_GET_BLOCKTIME'::kmp_get_blocktime
!dec$ attributes alias:'KMP_GET_LIBRARY'::kmp_get_library
!dec$ attributes alias:'KMP_SET_AFFINITY'::kmp_set_affinity
!dec$ attributes alias:'KMP_GET_AFFINITY'::kmp_get_affinity
!dec$ attributes alias:'KMP_GET_AFFINITY_MAX_PROC'::kmp_get_affinity_max_proc
!dec$ attributes alias:'KMP_CREATE_AFFINITY_MASK'::kmp_create_affinity_mask
!dec$ attributes alias:'KMP_DESTROY_AFFINITY_MASK'::kmp_destroy_affinity_mask
!dec$ attributes alias:'KMP_SET_AFFINITY_MASK_PROC'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'KMP_UNSET_AFFINITY_MASK_PROC'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'KMP_GET_AFFINITY_MASK_PROC'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'KMP_MALLOC'::kmp_malloc
!dec$ attributes alias:'KMP_ALIGNED_MALLOC'::kmp_aligned_malloc
!dec$ attributes alias:'KMP_CALLOC'::kmp_calloc
!dec$ attributes alias:'KMP_REALLOC'::kmp_realloc
!dec$ attributes alias:'KMP_FREE'::kmp_free
!dec$ attributes alias:'KMP_SET_WARNINGS_ON'::kmp_set_warnings_on
!dec$ attributes alias:'KMP_SET_WARNINGS_OFF'::kmp_set_warnings_off
!dec$ attributes alias:'KMP_GET_CANCELLATION_STATUS' :: kmp_get_cancellation_status
!dec$ else
!***
!*** On Windows* OS IA-32 architecture, the Fortran entry points have an underscore prepended.
!***
!dec$ attributes alias:'_OMP_SET_NUM_THREADS' :: omp_set_num_threads
!dec$ attributes alias:'_OMP_SET_DYNAMIC' :: omp_set_dynamic
!dec$ attributes alias:'_OMP_SET_NESTED' :: omp_set_nested
!dec$ attributes alias:'_OMP_GET_NUM_THREADS' :: omp_get_num_threads
!dec$ attributes alias:'_OMP_GET_MAX_THREADS' :: omp_get_max_threads
!dec$ attributes alias:'_OMP_GET_THREAD_NUM' :: omp_get_thread_num
!dec$ attributes alias:'_OMP_GET_NUM_PROCS' :: omp_get_num_procs
!dec$ attributes alias:'_OMP_IN_PARALLEL' :: omp_in_parallel
!dec$ attributes alias:'_OMP_GET_DYNAMIC' :: omp_get_dynamic
!dec$ attributes alias:'_OMP_GET_NESTED' :: omp_get_nested
!dec$ attributes alias:'_OMP_GET_THREAD_LIMIT' :: omp_get_thread_limit
!dec$ attributes alias:'_OMP_SET_MAX_ACTIVE_LEVELS' :: omp_set_max_active_levels
!dec$ attributes alias:'_OMP_GET_MAX_ACTIVE_LEVELS' :: omp_get_max_active_levels
!dec$ attributes alias:'_OMP_GET_LEVEL' :: omp_get_level
!dec$ attributes alias:'_OMP_GET_ACTIVE_LEVEL' :: omp_get_active_level
!dec$ attributes alias:'_OMP_GET_ANCESTOR_THREAD_NUM' :: omp_get_ancestor_thread_num
!dec$ attributes alias:'_OMP_GET_TEAM_SIZE' :: omp_get_team_size
!dec$ attributes alias:'_OMP_SET_SCHEDULE' :: omp_set_schedule
!dec$ attributes alias:'_OMP_GET_SCHEDULE' :: omp_get_schedule
!dec$ attributes alias:'_OMP_GET_PROC_BIND' :: omp_get_proc_bind
!dec$ attributes alias:'_OMP_GET_WTIME' :: omp_get_wtime
!dec$ attributes alias:'_OMP_GET_WTICK' :: omp_get_wtick
!dec$ attributes alias:'_OMP_GET_DEFAULT_DEVICE' :: omp_get_default_device
!dec$ attributes alias:'_OMP_SET_DEFAULT_DEVICE' :: omp_set_default_device
!dec$ attributes alias:'_OMP_GET_NUM_DEVICES' :: omp_get_num_devices
!dec$ attributes alias:'_OMP_GET_NUM_TEAMS' :: omp_get_num_teams
!dec$ attributes alias:'_OMP_GET_TEAM_NUM' :: omp_get_team_num
!dec$ attributes alias:'_OMP_GET_CANCELLATION' :: omp_get_cancellation
!dec$ attributes alias:'_OMP_IS_INITIAL_DEVICE' :: omp_is_initial_device
!dec$ attributes alias:'_omp_init_lock' :: omp_init_lock
!dec$ attributes alias:'_omp_destroy_lock' :: omp_destroy_lock
!dec$ attributes alias:'_omp_set_lock' :: omp_set_lock
!dec$ attributes alias:'_omp_unset_lock' :: omp_unset_lock
!dec$ attributes alias:'_omp_test_lock' :: omp_test_lock
!dec$ attributes alias:'_omp_init_nest_lock' :: omp_init_nest_lock
!dec$ attributes alias:'_omp_destroy_nest_lock' :: omp_destroy_nest_lock
!dec$ attributes alias:'_omp_set_nest_lock' :: omp_set_nest_lock
!dec$ attributes alias:'_omp_unset_nest_lock' :: omp_unset_nest_lock
!dec$ attributes alias:'_omp_test_nest_lock' :: omp_test_nest_lock
!dec$ attributes alias:'_KMP_SET_STACKSIZE'::kmp_set_stacksize
!dec$ attributes alias:'_KMP_SET_STACKSIZE_S'::kmp_set_stacksize_s
!dec$ attributes alias:'_KMP_SET_BLOCKTIME'::kmp_set_blocktime
!dec$ attributes alias:'_KMP_SET_LIBRARY_SERIAL'::kmp_set_library_serial
!dec$ attributes alias:'_KMP_SET_LIBRARY_TURNAROUND'::kmp_set_library_turnaround
!dec$ attributes alias:'_KMP_SET_LIBRARY_THROUGHPUT'::kmp_set_library_throughput
!dec$ attributes alias:'_KMP_SET_LIBRARY'::kmp_set_library
!dec$ attributes alias:'_KMP_GET_STACKSIZE'::kmp_get_stacksize
!dec$ attributes alias:'_KMP_GET_STACKSIZE_S'::kmp_get_stacksize_s
!dec$ attributes alias:'_KMP_GET_BLOCKTIME'::kmp_get_blocktime
!dec$ attributes alias:'_KMP_GET_LIBRARY'::kmp_get_library
!dec$ attributes alias:'_KMP_SET_AFFINITY'::kmp_set_affinity
!dec$ attributes alias:'_KMP_GET_AFFINITY'::kmp_get_affinity
!dec$ attributes alias:'_KMP_GET_AFFINITY_MAX_PROC'::kmp_get_affinity_max_proc
!dec$ attributes alias:'_KMP_CREATE_AFFINITY_MASK'::kmp_create_affinity_mask
!dec$ attributes alias:'_KMP_DESTROY_AFFINITY_MASK'::kmp_destroy_affinity_mask
!dec$ attributes alias:'_KMP_SET_AFFINITY_MASK_PROC'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'_KMP_UNSET_AFFINITY_MASK_PROC'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'_KMP_GET_AFFINITY_MASK_PROC'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'_KMP_MALLOC'::kmp_malloc
!dec$ attributes alias:'_KMP_ALIGNED_MALLOC'::kmp_aligned_malloc
!dec$ attributes alias:'_KMP_CALLOC'::kmp_calloc
!dec$ attributes alias:'_KMP_REALLOC'::kmp_realloc
!dec$ attributes alias:'_KMP_FREE'::kmp_free
!dec$ attributes alias:'_KMP_SET_WARNINGS_ON'::kmp_set_warnings_on
!dec$ attributes alias:'_KMP_SET_WARNINGS_OFF'::kmp_set_warnings_off
!dec$ attributes alias:'_KMP_GET_CANCELLATION_STATUS' :: kmp_get_cancellation_status
!dec$ endif
!dec$ endif
!dec$ if defined(__linux)
!***
!*** The Linux* OS entry points are in lowercase, with an underscore appended.
!***
!dec$ attributes alias:'omp_set_num_threads_'::omp_set_num_threads
!dec$ attributes alias:'omp_set_dynamic_'::omp_set_dynamic
!dec$ attributes alias:'omp_set_nested_'::omp_set_nested
!dec$ attributes alias:'omp_get_num_threads_'::omp_get_num_threads
!dec$ attributes alias:'omp_get_max_threads_'::omp_get_max_threads
!dec$ attributes alias:'omp_get_thread_num_'::omp_get_thread_num
!dec$ attributes alias:'omp_get_num_procs_'::omp_get_num_procs
!dec$ attributes alias:'omp_in_parallel_'::omp_in_parallel
!dec$ attributes alias:'omp_get_dynamic_'::omp_get_dynamic
!dec$ attributes alias:'omp_get_nested_'::omp_get_nested
!dec$ attributes alias:'omp_get_thread_limit_'::omp_get_thread_limit
!dec$ attributes alias:'omp_set_max_active_levels_'::omp_set_max_active_levels
!dec$ attributes alias:'omp_get_max_active_levels_'::omp_get_max_active_levels
!dec$ attributes alias:'omp_get_level_'::omp_get_level
!dec$ attributes alias:'omp_get_active_level_'::omp_get_active_level
!dec$ attributes alias:'omp_get_ancestor_thread_num_'::omp_get_ancestor_thread_num
!dec$ attributes alias:'omp_get_team_size_'::omp_get_team_size
!dec$ attributes alias:'omp_set_schedule_'::omp_set_schedule
!dec$ attributes alias:'omp_get_schedule_'::omp_get_schedule
!dec$ attributes alias:'omp_get_proc_bind_' :: omp_get_proc_bind
!dec$ attributes alias:'omp_get_wtime_'::omp_get_wtime
!dec$ attributes alias:'omp_get_wtick_'::omp_get_wtick
!dec$ attributes alias:'omp_get_default_device_'::omp_get_default_device
!dec$ attributes alias:'omp_set_default_device_'::omp_set_default_device
!dec$ attributes alias:'omp_get_num_devices_'::omp_get_num_devices
!dec$ attributes alias:'omp_get_num_teams_'::omp_get_num_teams
!dec$ attributes alias:'omp_get_team_num_'::omp_get_team_num
!dec$ attributes alias:'omp_get_cancellation_'::omp_get_cancellation
!dec$ attributes alias:'omp_is_initial_device_'::omp_is_initial_device
!dec$ attributes alias:'omp_init_lock_'::omp_init_lock
!dec$ attributes alias:'omp_destroy_lock_'::omp_destroy_lock
!dec$ attributes alias:'omp_set_lock_'::omp_set_lock
!dec$ attributes alias:'omp_unset_lock_'::omp_unset_lock
!dec$ attributes alias:'omp_test_lock_'::omp_test_lock
!dec$ attributes alias:'omp_init_nest_lock_'::omp_init_nest_lock
!dec$ attributes alias:'omp_destroy_nest_lock_'::omp_destroy_nest_lock
!dec$ attributes alias:'omp_set_nest_lock_'::omp_set_nest_lock
!dec$ attributes alias:'omp_unset_nest_lock_'::omp_unset_nest_lock
!dec$ attributes alias:'omp_test_nest_lock_'::omp_test_nest_lock
!dec$ attributes alias:'kmp_set_stacksize_'::kmp_set_stacksize
!dec$ attributes alias:'kmp_set_stacksize_s_'::kmp_set_stacksize_s
!dec$ attributes alias:'kmp_set_blocktime_'::kmp_set_blocktime
!dec$ attributes alias:'kmp_set_library_serial_'::kmp_set_library_serial
!dec$ attributes alias:'kmp_set_library_turnaround_'::kmp_set_library_turnaround
!dec$ attributes alias:'kmp_set_library_throughput_'::kmp_set_library_throughput
!dec$ attributes alias:'kmp_set_library_'::kmp_set_library
!dec$ attributes alias:'kmp_get_stacksize_'::kmp_get_stacksize
!dec$ attributes alias:'kmp_get_stacksize_s_'::kmp_get_stacksize_s
!dec$ attributes alias:'kmp_get_blocktime_'::kmp_get_blocktime
!dec$ attributes alias:'kmp_get_library_'::kmp_get_library
!dec$ attributes alias:'kmp_set_affinity_'::kmp_set_affinity
!dec$ attributes alias:'kmp_get_affinity_'::kmp_get_affinity
!dec$ attributes alias:'kmp_get_affinity_max_proc_'::kmp_get_affinity_max_proc
!dec$ attributes alias:'kmp_create_affinity_mask_'::kmp_create_affinity_mask
!dec$ attributes alias:'kmp_destroy_affinity_mask_'::kmp_destroy_affinity_mask
!dec$ attributes alias:'kmp_set_affinity_mask_proc_'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'kmp_unset_affinity_mask_proc_'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'kmp_get_affinity_mask_proc_'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'kmp_malloc_'::kmp_malloc
!dec$ attributes alias:'kmp_aligned_malloc_'::kmp_aligned_malloc
!dec$ attributes alias:'kmp_calloc_'::kmp_calloc
!dec$ attributes alias:'kmp_realloc_'::kmp_realloc
!dec$ attributes alias:'kmp_free_'::kmp_free
!dec$ attributes alias:'kmp_set_warnings_on_'::kmp_set_warnings_on
!dec$ attributes alias:'kmp_set_warnings_off_'::kmp_set_warnings_off
!dec$ attributes alias:'kmp_get_cancellation_status_'::kmp_get_cancellation_status
!dec$ endif
!dec$ if defined(__APPLE__)
!***
!*** The Mac entry points are in lowercase, with both an underscore
!*** appended and an underscore prepended.
!***
!dec$ attributes alias:'_omp_set_num_threads_'::omp_set_num_threads
!dec$ attributes alias:'_omp_set_dynamic_'::omp_set_dynamic
!dec$ attributes alias:'_omp_set_nested_'::omp_set_nested
!dec$ attributes alias:'_omp_get_num_threads_'::omp_get_num_threads
!dec$ attributes alias:'_omp_get_max_threads_'::omp_get_max_threads
!dec$ attributes alias:'_omp_get_thread_num_'::omp_get_thread_num
!dec$ attributes alias:'_omp_get_num_procs_'::omp_get_num_procs
!dec$ attributes alias:'_omp_in_parallel_'::omp_in_parallel
!dec$ attributes alias:'_omp_get_dynamic_'::omp_get_dynamic
!dec$ attributes alias:'_omp_get_nested_'::omp_get_nested
!dec$ attributes alias:'_omp_get_thread_limit_'::omp_get_thread_limit
!dec$ attributes alias:'_omp_set_max_active_levels_'::omp_set_max_active_levels
!dec$ attributes alias:'_omp_get_max_active_levels_'::omp_get_max_active_levels
!dec$ attributes alias:'_omp_get_level_'::omp_get_level
!dec$ attributes alias:'_omp_get_active_level_'::omp_get_active_level
!dec$ attributes alias:'_omp_get_ancestor_thread_num_'::omp_get_ancestor_thread_num
!dec$ attributes alias:'_omp_get_team_size_'::omp_get_team_size
!dec$ attributes alias:'_omp_set_schedule_'::omp_set_schedule
!dec$ attributes alias:'_omp_get_schedule_'::omp_get_schedule
!dec$ attributes alias:'_omp_get_proc_bind_' :: omp_get_proc_bind
!dec$ attributes alias:'_omp_get_wtime_'::omp_get_wtime
!dec$ attributes alias:'_omp_get_wtick_'::omp_get_wtick
!dec$ attributes alias:'_omp_get_num_teams_'::omp_get_num_teams
!dec$ attributes alias:'_omp_get_team_num_'::omp_get_team_num
!dec$ attributes alias:'_omp_get_cancellation_'::omp_get_cancellation
!dec$ attributes alias:'_omp_is_initial_device_'::omp_is_initial_device
!dec$ attributes alias:'_omp_init_lock_'::omp_init_lock
!dec$ attributes alias:'_omp_destroy_lock_'::omp_destroy_lock
!dec$ attributes alias:'_omp_set_lock_'::omp_set_lock
!dec$ attributes alias:'_omp_unset_lock_'::omp_unset_lock
!dec$ attributes alias:'_omp_test_lock_'::omp_test_lock
!dec$ attributes alias:'_omp_init_nest_lock_'::omp_init_nest_lock
!dec$ attributes alias:'_omp_destroy_nest_lock_'::omp_destroy_nest_lock
!dec$ attributes alias:'_omp_set_nest_lock_'::omp_set_nest_lock
!dec$ attributes alias:'_omp_unset_nest_lock_'::omp_unset_nest_lock
!dec$ attributes alias:'_omp_test_nest_lock_'::omp_test_nest_lock
!dec$ attributes alias:'_kmp_set_stacksize_'::kmp_set_stacksize
!dec$ attributes alias:'_kmp_set_stacksize_s_'::kmp_set_stacksize_s
!dec$ attributes alias:'_kmp_set_blocktime_'::kmp_set_blocktime
!dec$ attributes alias:'_kmp_set_library_serial_'::kmp_set_library_serial
!dec$ attributes alias:'_kmp_set_library_turnaround_'::kmp_set_library_turnaround
!dec$ attributes alias:'_kmp_set_library_throughput_'::kmp_set_library_throughput
!dec$ attributes alias:'_kmp_set_library_'::kmp_set_library
!dec$ attributes alias:'_kmp_get_stacksize_'::kmp_get_stacksize
!dec$ attributes alias:'_kmp_get_stacksize_s_'::kmp_get_stacksize_s
!dec$ attributes alias:'_kmp_get_blocktime_'::kmp_get_blocktime
!dec$ attributes alias:'_kmp_get_library_'::kmp_get_library
!dec$ attributes alias:'_kmp_set_affinity_'::kmp_set_affinity
!dec$ attributes alias:'_kmp_get_affinity_'::kmp_get_affinity
!dec$ attributes alias:'_kmp_get_affinity_max_proc_'::kmp_get_affinity_max_proc
!dec$ attributes alias:'_kmp_create_affinity_mask_'::kmp_create_affinity_mask
!dec$ attributes alias:'_kmp_destroy_affinity_mask_'::kmp_destroy_affinity_mask
!dec$ attributes alias:'_kmp_set_affinity_mask_proc_'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'_kmp_unset_affinity_mask_proc_'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'_kmp_get_affinity_mask_proc_'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'_kmp_malloc_'::kmp_malloc
!dec$ attributes alias:'_kmp_aligned_malloc_'::kmp_aligned_malloc
!dec$ attributes alias:'_kmp_calloc_'::kmp_calloc
!dec$ attributes alias:'_kmp_realloc_'::kmp_realloc
!dec$ attributes alias:'_kmp_free_'::kmp_free
!dec$ attributes alias:'_kmp_set_warnings_on_'::kmp_set_warnings_on
!dec$ attributes alias:'_kmp_set_warnings_off_'::kmp_set_warnings_off
!dec$ attributes alias:'_kmp_get_cancellation_status_'::kmp_get_cancellation_status
!dec$ endif
end module omp_lib


@@ -0,0 +1,455 @@
! include/40/omp_lib.f90.var
!
!//===----------------------------------------------------------------------===//
!//
!// The LLVM Compiler Infrastructure
!//
!// This file is dual licensed under the MIT and the University of Illinois Open
!// Source Licenses. See LICENSE.txt for details.
!//
!//===----------------------------------------------------------------------===//
!
module omp_lib_kinds
use, intrinsic :: iso_c_binding
integer, parameter :: omp_integer_kind = c_int
integer, parameter :: omp_logical_kind = 4
integer, parameter :: omp_real_kind = c_float
integer, parameter :: kmp_double_kind = c_double
integer, parameter :: omp_lock_kind = c_intptr_t
integer, parameter :: omp_nest_lock_kind = c_intptr_t
integer, parameter :: omp_sched_kind = omp_integer_kind
integer, parameter :: omp_proc_bind_kind = omp_integer_kind
integer, parameter :: kmp_pointer_kind = c_intptr_t
integer, parameter :: kmp_size_t_kind = c_size_t
integer, parameter :: kmp_affinity_mask_kind = c_intptr_t
integer, parameter :: kmp_cancel_kind = omp_integer_kind
end module omp_lib_kinds
module omp_lib
use omp_lib_kinds
integer (kind=omp_integer_kind), parameter :: openmp_version = @LIBOMP_OMP_YEAR_MONTH@
integer (kind=omp_integer_kind), parameter :: kmp_version_major = @LIBOMP_VERSION_MAJOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_minor = @LIBOMP_VERSION_MINOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_build = @LIBOMP_VERSION_BUILD@
character(*) kmp_build_date
parameter( kmp_build_date = '@LIBOMP_BUILD_DATE@' )
integer(kind=omp_sched_kind), parameter :: omp_sched_static = 1
integer(kind=omp_sched_kind), parameter :: omp_sched_dynamic = 2
integer(kind=omp_sched_kind), parameter :: omp_sched_guided = 3
integer(kind=omp_sched_kind), parameter :: omp_sched_auto = 4
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_false = 0
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_true = 1
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_master = 2
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_close = 3
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_spread = 4
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_parallel = 1
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_loop = 2
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_sections = 3
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_taskgroup = 4
interface
! ***
! *** omp_* entry points
! ***
subroutine omp_set_num_threads(num_threads) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: num_threads
end subroutine omp_set_num_threads
subroutine omp_set_dynamic(dynamic_threads) bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind), value :: dynamic_threads
end subroutine omp_set_dynamic
subroutine omp_set_nested(nested) bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind), value :: nested
end subroutine omp_set_nested
function omp_get_num_threads() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_threads
end function omp_get_num_threads
function omp_get_max_threads() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_threads
end function omp_get_max_threads
function omp_get_thread_num() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_num
end function omp_get_thread_num
function omp_get_num_procs() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_procs
end function omp_get_num_procs
function omp_in_parallel() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_parallel
end function omp_in_parallel
function omp_in_final() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_final
end function omp_in_final
function omp_get_dynamic() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_dynamic
end function omp_get_dynamic
function omp_get_nested() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_nested
end function omp_get_nested
function omp_get_thread_limit() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_limit
end function omp_get_thread_limit
subroutine omp_set_max_active_levels(max_levels) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: max_levels
end subroutine omp_set_max_active_levels
function omp_get_max_active_levels() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_active_levels
end function omp_get_max_active_levels
function omp_get_level() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_level
end function omp_get_level
function omp_get_active_level() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_active_level
end function omp_get_active_level
function omp_get_ancestor_thread_num(level) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_ancestor_thread_num
integer (kind=omp_integer_kind), value :: level
end function omp_get_ancestor_thread_num
function omp_get_team_size(level) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_team_size
integer (kind=omp_integer_kind), value :: level
end function omp_get_team_size
subroutine omp_set_schedule(kind, chunk_size) bind(c)
use omp_lib_kinds
integer (kind=omp_sched_kind), value :: kind
integer (kind=omp_integer_kind), value :: chunk_size
end subroutine omp_set_schedule
subroutine omp_get_schedule(kind, chunk_size) bind(c)
use omp_lib_kinds
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) chunk_size
end subroutine omp_get_schedule
function omp_get_proc_bind() bind(c)
use omp_lib_kinds
integer (kind=omp_proc_bind_kind) omp_get_proc_bind
end function omp_get_proc_bind
function omp_get_wtime() bind(c)
use omp_lib_kinds
real (kind=kmp_double_kind) omp_get_wtime
end function omp_get_wtime
function omp_get_wtick() bind(c)
use omp_lib_kinds
real (kind=kmp_double_kind) omp_get_wtick
end function omp_get_wtick
function omp_get_default_device() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_default_device
end function omp_get_default_device
subroutine omp_set_default_device(device_num) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: device_num
end subroutine omp_set_default_device
function omp_get_num_devices() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_devices
end function omp_get_num_devices
function omp_get_num_teams() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_teams
end function omp_get_num_teams
function omp_get_team_num() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_team_num
end function omp_get_team_num
function omp_get_cancellation() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_cancellation
end function omp_get_cancellation
function omp_is_initial_device() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_is_initial_device
end function omp_is_initial_device
subroutine omp_init_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_init_lock
subroutine omp_destroy_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_destroy_lock
subroutine omp_set_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_set_lock
subroutine omp_unset_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_unset_lock
function omp_test_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_lock
!DIR$ ENDIF
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_test_lock
integer (kind=omp_lock_kind) svar
end function omp_test_lock
subroutine omp_init_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_init_nest_lock
subroutine omp_destroy_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_destroy_nest_lock
subroutine omp_set_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_set_nest_lock
subroutine omp_unset_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_unset_nest_lock
function omp_test_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_test_nest_lock
integer (kind=omp_nest_lock_kind) nvar
end function omp_test_nest_lock
! ***
! *** kmp_* entry points
! ***
subroutine kmp_set_stacksize(size) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: size
end subroutine kmp_set_stacksize
subroutine kmp_set_stacksize_s(size) bind(c)
use omp_lib_kinds
integer (kind=kmp_size_t_kind), value :: size
end subroutine kmp_set_stacksize_s
subroutine kmp_set_blocktime(msec) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: msec
end subroutine kmp_set_blocktime
subroutine kmp_set_library_serial() bind(c)
end subroutine kmp_set_library_serial
subroutine kmp_set_library_turnaround() bind(c)
end subroutine kmp_set_library_turnaround
subroutine kmp_set_library_throughput() bind(c)
end subroutine kmp_set_library_throughput
subroutine kmp_set_library(libnum) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: libnum
end subroutine kmp_set_library
subroutine kmp_set_defaults(string) bind(c)
use, intrinsic :: iso_c_binding
character (kind=c_char) :: string(*)
end subroutine kmp_set_defaults
function kmp_get_stacksize() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_stacksize
end function kmp_get_stacksize
function kmp_get_stacksize_s() bind(c)
use omp_lib_kinds
integer (kind=kmp_size_t_kind) kmp_get_stacksize_s
end function kmp_get_stacksize_s
function kmp_get_blocktime() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_blocktime
end function kmp_get_blocktime
function kmp_get_library() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_library
end function kmp_get_library
function kmp_set_affinity(mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity
function kmp_get_affinity(mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity
function kmp_get_affinity_max_proc() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_max_proc
end function kmp_get_affinity_max_proc
subroutine kmp_create_affinity_mask(mask) bind(c)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_create_affinity_mask
subroutine kmp_destroy_affinity_mask(mask) bind(c)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_destroy_affinity_mask
function kmp_set_affinity_mask_proc(proc, mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity_mask_proc
function kmp_unset_affinity_mask_proc(proc, mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_unset_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_unset_affinity_mask_proc
function kmp_get_affinity_mask_proc(proc, mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity_mask_proc
function kmp_malloc(size) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_malloc
integer (kind=kmp_size_t_kind), value :: size
end function kmp_malloc
function kmp_aligned_malloc(size, alignment) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_aligned_malloc
integer (kind=kmp_size_t_kind), value :: size
integer (kind=kmp_size_t_kind), value :: alignment
end function kmp_aligned_malloc
function kmp_calloc(nelem, elsize) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_calloc
integer (kind=kmp_size_t_kind), value :: nelem
integer (kind=kmp_size_t_kind), value :: elsize
end function kmp_calloc
function kmp_realloc(ptr, size) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_realloc
integer (kind=kmp_pointer_kind), value :: ptr
integer (kind=kmp_size_t_kind), value :: size
end function kmp_realloc
subroutine kmp_free(ptr) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind), value :: ptr
end subroutine kmp_free
subroutine kmp_set_warnings_on() bind(c)
end subroutine kmp_set_warnings_on
subroutine kmp_set_warnings_off() bind(c)
end subroutine kmp_set_warnings_off
function kmp_get_cancellation_status(cancelkind) bind(c)
use omp_lib_kinds
integer (kind=kmp_cancel_kind), value :: cancelkind
logical (kind=omp_logical_kind) kmp_get_cancellation_status
end function kmp_get_cancellation_status
end interface
end module omp_lib
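
As a brief illustration (not part of the vendor import), a minimal Fortran program exercising the `omp_lib` module defined above might look like the sketch below; it assumes an OpenMP-enabled compiler such as `gfortran -fopenmp demo.f90` and is not taken from the upstream sources:

```fortran
! Illustrative sketch only: uses omp_lib entry points declared above.
! Assumes compilation with OpenMP enabled (e.g. gfortran -fopenmp).
program demo
  use omp_lib
  integer (kind=omp_integer_kind) :: tid
  call omp_set_num_threads(4)
!$omp parallel private(tid)
  tid = omp_get_thread_num()
  print *, 'hello from thread', tid, 'of', omp_get_num_threads()
!$omp end parallel
end program demo
```

Note that every interface in the module is declared `bind(c)`, so these calls resolve directly to the C entry points exported by the runtime.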


@@ -0,0 +1,567 @@
! include/40/omp_lib.h.var
!
!//===----------------------------------------------------------------------===//
!//
!// The LLVM Compiler Infrastructure
!//
!// This file is dual licensed under the MIT and the University of Illinois Open
!// Source Licenses. See LICENSE.txt for details.
!//
!//===----------------------------------------------------------------------===//
!
!***
!*** Some of the directives for the following routine extend past column 72,
!*** so process this file in 132-column mode.
!***
!DIR$ fixedformlinesize:132
integer, parameter :: omp_integer_kind = 4
integer, parameter :: omp_logical_kind = 4
integer, parameter :: omp_real_kind = 4
integer, parameter :: omp_lock_kind = int_ptr_kind()
integer, parameter :: omp_nest_lock_kind = int_ptr_kind()
integer, parameter :: omp_sched_kind = omp_integer_kind
integer, parameter :: omp_proc_bind_kind = omp_integer_kind
integer, parameter :: kmp_pointer_kind = int_ptr_kind()
integer, parameter :: kmp_size_t_kind = int_ptr_kind()
integer, parameter :: kmp_affinity_mask_kind = int_ptr_kind()
integer (kind=omp_integer_kind), parameter :: openmp_version = @LIBOMP_OMP_YEAR_MONTH@
integer (kind=omp_integer_kind), parameter :: kmp_version_major = @LIBOMP_VERSION_MAJOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_minor = @LIBOMP_VERSION_MINOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_build = @LIBOMP_VERSION_BUILD@
character(*) kmp_build_date
parameter( kmp_build_date = '@LIBOMP_BUILD_DATE@' )
integer(kind=omp_sched_kind), parameter :: omp_sched_static = 1
integer(kind=omp_sched_kind), parameter :: omp_sched_dynamic = 2
integer(kind=omp_sched_kind), parameter :: omp_sched_guided = 3
integer(kind=omp_sched_kind), parameter :: omp_sched_auto = 4
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_false = 0
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_true = 1
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_master = 2
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_close = 3
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_spread = 4
interface
! ***
! *** omp_* entry points
! ***
subroutine omp_set_num_threads(num_threads) bind(c)
import
integer (kind=omp_integer_kind), value :: num_threads
end subroutine omp_set_num_threads
subroutine omp_set_dynamic(dynamic_threads) bind(c)
import
logical (kind=omp_logical_kind), value :: dynamic_threads
end subroutine omp_set_dynamic
subroutine omp_set_nested(nested) bind(c)
import
logical (kind=omp_logical_kind), value :: nested
end subroutine omp_set_nested
function omp_get_num_threads() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_threads
end function omp_get_num_threads
function omp_get_max_threads() bind(c)
import
integer (kind=omp_integer_kind) omp_get_max_threads
end function omp_get_max_threads
function omp_get_thread_num() bind(c)
import
integer (kind=omp_integer_kind) omp_get_thread_num
end function omp_get_thread_num
function omp_get_num_procs() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_procs
end function omp_get_num_procs
function omp_in_parallel() bind(c)
import
logical (kind=omp_logical_kind) omp_in_parallel
end function omp_in_parallel
function omp_in_final() bind(c)
import
logical (kind=omp_logical_kind) omp_in_final
end function omp_in_final
function omp_get_dynamic() bind(c)
import
logical (kind=omp_logical_kind) omp_get_dynamic
end function omp_get_dynamic
function omp_get_nested() bind(c)
import
logical (kind=omp_logical_kind) omp_get_nested
end function omp_get_nested
function omp_get_thread_limit() bind(c)
import
integer (kind=omp_integer_kind) omp_get_thread_limit
end function omp_get_thread_limit
subroutine omp_set_max_active_levels(max_levels) bind(c)
import
integer (kind=omp_integer_kind), value :: max_levels
end subroutine omp_set_max_active_levels
function omp_get_max_active_levels() bind(c)
import
integer (kind=omp_integer_kind) omp_get_max_active_levels
end function omp_get_max_active_levels
function omp_get_level() bind(c)
import
integer (kind=omp_integer_kind) omp_get_level
end function omp_get_level
function omp_get_active_level() bind(c)
import
integer (kind=omp_integer_kind) omp_get_active_level
end function omp_get_active_level
function omp_get_ancestor_thread_num(level) bind(c)
import
integer (kind=omp_integer_kind) omp_get_ancestor_thread_num
integer (kind=omp_integer_kind), value :: level
end function omp_get_ancestor_thread_num
function omp_get_team_size(level) bind(c)
import
integer (kind=omp_integer_kind) omp_get_team_size
integer (kind=omp_integer_kind), value :: level
end function omp_get_team_size
subroutine omp_set_schedule(kind, chunk_size) bind(c)
import
integer (kind=omp_sched_kind), value :: kind
integer (kind=omp_integer_kind), value :: chunk_size
end subroutine omp_set_schedule
subroutine omp_get_schedule(kind, chunk_size) bind(c)
import
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) chunk_size
end subroutine omp_get_schedule
function omp_get_proc_bind() bind(c)
import
integer (kind=omp_proc_bind_kind) omp_get_proc_bind
end function omp_get_proc_bind
function omp_get_wtime() bind(c)
double precision omp_get_wtime
end function omp_get_wtime
function omp_get_wtick() bind(c)
double precision omp_get_wtick
end function omp_get_wtick
function omp_get_default_device() bind(c)
import
integer (kind=omp_integer_kind) omp_get_default_device
end function omp_get_default_device
subroutine omp_set_default_device(device_num) bind(c)
import
integer (kind=omp_integer_kind), value :: device_num
end subroutine omp_set_default_device
function omp_get_num_devices() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_devices
end function omp_get_num_devices
function omp_get_num_teams() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_teams
end function omp_get_num_teams
function omp_get_team_num() bind(c)
import
integer (kind=omp_integer_kind) omp_get_team_num
end function omp_get_team_num
function omp_is_initial_device() bind(c)
import
logical (kind=omp_logical_kind) omp_is_initial_device
end function omp_is_initial_device
subroutine omp_init_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_lock
!DIR$ ENDIF
import
integer (kind=omp_lock_kind) svar
end subroutine omp_init_lock
subroutine omp_destroy_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_lock
!DIR$ ENDIF
import
integer (kind=omp_lock_kind) svar
end subroutine omp_destroy_lock
subroutine omp_set_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_lock
!DIR$ ENDIF
import
integer (kind=omp_lock_kind) svar
end subroutine omp_set_lock
subroutine omp_unset_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_lock
!DIR$ ENDIF
import
integer (kind=omp_lock_kind) svar
end subroutine omp_unset_lock
function omp_test_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_lock
!DIR$ ENDIF
import
logical (kind=omp_logical_kind) omp_test_lock
integer (kind=omp_lock_kind) svar
end function omp_test_lock
subroutine omp_init_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_init_nest_lock
subroutine omp_destroy_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_destroy_nest_lock
subroutine omp_set_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_set_nest_lock
subroutine omp_unset_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_unset_nest_lock
function omp_test_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_integer_kind) omp_test_nest_lock
integer (kind=omp_nest_lock_kind) nvar
end function omp_test_nest_lock
! ***
! *** kmp_* entry points
! ***
subroutine kmp_set_stacksize(size) bind(c)
import
integer (kind=omp_integer_kind), value :: size
end subroutine kmp_set_stacksize
subroutine kmp_set_stacksize_s(size) bind(c)
import
integer (kind=kmp_size_t_kind), value :: size
end subroutine kmp_set_stacksize_s
subroutine kmp_set_blocktime(msec) bind(c)
import
integer (kind=omp_integer_kind), value :: msec
end subroutine kmp_set_blocktime
subroutine kmp_set_library_serial() bind(c)
end subroutine kmp_set_library_serial
subroutine kmp_set_library_turnaround() bind(c)
end subroutine kmp_set_library_turnaround
subroutine kmp_set_library_throughput() bind(c)
end subroutine kmp_set_library_throughput
subroutine kmp_set_library(libnum) bind(c)
import
integer (kind=omp_integer_kind), value :: libnum
end subroutine kmp_set_library
subroutine kmp_set_defaults(string) bind(c)
character string(*)
end subroutine kmp_set_defaults
function kmp_get_stacksize() bind(c)
import
integer (kind=omp_integer_kind) kmp_get_stacksize
end function kmp_get_stacksize
function kmp_get_stacksize_s() bind(c)
import
integer (kind=kmp_size_t_kind) kmp_get_stacksize_s
end function kmp_get_stacksize_s
function kmp_get_blocktime() bind(c)
import
integer (kind=omp_integer_kind) kmp_get_blocktime
end function kmp_get_blocktime
function kmp_get_library() bind(c)
import
integer (kind=omp_integer_kind) kmp_get_library
end function kmp_get_library
function kmp_set_affinity(mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_set_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity
function kmp_get_affinity(mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_get_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity
function kmp_get_affinity_max_proc() bind(c)
import
integer (kind=omp_integer_kind) kmp_get_affinity_max_proc
end function kmp_get_affinity_max_proc
subroutine kmp_create_affinity_mask(mask) bind(c)
import
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_create_affinity_mask
subroutine kmp_destroy_affinity_mask(mask) bind(c)
import
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_destroy_affinity_mask
function kmp_set_affinity_mask_proc(proc, mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_set_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity_mask_proc
function kmp_unset_affinity_mask_proc(proc, mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_unset_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_unset_affinity_mask_proc
function kmp_get_affinity_mask_proc(proc, mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_get_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity_mask_proc
function kmp_malloc(size) bind(c)
import
integer (kind=kmp_pointer_kind) kmp_malloc
integer (kind=kmp_size_t_kind), value :: size
end function kmp_malloc
function kmp_aligned_malloc(size, alignment) bind(c)
import
integer (kind=kmp_pointer_kind) kmp_aligned_malloc
integer (kind=kmp_size_t_kind), value :: size
integer (kind=kmp_size_t_kind), value :: alignment
end function kmp_aligned_malloc
function kmp_calloc(nelem, elsize) bind(c)
import
integer (kind=kmp_pointer_kind) kmp_calloc
integer (kind=kmp_size_t_kind), value :: nelem
integer (kind=kmp_size_t_kind), value :: elsize
end function kmp_calloc
function kmp_realloc(ptr, size) bind(c)
import
integer (kind=kmp_pointer_kind) kmp_realloc
integer (kind=kmp_pointer_kind), value :: ptr
integer (kind=kmp_size_t_kind), value :: size
end function kmp_realloc
subroutine kmp_free(ptr) bind(c)
import
integer (kind=kmp_pointer_kind), value :: ptr
end subroutine kmp_free
subroutine kmp_set_warnings_on() bind(c)
end subroutine kmp_set_warnings_on
subroutine kmp_set_warnings_off() bind(c)
end subroutine kmp_set_warnings_off
end interface
!DIR$ IF DEFINED (__INTEL_OFFLOAD)
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_num_threads
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_dynamic
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_nested
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_num_threads
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_max_threads
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_thread_num
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_num_procs
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_in_parallel
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_in_final
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_dynamic
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_nested
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_thread_limit
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_max_active_levels
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_max_active_levels
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_level
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_active_level
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_ancestor_thread_num
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_team_size
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_schedule
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_schedule
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_proc_bind
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_wtime
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_wtick
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_default_device
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_default_device
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_is_initial_device
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_num_devices
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_num_teams
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_team_num
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_init_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_destroy_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_unset_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_test_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_init_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_destroy_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_unset_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_test_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_stacksize
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_stacksize_s
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_blocktime
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_library_serial
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_library_turnaround
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_library_throughput
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_library
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_defaults
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_stacksize
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_stacksize_s
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_blocktime
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_library
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_affinity
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_affinity
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_affinity_max_proc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_create_affinity_mask
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_destroy_affinity_mask
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_affinity_mask_proc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_unset_affinity_mask_proc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_affinity_mask_proc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_malloc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_aligned_malloc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_calloc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_realloc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_free
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_warnings_on
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_warnings_off
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!$omp declare target(omp_set_num_threads )
!$omp declare target(omp_set_dynamic )
!$omp declare target(omp_set_nested )
!$omp declare target(omp_get_num_threads )
!$omp declare target(omp_get_max_threads )
!$omp declare target(omp_get_thread_num )
!$omp declare target(omp_get_num_procs )
!$omp declare target(omp_in_parallel )
!$omp declare target(omp_in_final )
!$omp declare target(omp_get_dynamic )
!$omp declare target(omp_get_nested )
!$omp declare target(omp_get_thread_limit )
!$omp declare target(omp_set_max_active_levels )
!$omp declare target(omp_get_max_active_levels )
!$omp declare target(omp_get_level )
!$omp declare target(omp_get_active_level )
!$omp declare target(omp_get_ancestor_thread_num )
!$omp declare target(omp_get_team_size )
!$omp declare target(omp_set_schedule )
!$omp declare target(omp_get_schedule )
!$omp declare target(omp_get_proc_bind )
!$omp declare target(omp_get_wtime )
!$omp declare target(omp_get_wtick )
!$omp declare target(omp_get_default_device )
!$omp declare target(omp_set_default_device )
!$omp declare target(omp_is_initial_device )
!$omp declare target(omp_get_num_devices )
!$omp declare target(omp_get_num_teams )
!$omp declare target(omp_get_team_num )
!$omp declare target(omp_init_lock )
!$omp declare target(omp_destroy_lock )
!$omp declare target(omp_set_lock )
!$omp declare target(omp_unset_lock )
!$omp declare target(omp_test_lock )
!$omp declare target(omp_init_nest_lock )
!$omp declare target(omp_destroy_nest_lock )
!$omp declare target(omp_set_nest_lock )
!$omp declare target(omp_unset_nest_lock )
!$omp declare target(omp_test_nest_lock )
!$omp declare target(kmp_set_stacksize )
!$omp declare target(kmp_set_stacksize_s )
!$omp declare target(kmp_set_blocktime )
!$omp declare target(kmp_set_library_serial )
!$omp declare target(kmp_set_library_turnaround )
!$omp declare target(kmp_set_library_throughput )
!$omp declare target(kmp_set_library )
!$omp declare target(kmp_set_defaults )
!$omp declare target(kmp_get_stacksize )
!$omp declare target(kmp_get_stacksize_s )
!$omp declare target(kmp_get_blocktime )
!$omp declare target(kmp_get_library )
!$omp declare target(kmp_set_affinity )
!$omp declare target(kmp_get_affinity )
!$omp declare target(kmp_get_affinity_max_proc )
!$omp declare target(kmp_create_affinity_mask )
!$omp declare target(kmp_destroy_affinity_mask )
!$omp declare target(kmp_set_affinity_mask_proc )
!$omp declare target(kmp_unset_affinity_mask_proc )
!$omp declare target(kmp_get_affinity_mask_proc )
!$omp declare target(kmp_malloc )
!$omp declare target(kmp_aligned_malloc )
!$omp declare target(kmp_calloc )
!$omp declare target(kmp_realloc )
!$omp declare target(kmp_free )
!$omp declare target(kmp_set_warnings_on )
!$omp declare target(kmp_set_warnings_off )
!DIR$ ENDIF
!DIR$ ENDIF


@@ -0,0 +1,197 @@
/*
* include/45/omp.h.var
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef __OMP_H
# define __OMP_H
# define KMP_VERSION_MAJOR @LIBOMP_VERSION_MAJOR@
# define KMP_VERSION_MINOR @LIBOMP_VERSION_MINOR@
# define KMP_VERSION_BUILD @LIBOMP_VERSION_BUILD@
# define KMP_BUILD_DATE "@LIBOMP_BUILD_DATE@"
# ifdef __cplusplus
extern "C" {
# endif
# if defined(_WIN32)
# define __KAI_KMPC_CONVENTION __cdecl
# else
# define __KAI_KMPC_CONVENTION
# endif
/* schedule kind constants */
typedef enum omp_sched_t {
omp_sched_static = 1,
omp_sched_dynamic = 2,
omp_sched_guided = 3,
omp_sched_auto = 4
} omp_sched_t;
/* set API functions */
extern void __KAI_KMPC_CONVENTION omp_set_num_threads (int);
extern void __KAI_KMPC_CONVENTION omp_set_dynamic (int);
extern void __KAI_KMPC_CONVENTION omp_set_nested (int);
extern void __KAI_KMPC_CONVENTION omp_set_max_active_levels (int);
extern void __KAI_KMPC_CONVENTION omp_set_schedule (omp_sched_t, int);
/* query API functions */
extern int __KAI_KMPC_CONVENTION omp_get_num_threads (void);
extern int __KAI_KMPC_CONVENTION omp_get_dynamic (void);
extern int __KAI_KMPC_CONVENTION omp_get_nested (void);
extern int __KAI_KMPC_CONVENTION omp_get_max_threads (void);
extern int __KAI_KMPC_CONVENTION omp_get_thread_num (void);
extern int __KAI_KMPC_CONVENTION omp_get_num_procs (void);
extern int __KAI_KMPC_CONVENTION omp_in_parallel (void);
extern int __KAI_KMPC_CONVENTION omp_in_final (void);
extern int __KAI_KMPC_CONVENTION omp_get_active_level (void);
extern int __KAI_KMPC_CONVENTION omp_get_level (void);
extern int __KAI_KMPC_CONVENTION omp_get_ancestor_thread_num (int);
extern int __KAI_KMPC_CONVENTION omp_get_team_size (int);
extern int __KAI_KMPC_CONVENTION omp_get_thread_limit (void);
extern int __KAI_KMPC_CONVENTION omp_get_max_active_levels (void);
extern void __KAI_KMPC_CONVENTION omp_get_schedule (omp_sched_t *, int *);
extern int __KAI_KMPC_CONVENTION omp_get_max_task_priority (void);
/* lock API functions */
typedef struct omp_lock_t {
void * _lk;
} omp_lock_t;
extern void __KAI_KMPC_CONVENTION omp_init_lock (omp_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_set_lock (omp_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_unset_lock (omp_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_destroy_lock (omp_lock_t *);
extern int __KAI_KMPC_CONVENTION omp_test_lock (omp_lock_t *);
/* nested lock API functions */
typedef struct omp_nest_lock_t {
void * _lk;
} omp_nest_lock_t;
extern void __KAI_KMPC_CONVENTION omp_init_nest_lock (omp_nest_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_set_nest_lock (omp_nest_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_unset_nest_lock (omp_nest_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_destroy_nest_lock (omp_nest_lock_t *);
extern int __KAI_KMPC_CONVENTION omp_test_nest_lock (omp_nest_lock_t *);
/* lock hint type for dynamic user lock */
typedef enum omp_lock_hint_t {
omp_lock_hint_none = 0,
omp_lock_hint_uncontended = 1,
omp_lock_hint_contended = (1<<1 ),
omp_lock_hint_nonspeculative = (1<<2 ),
omp_lock_hint_speculative = (1<<3 ),
kmp_lock_hint_hle = (1<<16),
kmp_lock_hint_rtm = (1<<17),
kmp_lock_hint_adaptive = (1<<18)
} omp_lock_hint_t;
/* hinted lock initializers */
extern void __KAI_KMPC_CONVENTION omp_init_lock_with_hint(omp_lock_t *, omp_lock_hint_t);
extern void __KAI_KMPC_CONVENTION omp_init_nest_lock_with_hint(omp_nest_lock_t *, omp_lock_hint_t);
/* time API functions */
extern double __KAI_KMPC_CONVENTION omp_get_wtime (void);
extern double __KAI_KMPC_CONVENTION omp_get_wtick (void);
/* OpenMP 4.0 */
extern int __KAI_KMPC_CONVENTION omp_get_default_device (void);
extern void __KAI_KMPC_CONVENTION omp_set_default_device (int);
extern int __KAI_KMPC_CONVENTION omp_is_initial_device (void);
extern int __KAI_KMPC_CONVENTION omp_get_num_devices (void);
extern int __KAI_KMPC_CONVENTION omp_get_num_teams (void);
extern int __KAI_KMPC_CONVENTION omp_get_team_num (void);
extern int __KAI_KMPC_CONVENTION omp_get_cancellation (void);
# include <stdlib.h>
/* OpenMP 4.5 */
extern int __KAI_KMPC_CONVENTION omp_get_initial_device (void);
extern void* __KAI_KMPC_CONVENTION omp_target_alloc(size_t, int);
extern void __KAI_KMPC_CONVENTION omp_target_free(void *, int);
extern int __KAI_KMPC_CONVENTION omp_target_is_present(void *, int);
extern int __KAI_KMPC_CONVENTION omp_target_memcpy(void *, void *, size_t, size_t, size_t, int, int);
extern int __KAI_KMPC_CONVENTION omp_target_memcpy_rect(void *, void *, size_t, int, const size_t *,
const size_t *, const size_t *, const size_t *, const size_t *, int, int);
extern int __KAI_KMPC_CONVENTION omp_target_associate_ptr(void *, void *, size_t, size_t, int);
extern int __KAI_KMPC_CONVENTION omp_target_disassociate_ptr(void *, int);
/* kmp API functions */
extern int __KAI_KMPC_CONVENTION kmp_get_stacksize (void);
extern void __KAI_KMPC_CONVENTION kmp_set_stacksize (int);
extern size_t __KAI_KMPC_CONVENTION kmp_get_stacksize_s (void);
extern void __KAI_KMPC_CONVENTION kmp_set_stacksize_s (size_t);
extern int __KAI_KMPC_CONVENTION kmp_get_blocktime (void);
extern int __KAI_KMPC_CONVENTION kmp_get_library (void);
extern void __KAI_KMPC_CONVENTION kmp_set_blocktime (int);
extern void __KAI_KMPC_CONVENTION kmp_set_library (int);
extern void __KAI_KMPC_CONVENTION kmp_set_library_serial (void);
extern void __KAI_KMPC_CONVENTION kmp_set_library_turnaround (void);
extern void __KAI_KMPC_CONVENTION kmp_set_library_throughput (void);
extern void __KAI_KMPC_CONVENTION kmp_set_defaults (char const *);
extern void __KAI_KMPC_CONVENTION kmp_set_disp_num_buffers (int);
/* Intel affinity API */
typedef void * kmp_affinity_mask_t;
extern int __KAI_KMPC_CONVENTION kmp_set_affinity (kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_get_affinity (kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_get_affinity_max_proc (void);
extern void __KAI_KMPC_CONVENTION kmp_create_affinity_mask (kmp_affinity_mask_t *);
extern void __KAI_KMPC_CONVENTION kmp_destroy_affinity_mask (kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_set_affinity_mask_proc (int, kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_unset_affinity_mask_proc (int, kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_get_affinity_mask_proc (int, kmp_affinity_mask_t *);
/* OpenMP 4.0 affinity API */
typedef enum omp_proc_bind_t {
omp_proc_bind_false = 0,
omp_proc_bind_true = 1,
omp_proc_bind_master = 2,
omp_proc_bind_close = 3,
omp_proc_bind_spread = 4
} omp_proc_bind_t;
extern omp_proc_bind_t __KAI_KMPC_CONVENTION omp_get_proc_bind (void);
/* OpenMP 4.5 affinity API */
extern int __KAI_KMPC_CONVENTION omp_get_num_places (void);
extern int __KAI_KMPC_CONVENTION omp_get_place_num_procs (int);
extern void __KAI_KMPC_CONVENTION omp_get_place_proc_ids (int, int *);
extern int __KAI_KMPC_CONVENTION omp_get_place_num (void);
extern int __KAI_KMPC_CONVENTION omp_get_partition_num_places (void);
extern void __KAI_KMPC_CONVENTION omp_get_partition_place_nums (int *);
extern void * __KAI_KMPC_CONVENTION kmp_malloc (size_t);
extern void * __KAI_KMPC_CONVENTION kmp_aligned_malloc (size_t, size_t);
extern void * __KAI_KMPC_CONVENTION kmp_calloc (size_t, size_t);
extern void * __KAI_KMPC_CONVENTION kmp_realloc (void *, size_t);
extern void __KAI_KMPC_CONVENTION kmp_free (void *);
extern void __KAI_KMPC_CONVENTION kmp_set_warnings_on(void);
extern void __KAI_KMPC_CONVENTION kmp_set_warnings_off(void);
# undef __KAI_KMPC_CONVENTION
/* Warning:
   The following typedefs are non-standard, deprecated, and will be removed in a future release.
*/
typedef int omp_int_t;
typedef double omp_wtime_t;
# ifdef __cplusplus
}
# endif
#endif /* __OMP_H */


@@ -0,0 +1,861 @@
! include/45/omp_lib.f.var
!
!//===----------------------------------------------------------------------===//
!//
!// The LLVM Compiler Infrastructure
!//
!// This file is dual licensed under the MIT and the University of Illinois Open
!// Source Licenses. See LICENSE.txt for details.
!//
!//===----------------------------------------------------------------------===//
!
!***
!*** Some of the directives for the following routine extend past column 72,
!*** so process this file in 132-column mode.
!***
!dec$ fixedformlinesize:132
module omp_lib_kinds
integer, parameter :: omp_integer_kind = 4
integer, parameter :: omp_logical_kind = 4
integer, parameter :: omp_real_kind = 4
integer, parameter :: omp_lock_kind = int_ptr_kind()
integer, parameter :: omp_nest_lock_kind = int_ptr_kind()
integer, parameter :: omp_sched_kind = omp_integer_kind
integer, parameter :: omp_proc_bind_kind = omp_integer_kind
integer, parameter :: kmp_pointer_kind = int_ptr_kind()
integer, parameter :: kmp_size_t_kind = int_ptr_kind()
integer, parameter :: kmp_affinity_mask_kind = int_ptr_kind()
integer, parameter :: kmp_cancel_kind = omp_integer_kind
integer, parameter :: omp_lock_hint_kind = omp_integer_kind
end module omp_lib_kinds
module omp_lib
use omp_lib_kinds
integer (kind=omp_integer_kind), parameter :: kmp_version_major = @LIBOMP_VERSION_MAJOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_minor = @LIBOMP_VERSION_MINOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_build = @LIBOMP_VERSION_BUILD@
character(*), parameter :: kmp_build_date = '@LIBOMP_BUILD_DATE@'
integer (kind=omp_integer_kind), parameter :: openmp_version = @LIBOMP_OMP_YEAR_MONTH@
integer(kind=omp_sched_kind), parameter :: omp_sched_static = 1
integer(kind=omp_sched_kind), parameter :: omp_sched_dynamic = 2
integer(kind=omp_sched_kind), parameter :: omp_sched_guided = 3
integer(kind=omp_sched_kind), parameter :: omp_sched_auto = 4
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_false = 0
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_true = 1
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_master = 2
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_close = 3
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_spread = 4
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_parallel = 1
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_loop = 2
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_sections = 3
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_taskgroup = 4
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_none = 0
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_uncontended = 1
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_contended = 2
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_nonspeculative = 4
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_speculative = 8
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_hle = 65536
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_rtm = 131072
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_adaptive = 262144
interface
! ***
! *** omp_* entry points
! ***
subroutine omp_set_num_threads(num_threads)
use omp_lib_kinds
integer (kind=omp_integer_kind) num_threads
end subroutine omp_set_num_threads
subroutine omp_set_dynamic(dynamic_threads)
use omp_lib_kinds
logical (kind=omp_logical_kind) dynamic_threads
end subroutine omp_set_dynamic
subroutine omp_set_nested(nested)
use omp_lib_kinds
logical (kind=omp_logical_kind) nested
end subroutine omp_set_nested
function omp_get_num_threads()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_threads
end function omp_get_num_threads
function omp_get_max_threads()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_threads
end function omp_get_max_threads
function omp_get_thread_num()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_num
end function omp_get_thread_num
function omp_get_num_procs()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_procs
end function omp_get_num_procs
function omp_in_parallel()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_parallel
end function omp_in_parallel
function omp_in_final()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_final
end function omp_in_final
function omp_get_dynamic()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_dynamic
end function omp_get_dynamic
function omp_get_nested()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_nested
end function omp_get_nested
function omp_get_thread_limit()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_limit
end function omp_get_thread_limit
subroutine omp_set_max_active_levels(max_levels)
use omp_lib_kinds
integer (kind=omp_integer_kind) max_levels
end subroutine omp_set_max_active_levels
function omp_get_max_active_levels()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_active_levels
end function omp_get_max_active_levels
function omp_get_level()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_level
end function omp_get_level
function omp_get_active_level()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_active_level
end function omp_get_active_level
function omp_get_ancestor_thread_num(level)
use omp_lib_kinds
integer (kind=omp_integer_kind) level
integer (kind=omp_integer_kind) omp_get_ancestor_thread_num
end function omp_get_ancestor_thread_num
function omp_get_team_size(level)
use omp_lib_kinds
integer (kind=omp_integer_kind) level
integer (kind=omp_integer_kind) omp_get_team_size
end function omp_get_team_size
subroutine omp_set_schedule(kind, chunk_size)
use omp_lib_kinds
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) chunk_size
end subroutine omp_set_schedule
subroutine omp_get_schedule(kind, chunk_size)
use omp_lib_kinds
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) chunk_size
end subroutine omp_get_schedule
function omp_get_proc_bind()
use omp_lib_kinds
integer (kind=omp_proc_bind_kind) omp_get_proc_bind
end function omp_get_proc_bind
function omp_get_num_places()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_places
end function omp_get_num_places
function omp_get_place_num_procs(place_num)
use omp_lib_kinds
integer (kind=omp_integer_kind) place_num
integer (kind=omp_integer_kind) omp_get_place_num_procs
end function omp_get_place_num_procs
subroutine omp_get_place_proc_ids(place_num, ids)
use omp_lib_kinds
integer (kind=omp_integer_kind) place_num
integer (kind=omp_integer_kind) ids(*)
end subroutine omp_get_place_proc_ids
function omp_get_place_num()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_place_num
end function omp_get_place_num
function omp_get_partition_num_places()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_partition_num_places
end function omp_get_partition_num_places
subroutine omp_get_partition_place_nums(place_nums)
use omp_lib_kinds
integer (kind=omp_integer_kind) place_nums(*)
end subroutine omp_get_partition_place_nums
function omp_get_wtime()
double precision omp_get_wtime
end function omp_get_wtime
function omp_get_wtick ()
double precision omp_get_wtick
end function omp_get_wtick
function omp_get_default_device()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_default_device
end function omp_get_default_device
subroutine omp_set_default_device(device_num)
use omp_lib_kinds
integer (kind=omp_integer_kind) device_num
end subroutine omp_set_default_device
function omp_get_num_devices()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_devices
end function omp_get_num_devices
function omp_get_num_teams()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_teams
end function omp_get_num_teams
function omp_get_team_num()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_team_num
end function omp_get_team_num
function omp_get_cancellation()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_cancellation
end function omp_get_cancellation
function omp_is_initial_device()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_is_initial_device
end function omp_is_initial_device
function omp_get_initial_device()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_initial_device
end function omp_get_initial_device
subroutine omp_init_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_init_lock
subroutine omp_destroy_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_destroy_lock
subroutine omp_set_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_set_lock
subroutine omp_unset_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_unset_lock
function omp_test_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_lock
!DIR$ ENDIF
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_test_lock
integer (kind=omp_lock_kind) svar
end function omp_test_lock
subroutine omp_init_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_init_nest_lock
subroutine omp_destroy_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_destroy_nest_lock
subroutine omp_set_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_set_nest_lock
subroutine omp_unset_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_unset_nest_lock
function omp_test_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_test_nest_lock
integer (kind=omp_nest_lock_kind) nvar
end function omp_test_nest_lock
function omp_get_max_task_priority()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_task_priority
end function omp_get_max_task_priority
! ***
! *** kmp_* entry points
! ***
subroutine kmp_set_stacksize(size)
use omp_lib_kinds
integer (kind=omp_integer_kind) size
end subroutine kmp_set_stacksize
subroutine kmp_set_stacksize_s(size)
use omp_lib_kinds
integer (kind=kmp_size_t_kind) size
end subroutine kmp_set_stacksize_s
subroutine kmp_set_blocktime(msec)
use omp_lib_kinds
integer (kind=omp_integer_kind) msec
end subroutine kmp_set_blocktime
subroutine kmp_set_library_serial()
end subroutine kmp_set_library_serial
subroutine kmp_set_library_turnaround()
end subroutine kmp_set_library_turnaround
subroutine kmp_set_library_throughput()
end subroutine kmp_set_library_throughput
subroutine kmp_set_library(libnum)
use omp_lib_kinds
integer (kind=omp_integer_kind) libnum
end subroutine kmp_set_library
subroutine kmp_set_defaults(string)
character*(*) string
end subroutine kmp_set_defaults
function kmp_get_stacksize()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_stacksize
end function kmp_get_stacksize
function kmp_get_stacksize_s()
use omp_lib_kinds
integer (kind=kmp_size_t_kind) kmp_get_stacksize_s
end function kmp_get_stacksize_s
function kmp_get_blocktime()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_blocktime
end function kmp_get_blocktime
function kmp_get_library()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_library
end function kmp_get_library
subroutine kmp_set_disp_num_buffers(num)
use omp_lib_kinds
integer (kind=omp_integer_kind) num
end subroutine kmp_set_disp_num_buffers
function kmp_set_affinity(mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity
function kmp_get_affinity(mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity
function kmp_get_affinity_max_proc()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_max_proc
end function kmp_get_affinity_max_proc
subroutine kmp_create_affinity_mask(mask)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_create_affinity_mask
subroutine kmp_destroy_affinity_mask(mask)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_destroy_affinity_mask
function kmp_set_affinity_mask_proc(proc, mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity_mask_proc
function kmp_unset_affinity_mask_proc(proc, mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_unset_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_unset_affinity_mask_proc
function kmp_get_affinity_mask_proc(proc, mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity_mask_proc
function kmp_malloc(size)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_malloc
integer (kind=kmp_size_t_kind) size
end function kmp_malloc
function kmp_aligned_malloc(size, alignment)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_aligned_malloc
integer (kind=kmp_size_t_kind) size
integer (kind=kmp_size_t_kind) alignment
end function kmp_aligned_malloc
function kmp_calloc(nelem, elsize)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_calloc
integer (kind=kmp_size_t_kind) nelem
integer (kind=kmp_size_t_kind) elsize
end function kmp_calloc
function kmp_realloc(ptr, size)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_realloc
integer (kind=kmp_pointer_kind) ptr
integer (kind=kmp_size_t_kind) size
end function kmp_realloc
subroutine kmp_free(ptr)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) ptr
end subroutine kmp_free
subroutine kmp_set_warnings_on()
end subroutine kmp_set_warnings_on
subroutine kmp_set_warnings_off()
end subroutine kmp_set_warnings_off
function kmp_get_cancellation_status(cancelkind)
use omp_lib_kinds
integer (kind=kmp_cancel_kind) cancelkind
logical (kind=omp_logical_kind) kmp_get_cancellation_status
end function kmp_get_cancellation_status
subroutine omp_init_lock_with_hint(svar, hint)
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
integer (kind=omp_lock_hint_kind) hint
end subroutine omp_init_lock_with_hint
subroutine omp_init_nest_lock_with_hint(nvar, hint)
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
integer (kind=omp_lock_hint_kind) hint
end subroutine omp_init_nest_lock_with_hint
end interface
!dec$ if defined(_WIN32)
!dec$ if defined(_WIN64) .or. defined(_M_AMD64)
!***
!*** The Fortran entry points must be in uppercase, even if the /Qlowercase
!*** option is specified. The alias attribute ensures that the specified
!*** string is used as the entry point.
!***
!*** On the Windows* OS IA-32 architecture, the Fortran entry points have an
!*** underscore prepended. On the Windows* OS Intel(R) 64
!*** architecture, no underscore is prepended.
!***
!dec$ attributes alias:'OMP_SET_NUM_THREADS' :: omp_set_num_threads
!dec$ attributes alias:'OMP_SET_DYNAMIC' :: omp_set_dynamic
!dec$ attributes alias:'OMP_SET_NESTED' :: omp_set_nested
!dec$ attributes alias:'OMP_GET_NUM_THREADS' :: omp_get_num_threads
!dec$ attributes alias:'OMP_GET_MAX_THREADS' :: omp_get_max_threads
!dec$ attributes alias:'OMP_GET_THREAD_NUM' :: omp_get_thread_num
!dec$ attributes alias:'OMP_GET_NUM_PROCS' :: omp_get_num_procs
!dec$ attributes alias:'OMP_IN_PARALLEL' :: omp_in_parallel
!dec$ attributes alias:'OMP_GET_DYNAMIC' :: omp_get_dynamic
!dec$ attributes alias:'OMP_GET_NESTED' :: omp_get_nested
!dec$ attributes alias:'OMP_GET_THREAD_LIMIT' :: omp_get_thread_limit
!dec$ attributes alias:'OMP_SET_MAX_ACTIVE_LEVELS' :: omp_set_max_active_levels
!dec$ attributes alias:'OMP_GET_MAX_ACTIVE_LEVELS' :: omp_get_max_active_levels
!dec$ attributes alias:'OMP_GET_LEVEL' :: omp_get_level
!dec$ attributes alias:'OMP_GET_ACTIVE_LEVEL' :: omp_get_active_level
!dec$ attributes alias:'OMP_GET_ANCESTOR_THREAD_NUM' :: omp_get_ancestor_thread_num
!dec$ attributes alias:'OMP_GET_TEAM_SIZE' :: omp_get_team_size
!dec$ attributes alias:'OMP_SET_SCHEDULE' :: omp_set_schedule
!dec$ attributes alias:'OMP_GET_SCHEDULE' :: omp_get_schedule
!dec$ attributes alias:'OMP_GET_PROC_BIND' :: omp_get_proc_bind
!dec$ attributes alias:'OMP_GET_WTIME' :: omp_get_wtime
!dec$ attributes alias:'OMP_GET_WTICK' :: omp_get_wtick
!dec$ attributes alias:'OMP_GET_DEFAULT_DEVICE' :: omp_get_default_device
!dec$ attributes alias:'OMP_SET_DEFAULT_DEVICE' :: omp_set_default_device
!dec$ attributes alias:'OMP_GET_NUM_DEVICES' :: omp_get_num_devices
!dec$ attributes alias:'OMP_GET_NUM_TEAMS' :: omp_get_num_teams
!dec$ attributes alias:'OMP_GET_TEAM_NUM' :: omp_get_team_num
!dec$ attributes alias:'OMP_GET_CANCELLATION' :: omp_get_cancellation
!dec$ attributes alias:'OMP_IS_INITIAL_DEVICE' :: omp_is_initial_device
!dec$ attributes alias:'OMP_GET_INITIAL_DEVICE' :: omp_get_initial_device
!dec$ attributes alias:'OMP_GET_MAX_TASK_PRIORITY' :: omp_get_max_task_priority
!dec$ attributes alias:'omp_init_lock' :: omp_init_lock
!dec$ attributes alias:'omp_init_lock_with_hint' :: omp_init_lock_with_hint
!dec$ attributes alias:'omp_destroy_lock' :: omp_destroy_lock
!dec$ attributes alias:'omp_set_lock' :: omp_set_lock
!dec$ attributes alias:'omp_unset_lock' :: omp_unset_lock
!dec$ attributes alias:'omp_test_lock' :: omp_test_lock
!dec$ attributes alias:'omp_init_nest_lock' :: omp_init_nest_lock
!dec$ attributes alias:'omp_init_nest_lock_with_hint' :: omp_init_nest_lock_with_hint
!dec$ attributes alias:'omp_destroy_nest_lock' :: omp_destroy_nest_lock
!dec$ attributes alias:'omp_set_nest_lock' :: omp_set_nest_lock
!dec$ attributes alias:'omp_unset_nest_lock' :: omp_unset_nest_lock
!dec$ attributes alias:'omp_test_nest_lock' :: omp_test_nest_lock
!dec$ attributes alias:'KMP_SET_STACKSIZE'::kmp_set_stacksize
!dec$ attributes alias:'KMP_SET_STACKSIZE_S'::kmp_set_stacksize_s
!dec$ attributes alias:'KMP_SET_BLOCKTIME'::kmp_set_blocktime
!dec$ attributes alias:'KMP_SET_LIBRARY_SERIAL'::kmp_set_library_serial
!dec$ attributes alias:'KMP_SET_LIBRARY_TURNAROUND'::kmp_set_library_turnaround
!dec$ attributes alias:'KMP_SET_LIBRARY_THROUGHPUT'::kmp_set_library_throughput
!dec$ attributes alias:'KMP_SET_LIBRARY'::kmp_set_library
!dec$ attributes alias:'KMP_GET_STACKSIZE'::kmp_get_stacksize
!dec$ attributes alias:'KMP_GET_STACKSIZE_S'::kmp_get_stacksize_s
!dec$ attributes alias:'KMP_GET_BLOCKTIME'::kmp_get_blocktime
!dec$ attributes alias:'KMP_GET_LIBRARY'::kmp_get_library
!dec$ attributes alias:'KMP_SET_AFFINITY'::kmp_set_affinity
!dec$ attributes alias:'KMP_GET_AFFINITY'::kmp_get_affinity
!dec$ attributes alias:'KMP_GET_AFFINITY_MAX_PROC'::kmp_get_affinity_max_proc
!dec$ attributes alias:'KMP_CREATE_AFFINITY_MASK'::kmp_create_affinity_mask
!dec$ attributes alias:'KMP_DESTROY_AFFINITY_MASK'::kmp_destroy_affinity_mask
!dec$ attributes alias:'KMP_SET_AFFINITY_MASK_PROC'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'KMP_UNSET_AFFINITY_MASK_PROC'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'KMP_GET_AFFINITY_MASK_PROC'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'KMP_MALLOC'::kmp_malloc
!dec$ attributes alias:'KMP_ALIGNED_MALLOC'::kmp_aligned_malloc
!dec$ attributes alias:'KMP_CALLOC'::kmp_calloc
!dec$ attributes alias:'KMP_REALLOC'::kmp_realloc
!dec$ attributes alias:'KMP_FREE'::kmp_free
!dec$ attributes alias:'KMP_SET_WARNINGS_ON'::kmp_set_warnings_on
!dec$ attributes alias:'KMP_SET_WARNINGS_OFF'::kmp_set_warnings_off
!dec$ attributes alias:'KMP_GET_CANCELLATION_STATUS' :: kmp_get_cancellation_status
!dec$ else
!***
!*** On Windows* OS IA-32 architecture, the Fortran entry points have an underscore prepended.
!***
!dec$ attributes alias:'_OMP_SET_NUM_THREADS' :: omp_set_num_threads
!dec$ attributes alias:'_OMP_SET_DYNAMIC' :: omp_set_dynamic
!dec$ attributes alias:'_OMP_SET_NESTED' :: omp_set_nested
!dec$ attributes alias:'_OMP_GET_NUM_THREADS' :: omp_get_num_threads
!dec$ attributes alias:'_OMP_GET_MAX_THREADS' :: omp_get_max_threads
!dec$ attributes alias:'_OMP_GET_THREAD_NUM' :: omp_get_thread_num
!dec$ attributes alias:'_OMP_GET_NUM_PROCS' :: omp_get_num_procs
!dec$ attributes alias:'_OMP_IN_PARALLEL' :: omp_in_parallel
!dec$ attributes alias:'_OMP_GET_DYNAMIC' :: omp_get_dynamic
!dec$ attributes alias:'_OMP_GET_NESTED' :: omp_get_nested
!dec$ attributes alias:'_OMP_GET_THREAD_LIMIT' :: omp_get_thread_limit
!dec$ attributes alias:'_OMP_SET_MAX_ACTIVE_LEVELS' :: omp_set_max_active_levels
!dec$ attributes alias:'_OMP_GET_MAX_ACTIVE_LEVELS' :: omp_get_max_active_levels
!dec$ attributes alias:'_OMP_GET_LEVEL' :: omp_get_level
!dec$ attributes alias:'_OMP_GET_ACTIVE_LEVEL' :: omp_get_active_level
!dec$ attributes alias:'_OMP_GET_ANCESTOR_THREAD_NUM' :: omp_get_ancestor_thread_num
!dec$ attributes alias:'_OMP_GET_TEAM_SIZE' :: omp_get_team_size
!dec$ attributes alias:'_OMP_SET_SCHEDULE' :: omp_set_schedule
!dec$ attributes alias:'_OMP_GET_SCHEDULE' :: omp_get_schedule
!dec$ attributes alias:'_OMP_GET_PROC_BIND' :: omp_get_proc_bind
!dec$ attributes alias:'_OMP_GET_WTIME' :: omp_get_wtime
!dec$ attributes alias:'_OMP_GET_WTICK' :: omp_get_wtick
!dec$ attributes alias:'_OMP_GET_DEFAULT_DEVICE' :: omp_get_default_device
!dec$ attributes alias:'_OMP_SET_DEFAULT_DEVICE' :: omp_set_default_device
!dec$ attributes alias:'_OMP_GET_NUM_DEVICES' :: omp_get_num_devices
!dec$ attributes alias:'_OMP_GET_NUM_TEAMS' :: omp_get_num_teams
!dec$ attributes alias:'_OMP_GET_TEAM_NUM' :: omp_get_team_num
!dec$ attributes alias:'_OMP_GET_CANCELLATION' :: omp_get_cancellation
!dec$ attributes alias:'_OMP_IS_INITIAL_DEVICE' :: omp_is_initial_device
!dec$ attributes alias:'_OMP_GET_INITIAL_DEVICE' :: omp_get_initial_device
!dec$ attributes alias:'_OMP_GET_MAX_TASK_PRIORITY' :: omp_get_max_task_priority
!dec$ attributes alias:'_omp_init_lock' :: omp_init_lock
!dec$ attributes alias:'_omp_init_lock_with_hint' :: omp_init_lock_with_hint
!dec$ attributes alias:'_omp_destroy_lock' :: omp_destroy_lock
!dec$ attributes alias:'_omp_set_lock' :: omp_set_lock
!dec$ attributes alias:'_omp_unset_lock' :: omp_unset_lock
!dec$ attributes alias:'_omp_test_lock' :: omp_test_lock
!dec$ attributes alias:'_omp_init_nest_lock' :: omp_init_nest_lock
!dec$ attributes alias:'_omp_init_nest_lock_with_hint' :: omp_init_nest_lock_with_hint
!dec$ attributes alias:'_omp_destroy_nest_lock' :: omp_destroy_nest_lock
!dec$ attributes alias:'_omp_set_nest_lock' :: omp_set_nest_lock
!dec$ attributes alias:'_omp_unset_nest_lock' :: omp_unset_nest_lock
!dec$ attributes alias:'_omp_test_nest_lock' :: omp_test_nest_lock
!dec$ attributes alias:'_KMP_SET_STACKSIZE'::kmp_set_stacksize
!dec$ attributes alias:'_KMP_SET_STACKSIZE_S'::kmp_set_stacksize_s
!dec$ attributes alias:'_KMP_SET_BLOCKTIME'::kmp_set_blocktime
!dec$ attributes alias:'_KMP_SET_LIBRARY_SERIAL'::kmp_set_library_serial
!dec$ attributes alias:'_KMP_SET_LIBRARY_TURNAROUND'::kmp_set_library_turnaround
!dec$ attributes alias:'_KMP_SET_LIBRARY_THROUGHPUT'::kmp_set_library_throughput
!dec$ attributes alias:'_KMP_SET_LIBRARY'::kmp_set_library
!dec$ attributes alias:'_KMP_GET_STACKSIZE'::kmp_get_stacksize
!dec$ attributes alias:'_KMP_GET_STACKSIZE_S'::kmp_get_stacksize_s
!dec$ attributes alias:'_KMP_GET_BLOCKTIME'::kmp_get_blocktime
!dec$ attributes alias:'_KMP_GET_LIBRARY'::kmp_get_library
!dec$ attributes alias:'_KMP_SET_AFFINITY'::kmp_set_affinity
!dec$ attributes alias:'_KMP_GET_AFFINITY'::kmp_get_affinity
!dec$ attributes alias:'_KMP_GET_AFFINITY_MAX_PROC'::kmp_get_affinity_max_proc
!dec$ attributes alias:'_KMP_CREATE_AFFINITY_MASK'::kmp_create_affinity_mask
!dec$ attributes alias:'_KMP_DESTROY_AFFINITY_MASK'::kmp_destroy_affinity_mask
!dec$ attributes alias:'_KMP_SET_AFFINITY_MASK_PROC'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'_KMP_UNSET_AFFINITY_MASK_PROC'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'_KMP_GET_AFFINITY_MASK_PROC'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'_KMP_MALLOC'::kmp_malloc
!dec$ attributes alias:'_KMP_ALIGNED_MALLOC'::kmp_aligned_malloc
!dec$ attributes alias:'_KMP_CALLOC'::kmp_calloc
!dec$ attributes alias:'_KMP_REALLOC'::kmp_realloc
!dec$ attributes alias:'_KMP_FREE'::kmp_free
!dec$ attributes alias:'_KMP_SET_WARNINGS_ON'::kmp_set_warnings_on
!dec$ attributes alias:'_KMP_SET_WARNINGS_OFF'::kmp_set_warnings_off
!dec$ attributes alias:'_KMP_GET_CANCELLATION_STATUS' :: kmp_get_cancellation_status
!dec$ endif
!dec$ endif
!dec$ if defined(__linux)
!***
!*** The Linux* OS entry points are in lowercase, with an underscore appended.
!***
!dec$ attributes alias:'omp_set_num_threads_'::omp_set_num_threads
!dec$ attributes alias:'omp_set_dynamic_'::omp_set_dynamic
!dec$ attributes alias:'omp_set_nested_'::omp_set_nested
!dec$ attributes alias:'omp_get_num_threads_'::omp_get_num_threads
!dec$ attributes alias:'omp_get_max_threads_'::omp_get_max_threads
!dec$ attributes alias:'omp_get_thread_num_'::omp_get_thread_num
!dec$ attributes alias:'omp_get_num_procs_'::omp_get_num_procs
!dec$ attributes alias:'omp_in_parallel_'::omp_in_parallel
!dec$ attributes alias:'omp_get_dynamic_'::omp_get_dynamic
!dec$ attributes alias:'omp_get_nested_'::omp_get_nested
!dec$ attributes alias:'omp_get_thread_limit_'::omp_get_thread_limit
!dec$ attributes alias:'omp_set_max_active_levels_'::omp_set_max_active_levels
!dec$ attributes alias:'omp_get_max_active_levels_'::omp_get_max_active_levels
!dec$ attributes alias:'omp_get_level_'::omp_get_level
!dec$ attributes alias:'omp_get_active_level_'::omp_get_active_level
!dec$ attributes alias:'omp_get_ancestor_thread_num_'::omp_get_ancestor_thread_num
!dec$ attributes alias:'omp_get_team_size_'::omp_get_team_size
!dec$ attributes alias:'omp_set_schedule_'::omp_set_schedule
!dec$ attributes alias:'omp_get_schedule_'::omp_get_schedule
!dec$ attributes alias:'omp_get_proc_bind_' :: omp_get_proc_bind
!dec$ attributes alias:'omp_get_wtime_'::omp_get_wtime
!dec$ attributes alias:'omp_get_wtick_'::omp_get_wtick
!dec$ attributes alias:'omp_get_default_device_'::omp_get_default_device
!dec$ attributes alias:'omp_set_default_device_'::omp_set_default_device
!dec$ attributes alias:'omp_get_num_devices_'::omp_get_num_devices
!dec$ attributes alias:'omp_get_num_teams_'::omp_get_num_teams
!dec$ attributes alias:'omp_get_team_num_'::omp_get_team_num
!dec$ attributes alias:'omp_get_cancellation_'::omp_get_cancellation
!dec$ attributes alias:'omp_is_initial_device_'::omp_is_initial_device
!dec$ attributes alias:'omp_get_initial_device_'::omp_get_initial_device
!dec$ attributes alias:'omp_get_max_task_priority_'::omp_get_max_task_priority
!dec$ attributes alias:'omp_init_lock_'::omp_init_lock
!dec$ attributes alias:'omp_init_lock_with_hint_'::omp_init_lock_with_hint
!dec$ attributes alias:'omp_destroy_lock_'::omp_destroy_lock
!dec$ attributes alias:'omp_set_lock_'::omp_set_lock
!dec$ attributes alias:'omp_unset_lock_'::omp_unset_lock
!dec$ attributes alias:'omp_test_lock_'::omp_test_lock
!dec$ attributes alias:'omp_init_nest_lock_'::omp_init_nest_lock
!dec$ attributes alias:'omp_init_nest_lock_with_hint_'::omp_init_nest_lock_with_hint
!dec$ attributes alias:'omp_destroy_nest_lock_'::omp_destroy_nest_lock
!dec$ attributes alias:'omp_set_nest_lock_'::omp_set_nest_lock
!dec$ attributes alias:'omp_unset_nest_lock_'::omp_unset_nest_lock
!dec$ attributes alias:'omp_test_nest_lock_'::omp_test_nest_lock
!dec$ attributes alias:'kmp_set_stacksize_'::kmp_set_stacksize
!dec$ attributes alias:'kmp_set_stacksize_s_'::kmp_set_stacksize_s
!dec$ attributes alias:'kmp_set_blocktime_'::kmp_set_blocktime
!dec$ attributes alias:'kmp_set_library_serial_'::kmp_set_library_serial
!dec$ attributes alias:'kmp_set_library_turnaround_'::kmp_set_library_turnaround
!dec$ attributes alias:'kmp_set_library_throughput_'::kmp_set_library_throughput
!dec$ attributes alias:'kmp_set_library_'::kmp_set_library
!dec$ attributes alias:'kmp_get_stacksize_'::kmp_get_stacksize
!dec$ attributes alias:'kmp_get_stacksize_s_'::kmp_get_stacksize_s
!dec$ attributes alias:'kmp_get_blocktime_'::kmp_get_blocktime
!dec$ attributes alias:'kmp_get_library_'::kmp_get_library
!dec$ attributes alias:'kmp_set_affinity_'::kmp_set_affinity
!dec$ attributes alias:'kmp_get_affinity_'::kmp_get_affinity
!dec$ attributes alias:'kmp_get_affinity_max_proc_'::kmp_get_affinity_max_proc
!dec$ attributes alias:'kmp_create_affinity_mask_'::kmp_create_affinity_mask
!dec$ attributes alias:'kmp_destroy_affinity_mask_'::kmp_destroy_affinity_mask
!dec$ attributes alias:'kmp_set_affinity_mask_proc_'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'kmp_unset_affinity_mask_proc_'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'kmp_get_affinity_mask_proc_'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'kmp_malloc_'::kmp_malloc
!dec$ attributes alias:'kmp_aligned_malloc_'::kmp_aligned_malloc
!dec$ attributes alias:'kmp_calloc_'::kmp_calloc
!dec$ attributes alias:'kmp_realloc_'::kmp_realloc
!dec$ attributes alias:'kmp_free_'::kmp_free
!dec$ attributes alias:'kmp_set_warnings_on_'::kmp_set_warnings_on
!dec$ attributes alias:'kmp_set_warnings_off_'::kmp_set_warnings_off
!dec$ attributes alias:'kmp_get_cancellation_status_'::kmp_get_cancellation_status
!dec$ endif
!dec$ if defined(__APPLE__)
!***
!*** The Mac entry points are in lowercase, with both an underscore
!*** appended and an underscore prepended.
!***
!dec$ attributes alias:'_omp_set_num_threads_'::omp_set_num_threads
!dec$ attributes alias:'_omp_set_dynamic_'::omp_set_dynamic
!dec$ attributes alias:'_omp_set_nested_'::omp_set_nested
!dec$ attributes alias:'_omp_get_num_threads_'::omp_get_num_threads
!dec$ attributes alias:'_omp_get_max_threads_'::omp_get_max_threads
!dec$ attributes alias:'_omp_get_thread_num_'::omp_get_thread_num
!dec$ attributes alias:'_omp_get_num_procs_'::omp_get_num_procs
!dec$ attributes alias:'_omp_in_parallel_'::omp_in_parallel
!dec$ attributes alias:'_omp_get_dynamic_'::omp_get_dynamic
!dec$ attributes alias:'_omp_get_nested_'::omp_get_nested
!dec$ attributes alias:'_omp_get_thread_limit_'::omp_get_thread_limit
!dec$ attributes alias:'_omp_set_max_active_levels_'::omp_set_max_active_levels
!dec$ attributes alias:'_omp_get_max_active_levels_'::omp_get_max_active_levels
!dec$ attributes alias:'_omp_get_level_'::omp_get_level
!dec$ attributes alias:'_omp_get_active_level_'::omp_get_active_level
!dec$ attributes alias:'_omp_get_ancestor_thread_num_'::omp_get_ancestor_thread_num
!dec$ attributes alias:'_omp_get_team_size_'::omp_get_team_size
!dec$ attributes alias:'_omp_set_schedule_'::omp_set_schedule
!dec$ attributes alias:'_omp_get_schedule_'::omp_get_schedule
!dec$ attributes alias:'_omp_get_proc_bind_' :: omp_get_proc_bind
!dec$ attributes alias:'_omp_get_wtime_'::omp_get_wtime
!dec$ attributes alias:'_omp_get_wtick_'::omp_get_wtick
!dec$ attributes alias:'_omp_get_default_device_'::omp_get_default_device
!dec$ attributes alias:'_omp_set_default_device_'::omp_set_default_device
!dec$ attributes alias:'_omp_get_num_devices_'::omp_get_num_devices
!dec$ attributes alias:'_omp_get_num_teams_'::omp_get_num_teams
!dec$ attributes alias:'_omp_get_team_num_'::omp_get_team_num
!dec$ attributes alias:'_omp_get_cancellation_'::omp_get_cancellation
!dec$ attributes alias:'_omp_is_initial_device_'::omp_is_initial_device
!dec$ attributes alias:'_omp_get_initial_device_'::omp_get_initial_device
!dec$ attributes alias:'_omp_get_max_task_priority_'::omp_get_max_task_priority
!dec$ attributes alias:'_omp_init_lock_'::omp_init_lock
!dec$ attributes alias:'_omp_init_lock_with_hint_'::omp_init_lock_with_hint
!dec$ attributes alias:'_omp_destroy_lock_'::omp_destroy_lock
!dec$ attributes alias:'_omp_set_lock_'::omp_set_lock
!dec$ attributes alias:'_omp_unset_lock_'::omp_unset_lock
!dec$ attributes alias:'_omp_test_lock_'::omp_test_lock
!dec$ attributes alias:'_omp_init_nest_lock_'::omp_init_nest_lock
!dec$ attributes alias:'_omp_init_nest_lock_with_hint_'::omp_init_nest_lock_with_hint
!dec$ attributes alias:'_omp_destroy_nest_lock_'::omp_destroy_nest_lock
!dec$ attributes alias:'_omp_set_nest_lock_'::omp_set_nest_lock
!dec$ attributes alias:'_omp_unset_nest_lock_'::omp_unset_nest_lock
!dec$ attributes alias:'_omp_test_nest_lock_'::omp_test_nest_lock
!dec$ attributes alias:'_kmp_set_stacksize_'::kmp_set_stacksize
!dec$ attributes alias:'_kmp_set_stacksize_s_'::kmp_set_stacksize_s
!dec$ attributes alias:'_kmp_set_blocktime_'::kmp_set_blocktime
!dec$ attributes alias:'_kmp_set_library_serial_'::kmp_set_library_serial
!dec$ attributes alias:'_kmp_set_library_turnaround_'::kmp_set_library_turnaround
!dec$ attributes alias:'_kmp_set_library_throughput_'::kmp_set_library_throughput
!dec$ attributes alias:'_kmp_set_library_'::kmp_set_library
!dec$ attributes alias:'_kmp_get_stacksize_'::kmp_get_stacksize
!dec$ attributes alias:'_kmp_get_stacksize_s_'::kmp_get_stacksize_s
!dec$ attributes alias:'_kmp_get_blocktime_'::kmp_get_blocktime
!dec$ attributes alias:'_kmp_get_library_'::kmp_get_library
!dec$ attributes alias:'_kmp_set_affinity_'::kmp_set_affinity
!dec$ attributes alias:'_kmp_get_affinity_'::kmp_get_affinity
!dec$ attributes alias:'_kmp_get_affinity_max_proc_'::kmp_get_affinity_max_proc
!dec$ attributes alias:'_kmp_create_affinity_mask_'::kmp_create_affinity_mask
!dec$ attributes alias:'_kmp_destroy_affinity_mask_'::kmp_destroy_affinity_mask
!dec$ attributes alias:'_kmp_set_affinity_mask_proc_'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'_kmp_unset_affinity_mask_proc_'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'_kmp_get_affinity_mask_proc_'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'_kmp_malloc_'::kmp_malloc
!dec$ attributes alias:'_kmp_aligned_malloc_'::kmp_aligned_malloc
!dec$ attributes alias:'_kmp_calloc_'::kmp_calloc
!dec$ attributes alias:'_kmp_realloc_'::kmp_realloc
!dec$ attributes alias:'_kmp_free_'::kmp_free
!dec$ attributes alias:'_kmp_set_warnings_on_'::kmp_set_warnings_on
!dec$ attributes alias:'_kmp_set_warnings_off_'::kmp_set_warnings_off
!dec$ attributes alias:'_kmp_get_cancellation_status_'::kmp_get_cancellation_status
!dec$ endif
end module omp_lib


@@ -0,0 +1,524 @@
! include/45/omp_lib.f90.var
!
!//===----------------------------------------------------------------------===//
!//
!// The LLVM Compiler Infrastructure
!//
!// This file is dual licensed under the MIT and the University of Illinois Open
!// Source Licenses. See LICENSE.txt for details.
!//
!//===----------------------------------------------------------------------===//
!
module omp_lib_kinds
use, intrinsic :: iso_c_binding
integer, parameter :: omp_integer_kind = c_int
integer, parameter :: omp_logical_kind = 4
integer, parameter :: omp_real_kind = c_float
integer, parameter :: kmp_double_kind = c_double
integer, parameter :: omp_lock_kind = c_intptr_t
integer, parameter :: omp_nest_lock_kind = c_intptr_t
integer, parameter :: omp_sched_kind = omp_integer_kind
integer, parameter :: omp_proc_bind_kind = omp_integer_kind
integer, parameter :: kmp_pointer_kind = c_intptr_t
integer, parameter :: kmp_size_t_kind = c_size_t
integer, parameter :: kmp_affinity_mask_kind = c_intptr_t
integer, parameter :: kmp_cancel_kind = omp_integer_kind
integer, parameter :: omp_lock_hint_kind = omp_integer_kind
end module omp_lib_kinds
module omp_lib
use omp_lib_kinds
integer (kind=omp_integer_kind), parameter :: openmp_version = @LIBOMP_OMP_YEAR_MONTH@
integer (kind=omp_integer_kind), parameter :: kmp_version_major = @LIBOMP_VERSION_MAJOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_minor = @LIBOMP_VERSION_MINOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_build = @LIBOMP_VERSION_BUILD@
character(*) kmp_build_date
parameter( kmp_build_date = '@LIBOMP_BUILD_DATE@' )
integer(kind=omp_sched_kind), parameter :: omp_sched_static = 1
integer(kind=omp_sched_kind), parameter :: omp_sched_dynamic = 2
integer(kind=omp_sched_kind), parameter :: omp_sched_guided = 3
integer(kind=omp_sched_kind), parameter :: omp_sched_auto = 4
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_false = 0
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_true = 1
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_master = 2
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_close = 3
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_spread = 4
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_parallel = 1
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_loop = 2
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_sections = 3
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_taskgroup = 4
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_none = 0
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_uncontended = 1
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_contended = 2
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_nonspeculative = 4
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_speculative = 8
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_hle = 65536
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_rtm = 131072
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_adaptive = 262144
interface
! ***
! *** omp_* entry points
! ***
subroutine omp_set_num_threads(num_threads) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: num_threads
end subroutine omp_set_num_threads
subroutine omp_set_dynamic(dynamic_threads) bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind), value :: dynamic_threads
end subroutine omp_set_dynamic
subroutine omp_set_nested(nested) bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind), value :: nested
end subroutine omp_set_nested
function omp_get_num_threads() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_threads
end function omp_get_num_threads
function omp_get_max_threads() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_threads
end function omp_get_max_threads
function omp_get_thread_num() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_num
end function omp_get_thread_num
function omp_get_num_procs() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_procs
end function omp_get_num_procs
function omp_in_parallel() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_parallel
end function omp_in_parallel
function omp_in_final() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_final
end function omp_in_final
function omp_get_dynamic() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_dynamic
end function omp_get_dynamic
function omp_get_nested() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_nested
end function omp_get_nested
function omp_get_thread_limit() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_limit
end function omp_get_thread_limit
subroutine omp_set_max_active_levels(max_levels) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: max_levels
end subroutine omp_set_max_active_levels
function omp_get_max_active_levels() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_active_levels
end function omp_get_max_active_levels
function omp_get_level() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_level
end function omp_get_level
function omp_get_active_level() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_active_level
end function omp_get_active_level
function omp_get_ancestor_thread_num(level) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_ancestor_thread_num
integer (kind=omp_integer_kind), value :: level
end function omp_get_ancestor_thread_num
function omp_get_team_size(level) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_team_size
integer (kind=omp_integer_kind), value :: level
end function omp_get_team_size
subroutine omp_set_schedule(kind, chunk_size) bind(c)
use omp_lib_kinds
integer (kind=omp_sched_kind), value :: kind
integer (kind=omp_integer_kind), value :: chunk_size
end subroutine omp_set_schedule
subroutine omp_get_schedule(kind, chunk_size) bind(c)
use omp_lib_kinds
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) chunk_size
end subroutine omp_get_schedule
function omp_get_proc_bind() bind(c)
use omp_lib_kinds
integer (kind=omp_proc_bind_kind) omp_get_proc_bind
end function omp_get_proc_bind
function omp_get_num_places() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_places
end function omp_get_num_places
function omp_get_place_num_procs(place_num) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: place_num
integer (kind=omp_integer_kind) omp_get_place_num_procs
end function omp_get_place_num_procs
subroutine omp_get_place_proc_ids(place_num, ids) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: place_num
integer (kind=omp_integer_kind) ids(*)
end subroutine omp_get_place_proc_ids
function omp_get_place_num() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_place_num
end function omp_get_place_num
function omp_get_partition_num_places() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_partition_num_places
end function omp_get_partition_num_places
subroutine omp_get_partition_place_nums(place_nums) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) place_nums(*)
end subroutine omp_get_partition_place_nums
function omp_get_wtime() bind(c)
use omp_lib_kinds
real (kind=kmp_double_kind) omp_get_wtime
end function omp_get_wtime
function omp_get_wtick() bind(c)
use omp_lib_kinds
real (kind=kmp_double_kind) omp_get_wtick
end function omp_get_wtick
function omp_get_default_device() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_default_device
end function omp_get_default_device
subroutine omp_set_default_device(device_num) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: device_num
end subroutine omp_set_default_device
function omp_get_num_devices() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_devices
end function omp_get_num_devices
function omp_get_num_teams() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_teams
end function omp_get_num_teams
function omp_get_team_num() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_team_num
end function omp_get_team_num
function omp_get_cancellation() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_cancellation
end function omp_get_cancellation
function omp_is_initial_device() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_is_initial_device
end function omp_is_initial_device
function omp_get_initial_device() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_initial_device
end function omp_get_initial_device
subroutine omp_init_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_init_lock
subroutine omp_destroy_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_destroy_lock
subroutine omp_set_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_set_lock
subroutine omp_unset_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_unset_lock
function omp_test_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_lock
!DIR$ ENDIF
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_test_lock
integer (kind=omp_lock_kind) svar
end function omp_test_lock
subroutine omp_init_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_init_nest_lock
subroutine omp_destroy_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_destroy_nest_lock
subroutine omp_set_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_set_nest_lock
subroutine omp_unset_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_unset_nest_lock
function omp_test_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_test_nest_lock
integer (kind=omp_nest_lock_kind) nvar
end function omp_test_nest_lock
function omp_get_max_task_priority() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_task_priority
end function omp_get_max_task_priority
! ***
! *** kmp_* entry points
! ***
subroutine kmp_set_stacksize(size) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: size
end subroutine kmp_set_stacksize
subroutine kmp_set_stacksize_s(size) bind(c)
use omp_lib_kinds
integer (kind=kmp_size_t_kind), value :: size
end subroutine kmp_set_stacksize_s
subroutine kmp_set_blocktime(msec) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: msec
end subroutine kmp_set_blocktime
subroutine kmp_set_library_serial() bind(c)
end subroutine kmp_set_library_serial
subroutine kmp_set_library_turnaround() bind(c)
end subroutine kmp_set_library_turnaround
subroutine kmp_set_library_throughput() bind(c)
end subroutine kmp_set_library_throughput
subroutine kmp_set_library(libnum) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: libnum
end subroutine kmp_set_library
subroutine kmp_set_defaults(string) bind(c)
use, intrinsic :: iso_c_binding
character (kind=c_char) :: string(*)
end subroutine kmp_set_defaults
function kmp_get_stacksize() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_stacksize
end function kmp_get_stacksize
function kmp_get_stacksize_s() bind(c)
use omp_lib_kinds
integer (kind=kmp_size_t_kind) kmp_get_stacksize_s
end function kmp_get_stacksize_s
function kmp_get_blocktime() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_blocktime
end function kmp_get_blocktime
function kmp_get_library() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_library
end function kmp_get_library
subroutine kmp_set_disp_num_buffers(num) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: num
end subroutine kmp_set_disp_num_buffers
function kmp_set_affinity(mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity
function kmp_get_affinity(mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity
function kmp_get_affinity_max_proc() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_max_proc
end function kmp_get_affinity_max_proc
subroutine kmp_create_affinity_mask(mask) bind(c)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_create_affinity_mask
subroutine kmp_destroy_affinity_mask(mask) bind(c)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_destroy_affinity_mask
function kmp_set_affinity_mask_proc(proc, mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity_mask_proc
function kmp_unset_affinity_mask_proc(proc, mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_unset_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_unset_affinity_mask_proc
function kmp_get_affinity_mask_proc(proc, mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity_mask_proc
function kmp_malloc(size) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_malloc
integer (kind=kmp_size_t_kind), value :: size
end function kmp_malloc
function kmp_aligned_malloc(size, alignment) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_aligned_malloc
integer (kind=kmp_size_t_kind), value :: size
integer (kind=kmp_size_t_kind), value :: alignment
end function kmp_aligned_malloc
function kmp_calloc(nelem, elsize) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_calloc
integer (kind=kmp_size_t_kind), value :: nelem
integer (kind=kmp_size_t_kind), value :: elsize
end function kmp_calloc
function kmp_realloc(ptr, size) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_realloc
integer (kind=kmp_pointer_kind), value :: ptr
integer (kind=kmp_size_t_kind), value :: size
end function kmp_realloc
subroutine kmp_free(ptr) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind), value :: ptr
end subroutine kmp_free
subroutine kmp_set_warnings_on() bind(c)
end subroutine kmp_set_warnings_on
subroutine kmp_set_warnings_off() bind(c)
end subroutine kmp_set_warnings_off
function kmp_get_cancellation_status(cancelkind) bind(c)
use omp_lib_kinds
integer (kind=kmp_cancel_kind), value :: cancelkind
logical (kind=omp_logical_kind) kmp_get_cancellation_status
end function kmp_get_cancellation_status
subroutine omp_init_lock_with_hint(svar, hint) bind(c)
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
integer (kind=omp_lock_hint_kind), value :: hint
end subroutine omp_init_lock_with_hint
subroutine omp_init_nest_lock_with_hint(nvar, hint) bind(c)
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
integer (kind=omp_lock_hint_kind), value :: hint
end subroutine omp_init_nest_lock_with_hint
end interface
end module omp_lib


@@ -0,0 +1,645 @@
! include/45/omp_lib.h.var
!
!//===----------------------------------------------------------------------===//
!//
!// The LLVM Compiler Infrastructure
!//
!// This file is dual licensed under the MIT and the University of Illinois Open
!// Source Licenses. See LICENSE.txt for details.
!//
!//===----------------------------------------------------------------------===//
!
!***
!*** Some of the directives for the following routine extend past column 72,
!*** so process this file in 132-column mode.
!***
!DIR$ fixedformlinesize:132
integer, parameter :: omp_integer_kind = 4
integer, parameter :: omp_logical_kind = 4
integer, parameter :: omp_real_kind = 4
integer, parameter :: omp_lock_kind = int_ptr_kind()
integer, parameter :: omp_nest_lock_kind = int_ptr_kind()
integer, parameter :: omp_sched_kind = omp_integer_kind
integer, parameter :: omp_proc_bind_kind = omp_integer_kind
integer, parameter :: kmp_pointer_kind = int_ptr_kind()
integer, parameter :: kmp_size_t_kind = int_ptr_kind()
integer, parameter :: kmp_affinity_mask_kind = int_ptr_kind()
integer, parameter :: omp_lock_hint_kind = omp_integer_kind
integer (kind=omp_integer_kind), parameter :: openmp_version = @LIBOMP_OMP_YEAR_MONTH@
integer (kind=omp_integer_kind), parameter :: kmp_version_major = @LIBOMP_VERSION_MAJOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_minor = @LIBOMP_VERSION_MINOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_build = @LIBOMP_VERSION_BUILD@
character(*) kmp_build_date
parameter( kmp_build_date = '@LIBOMP_BUILD_DATE@' )
integer(kind=omp_sched_kind), parameter :: omp_sched_static = 1
integer(kind=omp_sched_kind), parameter :: omp_sched_dynamic = 2
integer(kind=omp_sched_kind), parameter :: omp_sched_guided = 3
integer(kind=omp_sched_kind), parameter :: omp_sched_auto = 4
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_false = 0
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_true = 1
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_master = 2
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_close = 3
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_spread = 4
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_none = 0
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_uncontended = 1
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_contended = 2
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_nonspeculative = 4
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_speculative = 8
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_hle = 65536
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_rtm = 131072
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_adaptive = 262144
interface
! ***
! *** omp_* entry points
! ***
subroutine omp_set_num_threads(num_threads) bind(c)
import
integer (kind=omp_integer_kind), value :: num_threads
end subroutine omp_set_num_threads
subroutine omp_set_dynamic(dynamic_threads) bind(c)
import
logical (kind=omp_logical_kind), value :: dynamic_threads
end subroutine omp_set_dynamic
subroutine omp_set_nested(nested) bind(c)
import
logical (kind=omp_logical_kind), value :: nested
end subroutine omp_set_nested
function omp_get_num_threads() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_threads
end function omp_get_num_threads
function omp_get_max_threads() bind(c)
import
integer (kind=omp_integer_kind) omp_get_max_threads
end function omp_get_max_threads
function omp_get_thread_num() bind(c)
import
integer (kind=omp_integer_kind) omp_get_thread_num
end function omp_get_thread_num
function omp_get_num_procs() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_procs
end function omp_get_num_procs
function omp_in_parallel() bind(c)
import
logical (kind=omp_logical_kind) omp_in_parallel
end function omp_in_parallel
function omp_in_final() bind(c)
import
logical (kind=omp_logical_kind) omp_in_final
end function omp_in_final
function omp_get_dynamic() bind(c)
import
logical (kind=omp_logical_kind) omp_get_dynamic
end function omp_get_dynamic
function omp_get_nested() bind(c)
import
logical (kind=omp_logical_kind) omp_get_nested
end function omp_get_nested
function omp_get_thread_limit() bind(c)
import
integer (kind=omp_integer_kind) omp_get_thread_limit
end function omp_get_thread_limit
subroutine omp_set_max_active_levels(max_levels) bind(c)
import
integer (kind=omp_integer_kind), value :: max_levels
end subroutine omp_set_max_active_levels
function omp_get_max_active_levels() bind(c)
import
integer (kind=omp_integer_kind) omp_get_max_active_levels
end function omp_get_max_active_levels
function omp_get_level() bind(c)
import
integer (kind=omp_integer_kind) omp_get_level
end function omp_get_level
function omp_get_active_level() bind(c)
import
integer (kind=omp_integer_kind) omp_get_active_level
end function omp_get_active_level
function omp_get_ancestor_thread_num(level) bind(c)
import
integer (kind=omp_integer_kind) omp_get_ancestor_thread_num
integer (kind=omp_integer_kind), value :: level
end function omp_get_ancestor_thread_num
function omp_get_team_size(level) bind(c)
import
integer (kind=omp_integer_kind) omp_get_team_size
integer (kind=omp_integer_kind), value :: level
end function omp_get_team_size
subroutine omp_set_schedule(kind, chunk_size) bind(c)
import
integer (kind=omp_sched_kind), value :: kind
integer (kind=omp_integer_kind), value :: chunk_size
end subroutine omp_set_schedule
subroutine omp_get_schedule(kind, chunk_size) bind(c)
import
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) chunk_size
end subroutine omp_get_schedule
function omp_get_proc_bind() bind(c)
import
integer (kind=omp_proc_bind_kind) omp_get_proc_bind
end function omp_get_proc_bind
function omp_get_num_places() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_places
end function omp_get_num_places
function omp_get_place_num_procs(place_num) bind(c)
import
integer (kind=omp_integer_kind), value :: place_num
integer (kind=omp_integer_kind) omp_get_place_num_procs
end function omp_get_place_num_procs
subroutine omp_get_place_proc_ids(place_num, ids) bind(c)
import
integer (kind=omp_integer_kind), value :: place_num
integer (kind=omp_integer_kind) ids(*)
end subroutine omp_get_place_proc_ids
function omp_get_place_num() bind(c)
import
integer (kind=omp_integer_kind) omp_get_place_num
end function omp_get_place_num
function omp_get_partition_num_places() bind(c)
import
integer (kind=omp_integer_kind) omp_get_partition_num_places
end function omp_get_partition_num_places
subroutine omp_get_partition_place_nums(place_nums) bind(c)
import
integer (kind=omp_integer_kind) place_nums(*)
end subroutine omp_get_partition_place_nums
function omp_get_wtime() bind(c)
double precision omp_get_wtime
end function omp_get_wtime
function omp_get_wtick() bind(c)
double precision omp_get_wtick
end function omp_get_wtick
function omp_get_default_device() bind(c)
import
integer (kind=omp_integer_kind) omp_get_default_device
end function omp_get_default_device
subroutine omp_set_default_device(device_num) bind(c)
import
integer (kind=omp_integer_kind), value :: device_num
end subroutine omp_set_default_device
function omp_get_num_devices() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_devices
end function omp_get_num_devices
function omp_get_num_teams() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_teams
end function omp_get_num_teams
function omp_get_team_num() bind(c)
import
integer (kind=omp_integer_kind) omp_get_team_num
end function omp_get_team_num
function omp_is_initial_device() bind(c)
import
logical (kind=omp_logical_kind) omp_is_initial_device
end function omp_is_initial_device
function omp_get_initial_device() bind(c)
import
integer (kind=omp_integer_kind) omp_get_initial_device
end function omp_get_initial_device
subroutine omp_init_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_lock
!DIR$ ENDIF
import
integer (kind=omp_lock_kind) svar
end subroutine omp_init_lock
subroutine omp_destroy_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_lock
!DIR$ ENDIF
import
integer (kind=omp_lock_kind) svar
end subroutine omp_destroy_lock
subroutine omp_set_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_lock
!DIR$ ENDIF
import
integer (kind=omp_lock_kind) svar
end subroutine omp_set_lock
subroutine omp_unset_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_lock
!DIR$ ENDIF
import
integer (kind=omp_lock_kind) svar
end subroutine omp_unset_lock
function omp_test_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_lock
!DIR$ ENDIF
import
logical (kind=omp_logical_kind) omp_test_lock
integer (kind=omp_lock_kind) svar
end function omp_test_lock
subroutine omp_init_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_init_nest_lock
subroutine omp_destroy_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_destroy_nest_lock
subroutine omp_set_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_set_nest_lock
subroutine omp_unset_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_unset_nest_lock
function omp_test_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_integer_kind) omp_test_nest_lock
integer (kind=omp_nest_lock_kind) nvar
end function omp_test_nest_lock
function omp_get_max_task_priority() bind(c)
import
integer (kind=omp_integer_kind) omp_get_max_task_priority
end function omp_get_max_task_priority
! ***
! *** kmp_* entry points
! ***
subroutine kmp_set_stacksize(size) bind(c)
import
integer (kind=omp_integer_kind), value :: size
end subroutine kmp_set_stacksize
subroutine kmp_set_stacksize_s(size) bind(c)
import
integer (kind=kmp_size_t_kind), value :: size
end subroutine kmp_set_stacksize_s
subroutine kmp_set_blocktime(msec) bind(c)
import
integer (kind=omp_integer_kind), value :: msec
end subroutine kmp_set_blocktime
subroutine kmp_set_library_serial() bind(c)
end subroutine kmp_set_library_serial
subroutine kmp_set_library_turnaround() bind(c)
end subroutine kmp_set_library_turnaround
subroutine kmp_set_library_throughput() bind(c)
end subroutine kmp_set_library_throughput
subroutine kmp_set_library(libnum) bind(c)
import
integer (kind=omp_integer_kind), value :: libnum
end subroutine kmp_set_library
subroutine kmp_set_defaults(string) bind(c)
character string(*)
end subroutine kmp_set_defaults
function kmp_get_stacksize() bind(c)
import
integer (kind=omp_integer_kind) kmp_get_stacksize
end function kmp_get_stacksize
function kmp_get_stacksize_s() bind(c)
import
integer (kind=kmp_size_t_kind) kmp_get_stacksize_s
end function kmp_get_stacksize_s
function kmp_get_blocktime() bind(c)
import
integer (kind=omp_integer_kind) kmp_get_blocktime
end function kmp_get_blocktime
function kmp_get_library() bind(c)
import
integer (kind=omp_integer_kind) kmp_get_library
end function kmp_get_library
subroutine kmp_set_disp_num_buffers(num) bind(c)
import
integer (kind=omp_integer_kind), value :: num
end subroutine kmp_set_disp_num_buffers
function kmp_set_affinity(mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_set_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity
function kmp_get_affinity(mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_get_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity
function kmp_get_affinity_max_proc() bind(c)
import
integer (kind=omp_integer_kind) kmp_get_affinity_max_proc
end function kmp_get_affinity_max_proc
subroutine kmp_create_affinity_mask(mask) bind(c)
import
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_create_affinity_mask
subroutine kmp_destroy_affinity_mask(mask) bind(c)
import
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_destroy_affinity_mask
function kmp_set_affinity_mask_proc(proc, mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_set_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity_mask_proc
function kmp_unset_affinity_mask_proc(proc, mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_unset_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_unset_affinity_mask_proc
function kmp_get_affinity_mask_proc(proc, mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_get_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity_mask_proc
function kmp_malloc(size) bind(c)
import
integer (kind=kmp_pointer_kind) kmp_malloc
integer (kind=kmp_size_t_kind), value :: size
end function kmp_malloc
function kmp_aligned_malloc(size, alignment) bind(c)
import
integer (kind=kmp_pointer_kind) kmp_aligned_malloc
integer (kind=kmp_size_t_kind), value :: size
integer (kind=kmp_size_t_kind), value :: alignment
end function kmp_aligned_malloc
function kmp_calloc(nelem, elsize) bind(c)
import
integer (kind=kmp_pointer_kind) kmp_calloc
integer (kind=kmp_size_t_kind), value :: nelem
integer (kind=kmp_size_t_kind), value :: elsize
end function kmp_calloc
function kmp_realloc(ptr, size) bind(c)
import
integer (kind=kmp_pointer_kind) kmp_realloc
integer (kind=kmp_pointer_kind), value :: ptr
integer (kind=kmp_size_t_kind), value :: size
end function kmp_realloc
subroutine kmp_free(ptr) bind(c)
import
integer (kind=kmp_pointer_kind), value :: ptr
end subroutine kmp_free
subroutine kmp_set_warnings_on() bind(c)
end subroutine kmp_set_warnings_on
subroutine kmp_set_warnings_off() bind(c)
end subroutine kmp_set_warnings_off
subroutine omp_init_lock_with_hint(svar, hint) bind(c)
import
integer (kind=omp_lock_kind) svar
integer (kind=omp_lock_hint_kind), value :: hint
end subroutine omp_init_lock_with_hint
subroutine omp_init_nest_lock_with_hint(nvar, hint) bind(c)
import
integer (kind=omp_nest_lock_kind) nvar
integer (kind=omp_lock_hint_kind), value :: hint
end subroutine omp_init_nest_lock_with_hint
end interface
!DIR$ IF DEFINED (__INTEL_OFFLOAD)
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_num_threads
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_dynamic
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_nested
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_num_threads
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_max_threads
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_thread_num
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_num_procs
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_in_parallel
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_in_final
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_dynamic
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_nested
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_thread_limit
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_max_active_levels
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_max_active_levels
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_level
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_active_level
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_ancestor_thread_num
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_team_size
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_schedule
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_schedule
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_proc_bind
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_wtime
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_wtick
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_default_device
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_default_device
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_is_initial_device
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_initial_device
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_num_devices
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_num_teams
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_team_num
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_init_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_destroy_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_unset_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_test_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_init_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_destroy_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_unset_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_test_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_max_task_priority
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_stacksize
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_stacksize_s
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_blocktime
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_library_serial
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_library_turnaround
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_library_throughput
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_library
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_defaults
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_stacksize
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_stacksize_s
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_blocktime
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_library
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_disp_num_buffers
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_affinity
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_affinity
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_affinity_max_proc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_create_affinity_mask
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_destroy_affinity_mask
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_affinity_mask_proc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_unset_affinity_mask_proc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_affinity_mask_proc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_malloc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_aligned_malloc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_calloc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_realloc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_free
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_warnings_on
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_warnings_off
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_init_lock_with_hint
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_init_nest_lock_with_hint
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!$omp declare target(omp_set_num_threads )
!$omp declare target(omp_set_dynamic )
!$omp declare target(omp_set_nested )
!$omp declare target(omp_get_num_threads )
!$omp declare target(omp_get_max_threads )
!$omp declare target(omp_get_thread_num )
!$omp declare target(omp_get_num_procs )
!$omp declare target(omp_in_parallel )
!$omp declare target(omp_in_final )
!$omp declare target(omp_get_dynamic )
!$omp declare target(omp_get_nested )
!$omp declare target(omp_get_thread_limit )
!$omp declare target(omp_set_max_active_levels )
!$omp declare target(omp_get_max_active_levels )
!$omp declare target(omp_get_level )
!$omp declare target(omp_get_active_level )
!$omp declare target(omp_get_ancestor_thread_num )
!$omp declare target(omp_get_team_size )
!$omp declare target(omp_set_schedule )
!$omp declare target(omp_get_schedule )
!$omp declare target(omp_get_proc_bind )
!$omp declare target(omp_get_wtime )
!$omp declare target(omp_get_wtick )
!$omp declare target(omp_get_default_device )
!$omp declare target(omp_set_default_device )
!$omp declare target(omp_is_initial_device )
!$omp declare target(omp_get_initial_device )
!$omp declare target(omp_get_num_devices )
!$omp declare target(omp_get_num_teams )
!$omp declare target(omp_get_team_num )
!$omp declare target(omp_init_lock )
!$omp declare target(omp_destroy_lock )
!$omp declare target(omp_set_lock )
!$omp declare target(omp_unset_lock )
!$omp declare target(omp_test_lock )
!$omp declare target(omp_init_nest_lock )
!$omp declare target(omp_destroy_nest_lock )
!$omp declare target(omp_set_nest_lock )
!$omp declare target(omp_unset_nest_lock )
!$omp declare target(omp_test_nest_lock )
!$omp declare target(omp_get_max_task_priority )
!$omp declare target(kmp_set_stacksize )
!$omp declare target(kmp_set_stacksize_s )
!$omp declare target(kmp_set_blocktime )
!$omp declare target(kmp_set_library_serial )
!$omp declare target(kmp_set_library_turnaround )
!$omp declare target(kmp_set_library_throughput )
!$omp declare target(kmp_set_library )
!$omp declare target(kmp_set_defaults )
!$omp declare target(kmp_get_stacksize )
!$omp declare target(kmp_get_stacksize_s )
!$omp declare target(kmp_get_blocktime )
!$omp declare target(kmp_get_library )
!$omp declare target(kmp_set_disp_num_buffers )
!$omp declare target(kmp_set_affinity )
!$omp declare target(kmp_get_affinity )
!$omp declare target(kmp_get_affinity_max_proc )
!$omp declare target(kmp_create_affinity_mask )
!$omp declare target(kmp_destroy_affinity_mask )
!$omp declare target(kmp_set_affinity_mask_proc )
!$omp declare target(kmp_unset_affinity_mask_proc )
!$omp declare target(kmp_get_affinity_mask_proc )
!$omp declare target(kmp_malloc )
!$omp declare target(kmp_aligned_malloc )
!$omp declare target(kmp_calloc )
!$omp declare target(kmp_realloc )
!$omp declare target(kmp_free )
!$omp declare target(kmp_set_warnings_on )
!$omp declare target(kmp_set_warnings_off )
!$omp declare target(omp_init_lock_with_hint )
!$omp declare target(omp_init_nest_lock_with_hint )
!DIR$ ENDIF
!DIR$ ENDIF

File diff suppressed because it is too large


@@ -0,0 +1,265 @@
/*
* include/50/omp.h.var
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef __OMP_H
# define __OMP_H
# define KMP_VERSION_MAJOR @LIBOMP_VERSION_MAJOR@
# define KMP_VERSION_MINOR @LIBOMP_VERSION_MINOR@
# define KMP_VERSION_BUILD @LIBOMP_VERSION_BUILD@
# define KMP_BUILD_DATE "@LIBOMP_BUILD_DATE@"
# ifdef __cplusplus
extern "C" {
# endif
# define omp_set_affinity_format ompc_set_affinity_format
# define omp_get_affinity_format ompc_get_affinity_format
# define omp_display_affinity ompc_display_affinity
# define omp_capture_affinity ompc_capture_affinity
# if defined(_WIN32)
# define __KAI_KMPC_CONVENTION __cdecl
# ifndef __KMP_IMP
# define __KMP_IMP __declspec(dllimport)
# endif
# else
# define __KAI_KMPC_CONVENTION
# ifndef __KMP_IMP
# define __KMP_IMP
# endif
# endif
/* schedule kind constants */
typedef enum omp_sched_t {
omp_sched_static = 1,
omp_sched_dynamic = 2,
omp_sched_guided = 3,
omp_sched_auto = 4
} omp_sched_t;
/* set API functions */
extern void __KAI_KMPC_CONVENTION omp_set_num_threads (int);
extern void __KAI_KMPC_CONVENTION omp_set_dynamic (int);
extern void __KAI_KMPC_CONVENTION omp_set_nested (int);
extern void __KAI_KMPC_CONVENTION omp_set_max_active_levels (int);
extern void __KAI_KMPC_CONVENTION omp_set_schedule (omp_sched_t, int);
/* query API functions */
extern int __KAI_KMPC_CONVENTION omp_get_num_threads (void);
extern int __KAI_KMPC_CONVENTION omp_get_dynamic (void);
extern int __KAI_KMPC_CONVENTION omp_get_nested (void);
extern int __KAI_KMPC_CONVENTION omp_get_max_threads (void);
extern int __KAI_KMPC_CONVENTION omp_get_thread_num (void);
extern int __KAI_KMPC_CONVENTION omp_get_num_procs (void);
extern int __KAI_KMPC_CONVENTION omp_in_parallel (void);
extern int __KAI_KMPC_CONVENTION omp_in_final (void);
extern int __KAI_KMPC_CONVENTION omp_get_active_level (void);
extern int __KAI_KMPC_CONVENTION omp_get_level (void);
extern int __KAI_KMPC_CONVENTION omp_get_ancestor_thread_num (int);
extern int __KAI_KMPC_CONVENTION omp_get_team_size (int);
extern int __KAI_KMPC_CONVENTION omp_get_thread_limit (void);
extern int __KAI_KMPC_CONVENTION omp_get_max_active_levels (void);
extern void __KAI_KMPC_CONVENTION omp_get_schedule (omp_sched_t *, int *);
extern int __KAI_KMPC_CONVENTION omp_get_max_task_priority (void);
/* lock API functions */
typedef struct omp_lock_t {
void * _lk;
} omp_lock_t;
extern void __KAI_KMPC_CONVENTION omp_init_lock (omp_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_set_lock (omp_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_unset_lock (omp_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_destroy_lock (omp_lock_t *);
extern int __KAI_KMPC_CONVENTION omp_test_lock (omp_lock_t *);
/* nested lock API functions */
typedef struct omp_nest_lock_t {
void * _lk;
} omp_nest_lock_t;
extern void __KAI_KMPC_CONVENTION omp_init_nest_lock (omp_nest_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_set_nest_lock (omp_nest_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_unset_nest_lock (omp_nest_lock_t *);
extern void __KAI_KMPC_CONVENTION omp_destroy_nest_lock (omp_nest_lock_t *);
extern int __KAI_KMPC_CONVENTION omp_test_nest_lock (omp_nest_lock_t *);
/* OpenMP 5.0 synchronization hints */
typedef enum omp_sync_hint_t {
omp_sync_hint_none = 0,
omp_lock_hint_none = omp_sync_hint_none,
omp_sync_hint_uncontended = 1,
omp_lock_hint_uncontended = omp_sync_hint_uncontended,
omp_sync_hint_contended = (1<<1),
omp_lock_hint_contended = omp_sync_hint_contended,
omp_sync_hint_nonspeculative = (1<<2),
omp_lock_hint_nonspeculative = omp_sync_hint_nonspeculative,
omp_sync_hint_speculative = (1<<3),
omp_lock_hint_speculative = omp_sync_hint_speculative,
kmp_lock_hint_hle = (1<<16),
kmp_lock_hint_rtm = (1<<17),
kmp_lock_hint_adaptive = (1<<18)
} omp_sync_hint_t;
/* lock hint type for dynamic user lock */
typedef omp_sync_hint_t omp_lock_hint_t;
/* hinted lock initializers */
extern void __KAI_KMPC_CONVENTION omp_init_lock_with_hint(omp_lock_t *, omp_lock_hint_t);
extern void __KAI_KMPC_CONVENTION omp_init_nest_lock_with_hint(omp_nest_lock_t *, omp_lock_hint_t);
/* time API functions */
extern double __KAI_KMPC_CONVENTION omp_get_wtime (void);
extern double __KAI_KMPC_CONVENTION omp_get_wtick (void);
/* OpenMP 4.0 */
extern int __KAI_KMPC_CONVENTION omp_get_default_device (void);
extern void __KAI_KMPC_CONVENTION omp_set_default_device (int);
extern int __KAI_KMPC_CONVENTION omp_is_initial_device (void);
extern int __KAI_KMPC_CONVENTION omp_get_num_devices (void);
extern int __KAI_KMPC_CONVENTION omp_get_num_teams (void);
extern int __KAI_KMPC_CONVENTION omp_get_team_num (void);
extern int __KAI_KMPC_CONVENTION omp_get_cancellation (void);
# include <stdlib.h>
/* OpenMP 4.5 */
extern int __KAI_KMPC_CONVENTION omp_get_initial_device (void);
extern void* __KAI_KMPC_CONVENTION omp_target_alloc(size_t, int);
extern void __KAI_KMPC_CONVENTION omp_target_free(void *, int);
extern int __KAI_KMPC_CONVENTION omp_target_is_present(void *, int);
extern int __KAI_KMPC_CONVENTION omp_target_memcpy(void *, void *, size_t, size_t, size_t, int, int);
extern int __KAI_KMPC_CONVENTION omp_target_memcpy_rect(void *, void *, size_t, int, const size_t *,
const size_t *, const size_t *, const size_t *, const size_t *, int, int);
extern int __KAI_KMPC_CONVENTION omp_target_associate_ptr(void *, void *, size_t, size_t, int);
extern int __KAI_KMPC_CONVENTION omp_target_disassociate_ptr(void *, int);
/* OpenMP 5.0 */
extern int __KAI_KMPC_CONVENTION omp_get_device_num (void);
/* kmp API functions */
extern int __KAI_KMPC_CONVENTION kmp_get_stacksize (void);
extern void __KAI_KMPC_CONVENTION kmp_set_stacksize (int);
extern size_t __KAI_KMPC_CONVENTION kmp_get_stacksize_s (void);
extern void __KAI_KMPC_CONVENTION kmp_set_stacksize_s (size_t);
extern int __KAI_KMPC_CONVENTION kmp_get_blocktime (void);
extern int __KAI_KMPC_CONVENTION kmp_get_library (void);
extern void __KAI_KMPC_CONVENTION kmp_set_blocktime (int);
extern void __KAI_KMPC_CONVENTION kmp_set_library (int);
extern void __KAI_KMPC_CONVENTION kmp_set_library_serial (void);
extern void __KAI_KMPC_CONVENTION kmp_set_library_turnaround (void);
extern void __KAI_KMPC_CONVENTION kmp_set_library_throughput (void);
extern void __KAI_KMPC_CONVENTION kmp_set_defaults (char const *);
extern void __KAI_KMPC_CONVENTION kmp_set_disp_num_buffers (int);
/* Intel affinity API */
typedef void * kmp_affinity_mask_t;
extern int __KAI_KMPC_CONVENTION kmp_set_affinity (kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_get_affinity (kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_get_affinity_max_proc (void);
extern void __KAI_KMPC_CONVENTION kmp_create_affinity_mask (kmp_affinity_mask_t *);
extern void __KAI_KMPC_CONVENTION kmp_destroy_affinity_mask (kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_set_affinity_mask_proc (int, kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_unset_affinity_mask_proc (int, kmp_affinity_mask_t *);
extern int __KAI_KMPC_CONVENTION kmp_get_affinity_mask_proc (int, kmp_affinity_mask_t *);
/* OpenMP 4.0 affinity API */
typedef enum omp_proc_bind_t {
omp_proc_bind_false = 0,
omp_proc_bind_true = 1,
omp_proc_bind_master = 2,
omp_proc_bind_close = 3,
omp_proc_bind_spread = 4
} omp_proc_bind_t;
extern omp_proc_bind_t __KAI_KMPC_CONVENTION omp_get_proc_bind (void);
/* OpenMP 4.5 affinity API */
extern int __KAI_KMPC_CONVENTION omp_get_num_places (void);
extern int __KAI_KMPC_CONVENTION omp_get_place_num_procs (int);
extern void __KAI_KMPC_CONVENTION omp_get_place_proc_ids (int, int *);
extern int __KAI_KMPC_CONVENTION omp_get_place_num (void);
extern int __KAI_KMPC_CONVENTION omp_get_partition_num_places (void);
extern void __KAI_KMPC_CONVENTION omp_get_partition_place_nums (int *);
extern void * __KAI_KMPC_CONVENTION kmp_malloc (size_t);
extern void * __KAI_KMPC_CONVENTION kmp_aligned_malloc (size_t, size_t);
extern void * __KAI_KMPC_CONVENTION kmp_calloc (size_t, size_t);
extern void * __KAI_KMPC_CONVENTION kmp_realloc (void *, size_t);
extern void __KAI_KMPC_CONVENTION kmp_free (void *);
extern void __KAI_KMPC_CONVENTION kmp_set_warnings_on(void);
extern void __KAI_KMPC_CONVENTION kmp_set_warnings_off(void);
/* OpenMP 5.0 Tool Control */
typedef enum omp_control_tool_result_t {
omp_control_tool_notool = -2,
omp_control_tool_nocallback = -1,
omp_control_tool_success = 0,
omp_control_tool_ignored = 1
} omp_control_tool_result_t;
typedef enum omp_control_tool_t {
omp_control_tool_start = 1,
omp_control_tool_pause = 2,
omp_control_tool_flush = 3,
omp_control_tool_end = 4
} omp_control_tool_t;
extern int __KAI_KMPC_CONVENTION omp_control_tool(int, int, void*);
/* OpenMP 5.0 Memory Management */
typedef void *omp_allocator_t;
extern __KMP_IMP const omp_allocator_t *OMP_NULL_ALLOCATOR;
extern __KMP_IMP const omp_allocator_t *omp_default_mem_alloc;
extern __KMP_IMP const omp_allocator_t *omp_large_cap_mem_alloc;
extern __KMP_IMP const omp_allocator_t *omp_const_mem_alloc;
extern __KMP_IMP const omp_allocator_t *omp_high_bw_mem_alloc;
extern __KMP_IMP const omp_allocator_t *omp_low_lat_mem_alloc;
extern __KMP_IMP const omp_allocator_t *omp_cgroup_mem_alloc;
extern __KMP_IMP const omp_allocator_t *omp_pteam_mem_alloc;
extern __KMP_IMP const omp_allocator_t *omp_thread_mem_alloc;
extern void __KAI_KMPC_CONVENTION omp_set_default_allocator(const omp_allocator_t *);
extern const omp_allocator_t * __KAI_KMPC_CONVENTION omp_get_default_allocator(void);
#ifdef __cplusplus
extern void *__KAI_KMPC_CONVENTION omp_alloc(size_t size, const omp_allocator_t *allocator = OMP_NULL_ALLOCATOR);
extern void __KAI_KMPC_CONVENTION omp_free(void * ptr, const omp_allocator_t *allocator = OMP_NULL_ALLOCATOR);
#else
extern void *__KAI_KMPC_CONVENTION omp_alloc(size_t size, const omp_allocator_t *allocator);
extern void __KAI_KMPC_CONVENTION omp_free(void *ptr, const omp_allocator_t *allocator);
#endif
/* OpenMP 5.0 Affinity Format */
extern void __KAI_KMPC_CONVENTION omp_set_affinity_format(char const *);
extern size_t __KAI_KMPC_CONVENTION omp_get_affinity_format(char *, size_t);
extern void __KAI_KMPC_CONVENTION omp_display_affinity(char const *);
extern size_t __KAI_KMPC_CONVENTION omp_capture_affinity(char *, size_t, char const *);
# undef __KAI_KMPC_CONVENTION
# undef __KMP_IMP
/* Warning:
The following typedefs are not standard, are deprecated, and will be removed in a future release.
*/
typedef int omp_int_t;
typedef double omp_wtime_t;
# ifdef __cplusplus
}
# endif
#endif /* __OMP_H */


@@ -0,0 +1,940 @@
! include/50/omp_lib.f.var
!
!//===----------------------------------------------------------------------===//
!//
!// The LLVM Compiler Infrastructure
!//
!// This file is dual licensed under the MIT and the University of Illinois Open
!// Source Licenses. See LICENSE.txt for details.
!//
!//===----------------------------------------------------------------------===//
!
!***
!*** Some of the directives for the following routine extend past column 72,
!*** so process this file in 132-column mode.
!***
!dec$ fixedformlinesize:132
module omp_lib_kinds
integer, parameter :: omp_integer_kind = 4
integer, parameter :: omp_logical_kind = 4
integer, parameter :: omp_real_kind = 4
integer, parameter :: omp_lock_kind = int_ptr_kind()
integer, parameter :: omp_nest_lock_kind = int_ptr_kind()
integer, parameter :: omp_sched_kind = omp_integer_kind
integer, parameter :: omp_proc_bind_kind = omp_integer_kind
integer, parameter :: kmp_pointer_kind = int_ptr_kind()
integer, parameter :: kmp_size_t_kind = int_ptr_kind()
integer, parameter :: kmp_affinity_mask_kind = int_ptr_kind()
integer, parameter :: kmp_cancel_kind = omp_integer_kind
integer, parameter :: omp_lock_hint_kind = omp_integer_kind
integer, parameter :: omp_control_tool_kind = omp_integer_kind
integer, parameter :: omp_control_tool_result_kind = omp_integer_kind
integer, parameter :: omp_allocator_kind = int_ptr_kind()
end module omp_lib_kinds
module omp_lib
use omp_lib_kinds
integer (kind=omp_integer_kind), parameter :: kmp_version_major = @LIBOMP_VERSION_MAJOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_minor = @LIBOMP_VERSION_MINOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_build = @LIBOMP_VERSION_BUILD@
character(*), parameter :: kmp_build_date = '@LIBOMP_BUILD_DATE@'
integer (kind=omp_integer_kind), parameter :: openmp_version = @LIBOMP_OMP_YEAR_MONTH@
integer(kind=omp_sched_kind), parameter :: omp_sched_static = 1
integer(kind=omp_sched_kind), parameter :: omp_sched_dynamic = 2
integer(kind=omp_sched_kind), parameter :: omp_sched_guided = 3
integer(kind=omp_sched_kind), parameter :: omp_sched_auto = 4
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_false = 0
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_true = 1
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_master = 2
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_close = 3
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_spread = 4
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_parallel = 1
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_loop = 2
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_sections = 3
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_taskgroup = 4
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_none = 0
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_uncontended = 1
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_contended = 2
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_nonspeculative = 4
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_speculative = 8
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_hle = 65536
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_rtm = 131072
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_adaptive = 262144
integer (kind=omp_allocator_kind), parameter :: omp_null_allocator = 0
integer (kind=omp_allocator_kind), parameter :: omp_default_mem_alloc = 1
integer (kind=omp_allocator_kind), parameter :: omp_large_cap_mem_alloc = 2
integer (kind=omp_allocator_kind), parameter :: omp_const_mem_alloc = 3
integer (kind=omp_allocator_kind), parameter :: omp_high_bw_mem_alloc = 4
integer (kind=omp_allocator_kind), parameter :: omp_low_lat_mem_alloc = 5
integer (kind=omp_allocator_kind), parameter :: omp_cgroup_mem_alloc = 6
integer (kind=omp_allocator_kind), parameter :: omp_pteam_mem_alloc = 7
integer (kind=omp_allocator_kind), parameter :: omp_thread_mem_alloc = 8
interface
! ***
! *** omp_* entry points
! ***
subroutine omp_set_num_threads(num_threads)
use omp_lib_kinds
integer (kind=omp_integer_kind) num_threads
end subroutine omp_set_num_threads
subroutine omp_set_dynamic(dynamic_threads)
use omp_lib_kinds
logical (kind=omp_logical_kind) dynamic_threads
end subroutine omp_set_dynamic
subroutine omp_set_nested(nested)
use omp_lib_kinds
logical (kind=omp_logical_kind) nested
end subroutine omp_set_nested
function omp_get_num_threads()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_threads
end function omp_get_num_threads
function omp_get_max_threads()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_threads
end function omp_get_max_threads
function omp_get_thread_num()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_num
end function omp_get_thread_num
function omp_get_num_procs()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_procs
end function omp_get_num_procs
function omp_in_parallel()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_parallel
end function omp_in_parallel
function omp_in_final()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_final
end function omp_in_final
function omp_get_dynamic()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_dynamic
end function omp_get_dynamic
function omp_get_nested()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_nested
end function omp_get_nested
function omp_get_thread_limit()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_limit
end function omp_get_thread_limit
subroutine omp_set_max_active_levels(max_levels)
use omp_lib_kinds
integer (kind=omp_integer_kind) max_levels
end subroutine omp_set_max_active_levels
function omp_get_max_active_levels()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_active_levels
end function omp_get_max_active_levels
function omp_get_level()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_level
end function omp_get_level
function omp_get_active_level()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_active_level
end function omp_get_active_level
function omp_get_ancestor_thread_num(level)
use omp_lib_kinds
integer (kind=omp_integer_kind) level
integer (kind=omp_integer_kind) omp_get_ancestor_thread_num
end function omp_get_ancestor_thread_num
function omp_get_team_size(level)
use omp_lib_kinds
integer (kind=omp_integer_kind) level
integer (kind=omp_integer_kind) omp_get_team_size
end function omp_get_team_size
subroutine omp_set_schedule(kind, chunk_size)
use omp_lib_kinds
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) chunk_size
end subroutine omp_set_schedule
subroutine omp_get_schedule(kind, chunk_size)
use omp_lib_kinds
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) chunk_size
end subroutine omp_get_schedule
function omp_get_proc_bind()
use omp_lib_kinds
integer (kind=omp_proc_bind_kind) omp_get_proc_bind
end function omp_get_proc_bind
function omp_get_num_places()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_places
end function omp_get_num_places
function omp_get_place_num_procs(place_num)
use omp_lib_kinds
integer (kind=omp_integer_kind) place_num
integer (kind=omp_integer_kind) omp_get_place_num_procs
end function omp_get_place_num_procs
subroutine omp_get_place_proc_ids(place_num, ids)
use omp_lib_kinds
integer (kind=omp_integer_kind) place_num
integer (kind=omp_integer_kind) ids(*)
end subroutine omp_get_place_proc_ids
function omp_get_place_num()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_place_num
end function omp_get_place_num
function omp_get_partition_num_places()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_partition_num_places
end function omp_get_partition_num_places
subroutine omp_get_partition_place_nums(place_nums)
use omp_lib_kinds
integer (kind=omp_integer_kind) place_nums(*)
end subroutine omp_get_partition_place_nums
function omp_get_wtime()
double precision omp_get_wtime
end function omp_get_wtime
        function omp_get_wtick()
double precision omp_get_wtick
end function omp_get_wtick
function omp_get_default_device()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_default_device
end function omp_get_default_device
subroutine omp_set_default_device(device_num)
use omp_lib_kinds
integer (kind=omp_integer_kind) device_num
end subroutine omp_set_default_device
function omp_get_num_devices()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_devices
end function omp_get_num_devices
function omp_get_num_teams()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_teams
end function omp_get_num_teams
function omp_get_team_num()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_team_num
end function omp_get_team_num
function omp_get_cancellation()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_cancellation
end function omp_get_cancellation
function omp_is_initial_device()
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_is_initial_device
end function omp_is_initial_device
function omp_get_initial_device()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_initial_device
end function omp_get_initial_device
function omp_get_device_num()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_device_num
end function omp_get_device_num
subroutine omp_init_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_init_lock
subroutine omp_destroy_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_destroy_lock
subroutine omp_set_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_set_lock
subroutine omp_unset_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_unset_lock
function omp_test_lock(svar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_lock
!DIR$ ENDIF
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_test_lock
integer (kind=omp_lock_kind) svar
end function omp_test_lock
subroutine omp_init_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_init_nest_lock
subroutine omp_destroy_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_destroy_nest_lock
subroutine omp_set_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_set_nest_lock
subroutine omp_unset_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_unset_nest_lock
function omp_test_nest_lock(nvar)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_test_nest_lock
integer (kind=omp_nest_lock_kind) nvar
end function omp_test_nest_lock
function omp_get_max_task_priority()
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_task_priority
end function omp_get_max_task_priority
subroutine omp_set_default_allocator(svar)
use omp_lib_kinds
integer (kind=omp_allocator_kind) svar
end subroutine omp_set_default_allocator
function omp_get_default_allocator()
use omp_lib_kinds
integer (kind=omp_allocator_kind) omp_get_default_allocator
end function omp_get_default_allocator
subroutine omp_set_affinity_format(format)
character (len=*) format
end subroutine omp_set_affinity_format
function omp_get_affinity_format(buffer)
use omp_lib_kinds
character (len=*) buffer
integer (kind=kmp_size_t_kind) omp_get_affinity_format
end function omp_get_affinity_format
subroutine omp_display_affinity(format)
character (len=*) format
end subroutine omp_display_affinity
function omp_capture_affinity(buffer, format)
use omp_lib_kinds
character (len=*) format
character (len=*) buffer
integer (kind=kmp_size_t_kind) omp_capture_affinity
end function omp_capture_affinity
! ***
! *** kmp_* entry points
! ***
subroutine kmp_set_stacksize(size)
use omp_lib_kinds
integer (kind=omp_integer_kind) size
end subroutine kmp_set_stacksize
subroutine kmp_set_stacksize_s(size)
use omp_lib_kinds
integer (kind=kmp_size_t_kind) size
end subroutine kmp_set_stacksize_s
subroutine kmp_set_blocktime(msec)
use omp_lib_kinds
integer (kind=omp_integer_kind) msec
end subroutine kmp_set_blocktime
subroutine kmp_set_library_serial()
end subroutine kmp_set_library_serial
subroutine kmp_set_library_turnaround()
end subroutine kmp_set_library_turnaround
subroutine kmp_set_library_throughput()
end subroutine kmp_set_library_throughput
subroutine kmp_set_library(libnum)
use omp_lib_kinds
integer (kind=omp_integer_kind) libnum
end subroutine kmp_set_library
subroutine kmp_set_defaults(string)
character*(*) string
end subroutine kmp_set_defaults
function kmp_get_stacksize()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_stacksize
end function kmp_get_stacksize
function kmp_get_stacksize_s()
use omp_lib_kinds
integer (kind=kmp_size_t_kind) kmp_get_stacksize_s
end function kmp_get_stacksize_s
function kmp_get_blocktime()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_blocktime
end function kmp_get_blocktime
function kmp_get_library()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_library
end function kmp_get_library
subroutine kmp_set_disp_num_buffers(num)
use omp_lib_kinds
integer (kind=omp_integer_kind) num
end subroutine kmp_set_disp_num_buffers
function kmp_set_affinity(mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity
function kmp_get_affinity(mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity
function kmp_get_affinity_max_proc()
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_max_proc
end function kmp_get_affinity_max_proc
subroutine kmp_create_affinity_mask(mask)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_create_affinity_mask
subroutine kmp_destroy_affinity_mask(mask)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_destroy_affinity_mask
function kmp_set_affinity_mask_proc(proc, mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity_mask_proc
function kmp_unset_affinity_mask_proc(proc, mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_unset_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_unset_affinity_mask_proc
function kmp_get_affinity_mask_proc(proc, mask)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_mask_proc
integer (kind=omp_integer_kind) proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity_mask_proc
function kmp_malloc(size)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_malloc
integer (kind=kmp_size_t_kind) size
end function kmp_malloc
function kmp_aligned_malloc(size, alignment)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_aligned_malloc
integer (kind=kmp_size_t_kind) size
integer (kind=kmp_size_t_kind) alignment
end function kmp_aligned_malloc
function kmp_calloc(nelem, elsize)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_calloc
integer (kind=kmp_size_t_kind) nelem
integer (kind=kmp_size_t_kind) elsize
end function kmp_calloc
function kmp_realloc(ptr, size)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_realloc
integer (kind=kmp_pointer_kind) ptr
integer (kind=kmp_size_t_kind) size
end function kmp_realloc
subroutine kmp_free(ptr)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) ptr
end subroutine kmp_free
subroutine kmp_set_warnings_on()
end subroutine kmp_set_warnings_on
subroutine kmp_set_warnings_off()
end subroutine kmp_set_warnings_off
function kmp_get_cancellation_status(cancelkind)
use omp_lib_kinds
integer (kind=kmp_cancel_kind) cancelkind
logical (kind=omp_logical_kind) kmp_get_cancellation_status
end function kmp_get_cancellation_status
subroutine omp_init_lock_with_hint(svar, hint)
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
integer (kind=omp_lock_hint_kind) hint
end subroutine omp_init_lock_with_hint
subroutine omp_init_nest_lock_with_hint(nvar, hint)
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
integer (kind=omp_lock_hint_kind) hint
end subroutine omp_init_nest_lock_with_hint
function omp_control_tool(command, modifier)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_control_tool
integer (kind=omp_control_tool_kind) command
integer (kind=omp_control_tool_kind) modifier
end function omp_control_tool
end interface
!dec$ if defined(_WIN32)
!dec$ if defined(_WIN64) .or. defined(_M_AMD64)
!***
!*** The Fortran entry points must be in uppercase, even if the /Qlowercase
!*** option is specified. The alias attribute ensures that the specified
!*** string is used as the entry point.
!***
!*** On the Windows* OS IA-32 architecture, the Fortran entry points have an
!*** underscore prepended. On the Windows* OS Intel(R) 64
!*** architecture, no underscore is prepended.
!***
!dec$ attributes alias:'OMP_SET_NUM_THREADS' :: omp_set_num_threads
!dec$ attributes alias:'OMP_SET_DYNAMIC' :: omp_set_dynamic
!dec$ attributes alias:'OMP_SET_NESTED' :: omp_set_nested
!dec$ attributes alias:'OMP_GET_NUM_THREADS' :: omp_get_num_threads
!dec$ attributes alias:'OMP_GET_MAX_THREADS' :: omp_get_max_threads
!dec$ attributes alias:'OMP_GET_THREAD_NUM' :: omp_get_thread_num
!dec$ attributes alias:'OMP_GET_NUM_PROCS' :: omp_get_num_procs
!dec$ attributes alias:'OMP_IN_PARALLEL' :: omp_in_parallel
!dec$ attributes alias:'OMP_GET_DYNAMIC' :: omp_get_dynamic
!dec$ attributes alias:'OMP_GET_NESTED' :: omp_get_nested
!dec$ attributes alias:'OMP_GET_THREAD_LIMIT' :: omp_get_thread_limit
!dec$ attributes alias:'OMP_SET_MAX_ACTIVE_LEVELS' :: omp_set_max_active_levels
!dec$ attributes alias:'OMP_GET_MAX_ACTIVE_LEVELS' :: omp_get_max_active_levels
!dec$ attributes alias:'OMP_GET_LEVEL' :: omp_get_level
!dec$ attributes alias:'OMP_GET_ACTIVE_LEVEL' :: omp_get_active_level
!dec$ attributes alias:'OMP_GET_ANCESTOR_THREAD_NUM' :: omp_get_ancestor_thread_num
!dec$ attributes alias:'OMP_GET_TEAM_SIZE' :: omp_get_team_size
!dec$ attributes alias:'OMP_SET_SCHEDULE' :: omp_set_schedule
!dec$ attributes alias:'OMP_GET_SCHEDULE' :: omp_get_schedule
!dec$ attributes alias:'OMP_GET_PROC_BIND' :: omp_get_proc_bind
!dec$ attributes alias:'OMP_GET_WTIME' :: omp_get_wtime
!dec$ attributes alias:'OMP_GET_WTICK' :: omp_get_wtick
!dec$ attributes alias:'OMP_GET_DEFAULT_DEVICE' :: omp_get_default_device
!dec$ attributes alias:'OMP_SET_DEFAULT_DEVICE' :: omp_set_default_device
!dec$ attributes alias:'OMP_GET_NUM_DEVICES' :: omp_get_num_devices
!dec$ attributes alias:'OMP_GET_NUM_TEAMS' :: omp_get_num_teams
!dec$ attributes alias:'OMP_GET_TEAM_NUM' :: omp_get_team_num
!dec$ attributes alias:'OMP_GET_CANCELLATION' :: omp_get_cancellation
!dec$ attributes alias:'OMP_IS_INITIAL_DEVICE' :: omp_is_initial_device
!dec$ attributes alias:'OMP_GET_INITIAL_DEVICE' :: omp_get_initial_device
!dec$ attributes alias:'OMP_GET_MAX_TASK_PRIORITY' :: omp_get_max_task_priority
!dec$ attributes alias:'OMP_GET_DEVICE_NUM' :: omp_get_device_num
!dec$ attributes alias:'OMP_CONTROL_TOOL' :: omp_control_tool
!dec$ attributes alias:'OMP_SET_AFFINITY_FORMAT' :: omp_set_affinity_format
!dec$ attributes alias:'OMP_GET_AFFINITY_FORMAT' :: omp_get_affinity_format
!dec$ attributes alias:'OMP_DISPLAY_AFFINITY' :: omp_display_affinity
!dec$ attributes alias:'OMP_CAPTURE_AFFINITY' :: omp_capture_affinity
!dec$ attributes alias:'omp_init_lock' :: omp_init_lock
!dec$ attributes alias:'omp_init_lock_with_hint' :: omp_init_lock_with_hint
!dec$ attributes alias:'omp_destroy_lock' :: omp_destroy_lock
!dec$ attributes alias:'omp_set_lock' :: omp_set_lock
!dec$ attributes alias:'omp_unset_lock' :: omp_unset_lock
!dec$ attributes alias:'omp_test_lock' :: omp_test_lock
!dec$ attributes alias:'omp_init_nest_lock' :: omp_init_nest_lock
!dec$ attributes alias:'omp_init_nest_lock_with_hint' :: omp_init_nest_lock_with_hint
!dec$ attributes alias:'omp_destroy_nest_lock' :: omp_destroy_nest_lock
!dec$ attributes alias:'omp_set_nest_lock' :: omp_set_nest_lock
!dec$ attributes alias:'omp_unset_nest_lock' :: omp_unset_nest_lock
!dec$ attributes alias:'omp_test_nest_lock' :: omp_test_nest_lock
!dec$ attributes alias:'KMP_SET_STACKSIZE'::kmp_set_stacksize
!dec$ attributes alias:'KMP_SET_STACKSIZE_S'::kmp_set_stacksize_s
!dec$ attributes alias:'KMP_SET_BLOCKTIME'::kmp_set_blocktime
!dec$ attributes alias:'KMP_SET_LIBRARY_SERIAL'::kmp_set_library_serial
!dec$ attributes alias:'KMP_SET_LIBRARY_TURNAROUND'::kmp_set_library_turnaround
!dec$ attributes alias:'KMP_SET_LIBRARY_THROUGHPUT'::kmp_set_library_throughput
!dec$ attributes alias:'KMP_SET_LIBRARY'::kmp_set_library
!dec$ attributes alias:'KMP_GET_STACKSIZE'::kmp_get_stacksize
!dec$ attributes alias:'KMP_GET_STACKSIZE_S'::kmp_get_stacksize_s
!dec$ attributes alias:'KMP_GET_BLOCKTIME'::kmp_get_blocktime
!dec$ attributes alias:'KMP_GET_LIBRARY'::kmp_get_library
!dec$ attributes alias:'KMP_SET_AFFINITY'::kmp_set_affinity
!dec$ attributes alias:'KMP_GET_AFFINITY'::kmp_get_affinity
!dec$ attributes alias:'KMP_GET_AFFINITY_MAX_PROC'::kmp_get_affinity_max_proc
!dec$ attributes alias:'KMP_CREATE_AFFINITY_MASK'::kmp_create_affinity_mask
!dec$ attributes alias:'KMP_DESTROY_AFFINITY_MASK'::kmp_destroy_affinity_mask
!dec$ attributes alias:'KMP_SET_AFFINITY_MASK_PROC'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'KMP_UNSET_AFFINITY_MASK_PROC'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'KMP_GET_AFFINITY_MASK_PROC'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'KMP_MALLOC'::kmp_malloc
!dec$ attributes alias:'KMP_ALIGNED_MALLOC'::kmp_aligned_malloc
!dec$ attributes alias:'KMP_CALLOC'::kmp_calloc
!dec$ attributes alias:'KMP_REALLOC'::kmp_realloc
!dec$ attributes alias:'KMP_FREE'::kmp_free
!dec$ attributes alias:'KMP_SET_WARNINGS_ON'::kmp_set_warnings_on
!dec$ attributes alias:'KMP_SET_WARNINGS_OFF'::kmp_set_warnings_off
!dec$ attributes alias:'KMP_GET_CANCELLATION_STATUS' :: kmp_get_cancellation_status
!dec$ else
!***
!*** On Windows* OS IA-32 architecture, the Fortran entry points have an underscore prepended.
!***
!dec$ attributes alias:'_OMP_SET_NUM_THREADS' :: omp_set_num_threads
!dec$ attributes alias:'_OMP_SET_DYNAMIC' :: omp_set_dynamic
!dec$ attributes alias:'_OMP_SET_NESTED' :: omp_set_nested
!dec$ attributes alias:'_OMP_GET_NUM_THREADS' :: omp_get_num_threads
!dec$ attributes alias:'_OMP_GET_MAX_THREADS' :: omp_get_max_threads
!dec$ attributes alias:'_OMP_GET_THREAD_NUM' :: omp_get_thread_num
!dec$ attributes alias:'_OMP_GET_NUM_PROCS' :: omp_get_num_procs
!dec$ attributes alias:'_OMP_IN_PARALLEL' :: omp_in_parallel
!dec$ attributes alias:'_OMP_GET_DYNAMIC' :: omp_get_dynamic
!dec$ attributes alias:'_OMP_GET_NESTED' :: omp_get_nested
!dec$ attributes alias:'_OMP_GET_THREAD_LIMIT' :: omp_get_thread_limit
!dec$ attributes alias:'_OMP_SET_MAX_ACTIVE_LEVELS' :: omp_set_max_active_levels
!dec$ attributes alias:'_OMP_GET_MAX_ACTIVE_LEVELS' :: omp_get_max_active_levels
!dec$ attributes alias:'_OMP_GET_LEVEL' :: omp_get_level
!dec$ attributes alias:'_OMP_GET_ACTIVE_LEVEL' :: omp_get_active_level
!dec$ attributes alias:'_OMP_GET_ANCESTOR_THREAD_NUM' :: omp_get_ancestor_thread_num
!dec$ attributes alias:'_OMP_GET_TEAM_SIZE' :: omp_get_team_size
!dec$ attributes alias:'_OMP_SET_SCHEDULE' :: omp_set_schedule
!dec$ attributes alias:'_OMP_GET_SCHEDULE' :: omp_get_schedule
!dec$ attributes alias:'_OMP_GET_PROC_BIND' :: omp_get_proc_bind
!dec$ attributes alias:'_OMP_GET_WTIME' :: omp_get_wtime
!dec$ attributes alias:'_OMP_GET_WTICK' :: omp_get_wtick
!dec$ attributes alias:'_OMP_GET_DEFAULT_DEVICE' :: omp_get_default_device
!dec$ attributes alias:'_OMP_SET_DEFAULT_DEVICE' :: omp_set_default_device
!dec$ attributes alias:'_OMP_GET_NUM_DEVICES' :: omp_get_num_devices
!dec$ attributes alias:'_OMP_GET_NUM_TEAMS' :: omp_get_num_teams
!dec$ attributes alias:'_OMP_GET_TEAM_NUM' :: omp_get_team_num
!dec$ attributes alias:'_OMP_GET_CANCELLATION' :: omp_get_cancellation
!dec$ attributes alias:'_OMP_IS_INITIAL_DEVICE' :: omp_is_initial_device
!dec$ attributes alias:'_OMP_GET_INITIAL_DEVICE' :: omp_get_initial_device
!dec$ attributes alias:'_OMP_GET_MAX_TASK_PRIORITY' :: omp_get_max_task_priority
!dec$ attributes alias:'_OMP_GET_DEVICE_NUM' :: omp_get_device_num
!dec$ attributes alias:'_OMP_CONTROL_TOOL' :: omp_control_tool
!dec$ attributes alias:'_OMP_SET_AFFINITY_FORMAT' :: omp_set_affinity_format
!dec$ attributes alias:'_OMP_GET_AFFINITY_FORMAT' :: omp_get_affinity_format
!dec$ attributes alias:'_OMP_DISPLAY_AFFINITY' :: omp_display_affinity
!dec$ attributes alias:'_OMP_CAPTURE_AFFINITY' :: omp_capture_affinity
!dec$ attributes alias:'_omp_init_lock' :: omp_init_lock
!dec$ attributes alias:'_omp_init_lock_with_hint' :: omp_init_lock_with_hint
!dec$ attributes alias:'_omp_destroy_lock' :: omp_destroy_lock
!dec$ attributes alias:'_omp_set_lock' :: omp_set_lock
!dec$ attributes alias:'_omp_unset_lock' :: omp_unset_lock
!dec$ attributes alias:'_omp_test_lock' :: omp_test_lock
!dec$ attributes alias:'_omp_init_nest_lock' :: omp_init_nest_lock
!dec$ attributes alias:'_omp_init_nest_lock_with_hint' :: omp_init_nest_lock_with_hint
!dec$ attributes alias:'_omp_destroy_nest_lock' :: omp_destroy_nest_lock
!dec$ attributes alias:'_omp_set_nest_lock' :: omp_set_nest_lock
!dec$ attributes alias:'_omp_unset_nest_lock' :: omp_unset_nest_lock
!dec$ attributes alias:'_omp_test_nest_lock' :: omp_test_nest_lock
!dec$ attributes alias:'_KMP_SET_STACKSIZE'::kmp_set_stacksize
!dec$ attributes alias:'_KMP_SET_STACKSIZE_S'::kmp_set_stacksize_s
!dec$ attributes alias:'_KMP_SET_BLOCKTIME'::kmp_set_blocktime
!dec$ attributes alias:'_KMP_SET_LIBRARY_SERIAL'::kmp_set_library_serial
!dec$ attributes alias:'_KMP_SET_LIBRARY_TURNAROUND'::kmp_set_library_turnaround
!dec$ attributes alias:'_KMP_SET_LIBRARY_THROUGHPUT'::kmp_set_library_throughput
!dec$ attributes alias:'_KMP_SET_LIBRARY'::kmp_set_library
!dec$ attributes alias:'_KMP_GET_STACKSIZE'::kmp_get_stacksize
!dec$ attributes alias:'_KMP_GET_STACKSIZE_S'::kmp_get_stacksize_s
!dec$ attributes alias:'_KMP_GET_BLOCKTIME'::kmp_get_blocktime
!dec$ attributes alias:'_KMP_GET_LIBRARY'::kmp_get_library
!dec$ attributes alias:'_KMP_SET_AFFINITY'::kmp_set_affinity
!dec$ attributes alias:'_KMP_GET_AFFINITY'::kmp_get_affinity
!dec$ attributes alias:'_KMP_GET_AFFINITY_MAX_PROC'::kmp_get_affinity_max_proc
!dec$ attributes alias:'_KMP_CREATE_AFFINITY_MASK'::kmp_create_affinity_mask
!dec$ attributes alias:'_KMP_DESTROY_AFFINITY_MASK'::kmp_destroy_affinity_mask
!dec$ attributes alias:'_KMP_SET_AFFINITY_MASK_PROC'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'_KMP_UNSET_AFFINITY_MASK_PROC'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'_KMP_GET_AFFINITY_MASK_PROC'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'_KMP_MALLOC'::kmp_malloc
!dec$ attributes alias:'_KMP_ALIGNED_MALLOC'::kmp_aligned_malloc
!dec$ attributes alias:'_KMP_CALLOC'::kmp_calloc
!dec$ attributes alias:'_KMP_REALLOC'::kmp_realloc
!dec$ attributes alias:'_KMP_FREE'::kmp_free
!dec$ attributes alias:'_KMP_SET_WARNINGS_ON'::kmp_set_warnings_on
!dec$ attributes alias:'_KMP_SET_WARNINGS_OFF'::kmp_set_warnings_off
!dec$ attributes alias:'_KMP_GET_CANCELLATION_STATUS' :: kmp_get_cancellation_status
!dec$ endif
!dec$ endif
!dec$ if defined(__linux)
!***
!*** The Linux* OS entry points are in lowercase, with an underscore appended.
!***
!dec$ attributes alias:'omp_set_num_threads_'::omp_set_num_threads
!dec$ attributes alias:'omp_set_dynamic_'::omp_set_dynamic
!dec$ attributes alias:'omp_set_nested_'::omp_set_nested
!dec$ attributes alias:'omp_get_num_threads_'::omp_get_num_threads
!dec$ attributes alias:'omp_get_max_threads_'::omp_get_max_threads
!dec$ attributes alias:'omp_get_thread_num_'::omp_get_thread_num
!dec$ attributes alias:'omp_get_num_procs_'::omp_get_num_procs
!dec$ attributes alias:'omp_in_parallel_'::omp_in_parallel
!dec$ attributes alias:'omp_get_dynamic_'::omp_get_dynamic
!dec$ attributes alias:'omp_get_nested_'::omp_get_nested
!dec$ attributes alias:'omp_get_thread_limit_'::omp_get_thread_limit
!dec$ attributes alias:'omp_set_max_active_levels_'::omp_set_max_active_levels
!dec$ attributes alias:'omp_get_max_active_levels_'::omp_get_max_active_levels
!dec$ attributes alias:'omp_get_level_'::omp_get_level
!dec$ attributes alias:'omp_get_active_level_'::omp_get_active_level
!dec$ attributes alias:'omp_get_ancestor_thread_num_'::omp_get_ancestor_thread_num
!dec$ attributes alias:'omp_get_team_size_'::omp_get_team_size
!dec$ attributes alias:'omp_set_schedule_'::omp_set_schedule
!dec$ attributes alias:'omp_get_schedule_'::omp_get_schedule
!dec$ attributes alias:'omp_get_proc_bind_' :: omp_get_proc_bind
!dec$ attributes alias:'omp_get_wtime_'::omp_get_wtime
!dec$ attributes alias:'omp_get_wtick_'::omp_get_wtick
!dec$ attributes alias:'omp_get_default_device_'::omp_get_default_device
!dec$ attributes alias:'omp_set_default_device_'::omp_set_default_device
!dec$ attributes alias:'omp_get_num_devices_'::omp_get_num_devices
!dec$ attributes alias:'omp_get_num_teams_'::omp_get_num_teams
!dec$ attributes alias:'omp_get_team_num_'::omp_get_team_num
!dec$ attributes alias:'omp_get_cancellation_'::omp_get_cancellation
!dec$ attributes alias:'omp_is_initial_device_'::omp_is_initial_device
!dec$ attributes alias:'omp_get_initial_device_'::omp_get_initial_device
!dec$ attributes alias:'omp_get_max_task_priority_'::omp_get_max_task_priority
!dec$ attributes alias:'omp_get_device_num_'::omp_get_device_num
!dec$ attributes alias:'omp_set_affinity_format_' :: omp_set_affinity_format
!dec$ attributes alias:'omp_get_affinity_format_' :: omp_get_affinity_format
!dec$ attributes alias:'omp_display_affinity_' :: omp_display_affinity
!dec$ attributes alias:'omp_capture_affinity_' :: omp_capture_affinity
!dec$ attributes alias:'omp_init_lock_'::omp_init_lock
!dec$ attributes alias:'omp_init_lock_with_hint_'::omp_init_lock_with_hint
!dec$ attributes alias:'omp_destroy_lock_'::omp_destroy_lock
!dec$ attributes alias:'omp_set_lock_'::omp_set_lock
!dec$ attributes alias:'omp_unset_lock_'::omp_unset_lock
!dec$ attributes alias:'omp_test_lock_'::omp_test_lock
!dec$ attributes alias:'omp_init_nest_lock_'::omp_init_nest_lock
!dec$ attributes alias:'omp_init_nest_lock_with_hint_'::omp_init_nest_lock_with_hint
!dec$ attributes alias:'omp_destroy_nest_lock_'::omp_destroy_nest_lock
!dec$ attributes alias:'omp_set_nest_lock_'::omp_set_nest_lock
!dec$ attributes alias:'omp_unset_nest_lock_'::omp_unset_nest_lock
!dec$ attributes alias:'omp_test_nest_lock_'::omp_test_nest_lock
!dec$ attributes alias:'omp_control_tool_'::omp_control_tool
!dec$ attributes alias:'kmp_set_stacksize_'::kmp_set_stacksize
!dec$ attributes alias:'kmp_set_stacksize_s_'::kmp_set_stacksize_s
!dec$ attributes alias:'kmp_set_blocktime_'::kmp_set_blocktime
!dec$ attributes alias:'kmp_set_library_serial_'::kmp_set_library_serial
!dec$ attributes alias:'kmp_set_library_turnaround_'::kmp_set_library_turnaround
!dec$ attributes alias:'kmp_set_library_throughput_'::kmp_set_library_throughput
!dec$ attributes alias:'kmp_set_library_'::kmp_set_library
!dec$ attributes alias:'kmp_get_stacksize_'::kmp_get_stacksize
!dec$ attributes alias:'kmp_get_stacksize_s_'::kmp_get_stacksize_s
!dec$ attributes alias:'kmp_get_blocktime_'::kmp_get_blocktime
!dec$ attributes alias:'kmp_get_library_'::kmp_get_library
!dec$ attributes alias:'kmp_set_affinity_'::kmp_set_affinity
!dec$ attributes alias:'kmp_get_affinity_'::kmp_get_affinity
!dec$ attributes alias:'kmp_get_affinity_max_proc_'::kmp_get_affinity_max_proc
!dec$ attributes alias:'kmp_create_affinity_mask_'::kmp_create_affinity_mask
!dec$ attributes alias:'kmp_destroy_affinity_mask_'::kmp_destroy_affinity_mask
!dec$ attributes alias:'kmp_set_affinity_mask_proc_'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'kmp_unset_affinity_mask_proc_'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'kmp_get_affinity_mask_proc_'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'kmp_malloc_'::kmp_malloc
!dec$ attributes alias:'kmp_aligned_malloc_'::kmp_aligned_malloc
!dec$ attributes alias:'kmp_calloc_'::kmp_calloc
!dec$ attributes alias:'kmp_realloc_'::kmp_realloc
!dec$ attributes alias:'kmp_free_'::kmp_free
!dec$ attributes alias:'kmp_set_warnings_on_'::kmp_set_warnings_on
!dec$ attributes alias:'kmp_set_warnings_off_'::kmp_set_warnings_off
!dec$ attributes alias:'kmp_get_cancellation_status_'::kmp_get_cancellation_status
!dec$ endif
!dec$ if defined(__APPLE__)
!***
!*** The Mac entry points are in lowercase, with both an underscore
!*** appended and an underscore prepended.
!***
!dec$ attributes alias:'_omp_set_num_threads_'::omp_set_num_threads
!dec$ attributes alias:'_omp_set_dynamic_'::omp_set_dynamic
!dec$ attributes alias:'_omp_set_nested_'::omp_set_nested
!dec$ attributes alias:'_omp_get_num_threads_'::omp_get_num_threads
!dec$ attributes alias:'_omp_get_max_threads_'::omp_get_max_threads
!dec$ attributes alias:'_omp_get_thread_num_'::omp_get_thread_num
!dec$ attributes alias:'_omp_get_num_procs_'::omp_get_num_procs
!dec$ attributes alias:'_omp_in_parallel_'::omp_in_parallel
!dec$ attributes alias:'_omp_get_dynamic_'::omp_get_dynamic
!dec$ attributes alias:'_omp_get_nested_'::omp_get_nested
!dec$ attributes alias:'_omp_get_thread_limit_'::omp_get_thread_limit
!dec$ attributes alias:'_omp_set_max_active_levels_'::omp_set_max_active_levels
!dec$ attributes alias:'_omp_get_max_active_levels_'::omp_get_max_active_levels
!dec$ attributes alias:'_omp_get_level_'::omp_get_level
!dec$ attributes alias:'_omp_get_active_level_'::omp_get_active_level
!dec$ attributes alias:'_omp_get_ancestor_thread_num_'::omp_get_ancestor_thread_num
!dec$ attributes alias:'_omp_get_team_size_'::omp_get_team_size
!dec$ attributes alias:'_omp_set_schedule_'::omp_set_schedule
!dec$ attributes alias:'_omp_get_schedule_'::omp_get_schedule
!dec$ attributes alias:'_omp_get_proc_bind_' :: omp_get_proc_bind
!dec$ attributes alias:'_omp_get_wtime_'::omp_get_wtime
!dec$ attributes alias:'_omp_get_wtick_'::omp_get_wtick
!dec$ attributes alias:'_omp_get_default_device_'::omp_get_default_device
!dec$ attributes alias:'_omp_set_default_device_'::omp_set_default_device
!dec$ attributes alias:'_omp_get_num_devices_'::omp_get_num_devices
!dec$ attributes alias:'_omp_get_num_teams_'::omp_get_num_teams
!dec$ attributes alias:'_omp_get_team_num_'::omp_get_team_num
!dec$ attributes alias:'_omp_get_cancellation_'::omp_get_cancellation
!dec$ attributes alias:'_omp_is_initial_device_'::omp_is_initial_device
!dec$ attributes alias:'_omp_get_initial_device_'::omp_get_initial_device
!dec$ attributes alias:'_omp_get_max_task_priority_'::omp_get_max_task_priority
!dec$ attributes alias:'_omp_get_device_num_'::omp_get_device_num
!dec$ attributes alias:'_omp_init_lock_'::omp_init_lock
!dec$ attributes alias:'_omp_init_lock_with_hint_'::omp_init_lock_with_hint
!dec$ attributes alias:'_omp_destroy_lock_'::omp_destroy_lock
!dec$ attributes alias:'_omp_set_lock_'::omp_set_lock
!dec$ attributes alias:'_omp_unset_lock_'::omp_unset_lock
!dec$ attributes alias:'_omp_test_lock_'::omp_test_lock
!dec$ attributes alias:'_omp_init_nest_lock_'::omp_init_nest_lock
!dec$ attributes alias:'_omp_init_nest_lock_with_hint_'::omp_init_nest_lock_with_hint
!dec$ attributes alias:'_omp_destroy_nest_lock_'::omp_destroy_nest_lock
!dec$ attributes alias:'_omp_set_nest_lock_'::omp_set_nest_lock
!dec$ attributes alias:'_omp_unset_nest_lock_'::omp_unset_nest_lock
!dec$ attributes alias:'_omp_test_nest_lock_'::omp_test_nest_lock
!dec$ attributes alias:'_omp_control_tool_'::omp_control_tool
!dec$ attributes alias:'_omp_set_affinity_format_' :: omp_set_affinity_format
!dec$ attributes alias:'_omp_get_affinity_format_' :: omp_get_affinity_format
!dec$ attributes alias:'_omp_display_affinity_' :: omp_display_affinity
!dec$ attributes alias:'_omp_capture_affinity_' :: omp_capture_affinity
!dec$ attributes alias:'_kmp_set_stacksize_'::kmp_set_stacksize
!dec$ attributes alias:'_kmp_set_stacksize_s_'::kmp_set_stacksize_s
!dec$ attributes alias:'_kmp_set_blocktime_'::kmp_set_blocktime
!dec$ attributes alias:'_kmp_set_library_serial_'::kmp_set_library_serial
!dec$ attributes alias:'_kmp_set_library_turnaround_'::kmp_set_library_turnaround
!dec$ attributes alias:'_kmp_set_library_throughput_'::kmp_set_library_throughput
!dec$ attributes alias:'_kmp_set_library_'::kmp_set_library
!dec$ attributes alias:'_kmp_get_stacksize_'::kmp_get_stacksize
!dec$ attributes alias:'_kmp_get_stacksize_s_'::kmp_get_stacksize_s
!dec$ attributes alias:'_kmp_get_blocktime_'::kmp_get_blocktime
!dec$ attributes alias:'_kmp_get_library_'::kmp_get_library
!dec$ attributes alias:'_kmp_set_affinity_'::kmp_set_affinity
!dec$ attributes alias:'_kmp_get_affinity_'::kmp_get_affinity
!dec$ attributes alias:'_kmp_get_affinity_max_proc_'::kmp_get_affinity_max_proc
!dec$ attributes alias:'_kmp_create_affinity_mask_'::kmp_create_affinity_mask
!dec$ attributes alias:'_kmp_destroy_affinity_mask_'::kmp_destroy_affinity_mask
!dec$ attributes alias:'_kmp_set_affinity_mask_proc_'::kmp_set_affinity_mask_proc
!dec$ attributes alias:'_kmp_unset_affinity_mask_proc_'::kmp_unset_affinity_mask_proc
!dec$ attributes alias:'_kmp_get_affinity_mask_proc_'::kmp_get_affinity_mask_proc
!dec$ attributes alias:'_kmp_malloc_'::kmp_malloc
!dec$ attributes alias:'_kmp_aligned_malloc_'::kmp_aligned_malloc
!dec$ attributes alias:'_kmp_calloc_'::kmp_calloc
!dec$ attributes alias:'_kmp_realloc_'::kmp_realloc
!dec$ attributes alias:'_kmp_free_'::kmp_free
!dec$ attributes alias:'_kmp_set_warnings_on_'::kmp_set_warnings_on
!dec$ attributes alias:'_kmp_set_warnings_off_'::kmp_set_warnings_off
!dec$ attributes alias:'_kmp_get_cancellation_status_'::kmp_get_cancellation_status
!dec$ endif
end module omp_lib


@@ -0,0 +1,597 @@
! include/50/omp_lib.f90.var
!
!//===----------------------------------------------------------------------===//
!//
!// The LLVM Compiler Infrastructure
!//
!// This file is dual licensed under the MIT and the University of Illinois Open
!// Source Licenses. See LICENSE.txt for details.
!//
!//===----------------------------------------------------------------------===//
!
module omp_lib_kinds
use, intrinsic :: iso_c_binding
integer, parameter :: omp_integer_kind = c_int
integer, parameter :: omp_logical_kind = 4
integer, parameter :: omp_real_kind = c_float
integer, parameter :: kmp_double_kind = c_double
integer, parameter :: omp_lock_kind = c_intptr_t
integer, parameter :: omp_nest_lock_kind = c_intptr_t
integer, parameter :: omp_sched_kind = omp_integer_kind
integer, parameter :: omp_proc_bind_kind = omp_integer_kind
integer, parameter :: kmp_pointer_kind = c_intptr_t
integer, parameter :: kmp_size_t_kind = c_size_t
integer, parameter :: kmp_affinity_mask_kind = c_intptr_t
integer, parameter :: kmp_cancel_kind = omp_integer_kind
integer, parameter :: omp_sync_hint_kind = omp_integer_kind
integer, parameter :: omp_lock_hint_kind = omp_sync_hint_kind
integer, parameter :: omp_control_tool_kind = omp_integer_kind
integer, parameter :: omp_control_tool_result_kind = omp_integer_kind
integer, parameter :: omp_allocator_kind = c_intptr_t
end module omp_lib_kinds
module omp_lib
use omp_lib_kinds
integer (kind=omp_integer_kind), parameter :: openmp_version = @LIBOMP_OMP_YEAR_MONTH@
integer (kind=omp_integer_kind), parameter :: kmp_version_major = @LIBOMP_VERSION_MAJOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_minor = @LIBOMP_VERSION_MINOR@
integer (kind=omp_integer_kind), parameter :: kmp_version_build = @LIBOMP_VERSION_BUILD@
character(*) kmp_build_date
parameter( kmp_build_date = '@LIBOMP_BUILD_DATE@' )
integer(kind=omp_sched_kind), parameter :: omp_sched_static = 1
integer(kind=omp_sched_kind), parameter :: omp_sched_dynamic = 2
integer(kind=omp_sched_kind), parameter :: omp_sched_guided = 3
integer(kind=omp_sched_kind), parameter :: omp_sched_auto = 4
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_false = 0
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_true = 1
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_master = 2
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_close = 3
integer (kind=omp_proc_bind_kind), parameter :: omp_proc_bind_spread = 4
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_parallel = 1
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_loop = 2
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_sections = 3
integer (kind=kmp_cancel_kind), parameter :: kmp_cancel_taskgroup = 4
integer (kind=omp_sync_hint_kind), parameter :: omp_sync_hint_none = 0
integer (kind=omp_sync_hint_kind), parameter :: omp_sync_hint_uncontended = 1
integer (kind=omp_sync_hint_kind), parameter :: omp_sync_hint_contended = 2
integer (kind=omp_sync_hint_kind), parameter :: omp_sync_hint_nonspeculative = 4
integer (kind=omp_sync_hint_kind), parameter :: omp_sync_hint_speculative = 8
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_none = omp_sync_hint_none
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_uncontended = omp_sync_hint_uncontended
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_contended = omp_sync_hint_contended
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_nonspeculative = omp_sync_hint_nonspeculative
integer (kind=omp_lock_hint_kind), parameter :: omp_lock_hint_speculative = omp_sync_hint_speculative
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_hle = 65536
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_rtm = 131072
integer (kind=omp_lock_hint_kind), parameter :: kmp_lock_hint_adaptive = 262144
integer (kind=omp_control_tool_kind), parameter :: omp_control_tool_start = 1
integer (kind=omp_control_tool_kind), parameter :: omp_control_tool_pause = 2
integer (kind=omp_control_tool_kind), parameter :: omp_control_tool_flush = 3
integer (kind=omp_control_tool_kind), parameter :: omp_control_tool_end = 4
integer (kind=omp_control_tool_result_kind), parameter :: omp_control_tool_notool = -2
integer (kind=omp_control_tool_result_kind), parameter :: omp_control_tool_nocallback = -1
integer (kind=omp_control_tool_result_kind), parameter :: omp_control_tool_success = 0
integer (kind=omp_control_tool_result_kind), parameter :: omp_control_tool_ignored = 1
integer (kind=omp_allocator_kind), parameter :: omp_null_allocator = 0
integer (kind=omp_allocator_kind), parameter :: omp_default_mem_alloc = 1
integer (kind=omp_allocator_kind), parameter :: omp_large_cap_mem_alloc = 2
integer (kind=omp_allocator_kind), parameter :: omp_const_mem_alloc = 3
integer (kind=omp_allocator_kind), parameter :: omp_high_bw_mem_alloc = 4
integer (kind=omp_allocator_kind), parameter :: omp_low_lat_mem_alloc = 5
integer (kind=omp_allocator_kind), parameter :: omp_cgroup_mem_alloc = 6
integer (kind=omp_allocator_kind), parameter :: omp_pteam_mem_alloc = 7
integer (kind=omp_allocator_kind), parameter :: omp_thread_mem_alloc = 8
interface
! ***
! *** omp_* entry points
! ***
subroutine omp_set_num_threads(num_threads) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: num_threads
end subroutine omp_set_num_threads
subroutine omp_set_dynamic(dynamic_threads) bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind), value :: dynamic_threads
end subroutine omp_set_dynamic
subroutine omp_set_nested(nested) bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind), value :: nested
end subroutine omp_set_nested
function omp_get_num_threads() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_threads
end function omp_get_num_threads
function omp_get_max_threads() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_threads
end function omp_get_max_threads
function omp_get_thread_num() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_num
end function omp_get_thread_num
function omp_get_num_procs() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_procs
end function omp_get_num_procs
function omp_in_parallel() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_parallel
end function omp_in_parallel
function omp_in_final() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_in_final
end function omp_in_final
function omp_get_dynamic() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_dynamic
end function omp_get_dynamic
function omp_get_nested() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_get_nested
end function omp_get_nested
function omp_get_thread_limit() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_thread_limit
end function omp_get_thread_limit
subroutine omp_set_max_active_levels(max_levels) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: max_levels
end subroutine omp_set_max_active_levels
function omp_get_max_active_levels() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_active_levels
end function omp_get_max_active_levels
function omp_get_level() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_level
end function omp_get_level
function omp_get_active_level() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_active_level
end function omp_get_active_level
function omp_get_ancestor_thread_num(level) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_ancestor_thread_num
integer (kind=omp_integer_kind), value :: level
end function omp_get_ancestor_thread_num
function omp_get_team_size(level) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_team_size
integer (kind=omp_integer_kind), value :: level
end function omp_get_team_size
subroutine omp_set_schedule(kind, chunk_size) bind(c)
use omp_lib_kinds
integer (kind=omp_sched_kind), value :: kind
integer (kind=omp_integer_kind), value :: chunk_size
end subroutine omp_set_schedule
subroutine omp_get_schedule(kind, chunk_size) bind(c)
use omp_lib_kinds
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) chunk_size
end subroutine omp_get_schedule
function omp_get_proc_bind() bind(c)
use omp_lib_kinds
integer (kind=omp_proc_bind_kind) omp_get_proc_bind
end function omp_get_proc_bind
function omp_get_num_places() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_places
end function omp_get_num_places
function omp_get_place_num_procs(place_num) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: place_num
integer (kind=omp_integer_kind) omp_get_place_num_procs
end function omp_get_place_num_procs
subroutine omp_get_place_proc_ids(place_num, ids) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: place_num
integer (kind=omp_integer_kind) ids(*)
end subroutine omp_get_place_proc_ids
function omp_get_place_num() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_place_num
end function omp_get_place_num
function omp_get_partition_num_places() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_partition_num_places
end function omp_get_partition_num_places
subroutine omp_get_partition_place_nums(place_nums) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) place_nums(*)
end subroutine omp_get_partition_place_nums
function omp_get_wtime() bind(c)
use omp_lib_kinds
real (kind=kmp_double_kind) omp_get_wtime
end function omp_get_wtime
function omp_get_wtick() bind(c)
use omp_lib_kinds
real (kind=kmp_double_kind) omp_get_wtick
end function omp_get_wtick
function omp_get_default_device() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_default_device
end function omp_get_default_device
subroutine omp_set_default_device(device_num) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: device_num
end subroutine omp_set_default_device
function omp_get_num_devices() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_devices
end function omp_get_num_devices
function omp_get_num_teams() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_num_teams
end function omp_get_num_teams
function omp_get_team_num() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_team_num
end function omp_get_team_num
function omp_get_cancellation() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_cancellation
end function omp_get_cancellation
function omp_is_initial_device() bind(c)
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_is_initial_device
end function omp_is_initial_device
function omp_get_initial_device() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_initial_device
end function omp_get_initial_device
function omp_get_device_num() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_device_num
end function omp_get_device_num
subroutine omp_init_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_init_lock
subroutine omp_destroy_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_destroy_lock
subroutine omp_set_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_set_lock
subroutine omp_unset_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
end subroutine omp_unset_lock
function omp_test_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_lock
!DIR$ ENDIF
use omp_lib_kinds
logical (kind=omp_logical_kind) omp_test_lock
integer (kind=omp_lock_kind) svar
end function omp_test_lock
subroutine omp_init_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_init_nest_lock
subroutine omp_destroy_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_destroy_nest_lock
subroutine omp_set_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_set_nest_lock
subroutine omp_unset_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_unset_nest_lock
function omp_test_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_nest_lock
!DIR$ ENDIF
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_test_nest_lock
integer (kind=omp_nest_lock_kind) nvar
end function omp_test_nest_lock
function omp_get_max_task_priority() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_get_max_task_priority
end function omp_get_max_task_priority
subroutine omp_set_default_allocator(svar) bind(c)
use omp_lib_kinds
integer (kind=omp_allocator_kind), value :: svar
end subroutine omp_set_default_allocator
function omp_get_default_allocator() bind(c)
use omp_lib_kinds
integer (kind=omp_allocator_kind) omp_get_default_allocator
end function omp_get_default_allocator
subroutine omp_set_affinity_format(format)
character (len=*) :: format
end subroutine omp_set_affinity_format
function omp_get_affinity_format(buffer)
use omp_lib_kinds
character (len=*) :: buffer
integer (kind=kmp_size_t_kind) :: omp_get_affinity_format
end function omp_get_affinity_format
subroutine omp_display_affinity(format)
character (len=*) :: format
end subroutine omp_display_affinity
function omp_capture_affinity(buffer, format)
use omp_lib_kinds
character (len=*) :: format
character (len=*) :: buffer
integer (kind=kmp_size_t_kind) :: omp_capture_affinity
end function omp_capture_affinity
! ***
! *** kmp_* entry points
! ***
subroutine kmp_set_stacksize(size) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: size
end subroutine kmp_set_stacksize
subroutine kmp_set_stacksize_s(size) bind(c)
use omp_lib_kinds
integer (kind=kmp_size_t_kind), value :: size
end subroutine kmp_set_stacksize_s
subroutine kmp_set_blocktime(msec) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: msec
end subroutine kmp_set_blocktime
subroutine kmp_set_library_serial() bind(c)
end subroutine kmp_set_library_serial
subroutine kmp_set_library_turnaround() bind(c)
end subroutine kmp_set_library_turnaround
subroutine kmp_set_library_throughput() bind(c)
end subroutine kmp_set_library_throughput
subroutine kmp_set_library(libnum) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: libnum
end subroutine kmp_set_library
subroutine kmp_set_defaults(string) bind(c)
use, intrinsic :: iso_c_binding
character (kind=c_char) :: string(*)
end subroutine kmp_set_defaults
function kmp_get_stacksize() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_stacksize
end function kmp_get_stacksize
function kmp_get_stacksize_s() bind(c)
use omp_lib_kinds
integer (kind=kmp_size_t_kind) kmp_get_stacksize_s
end function kmp_get_stacksize_s
function kmp_get_blocktime() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_blocktime
end function kmp_get_blocktime
function kmp_get_library() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_library
end function kmp_get_library
subroutine kmp_set_disp_num_buffers(num) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind), value :: num
end subroutine kmp_set_disp_num_buffers
function kmp_set_affinity(mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity
function kmp_get_affinity(mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity
function kmp_get_affinity_max_proc() bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_max_proc
end function kmp_get_affinity_max_proc
subroutine kmp_create_affinity_mask(mask) bind(c)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_create_affinity_mask
subroutine kmp_destroy_affinity_mask(mask) bind(c)
use omp_lib_kinds
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_destroy_affinity_mask
function kmp_set_affinity_mask_proc(proc, mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_set_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity_mask_proc
function kmp_unset_affinity_mask_proc(proc, mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_unset_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_unset_affinity_mask_proc
function kmp_get_affinity_mask_proc(proc, mask) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) kmp_get_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity_mask_proc
function kmp_malloc(size) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_malloc
integer (kind=kmp_size_t_kind), value :: size
end function kmp_malloc
function kmp_aligned_malloc(size, alignment) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_aligned_malloc
integer (kind=kmp_size_t_kind), value :: size
integer (kind=kmp_size_t_kind), value :: alignment
end function kmp_aligned_malloc
function kmp_calloc(nelem, elsize) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_calloc
integer (kind=kmp_size_t_kind), value :: nelem
integer (kind=kmp_size_t_kind), value :: elsize
end function kmp_calloc
function kmp_realloc(ptr, size) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind) kmp_realloc
integer (kind=kmp_pointer_kind), value :: ptr
integer (kind=kmp_size_t_kind), value :: size
end function kmp_realloc
subroutine kmp_free(ptr) bind(c)
use omp_lib_kinds
integer (kind=kmp_pointer_kind), value :: ptr
end subroutine kmp_free
subroutine kmp_set_warnings_on() bind(c)
end subroutine kmp_set_warnings_on
subroutine kmp_set_warnings_off() bind(c)
end subroutine kmp_set_warnings_off
function kmp_get_cancellation_status(cancelkind) bind(c)
use omp_lib_kinds
integer (kind=kmp_cancel_kind), value :: cancelkind
logical (kind=omp_logical_kind) kmp_get_cancellation_status
end function kmp_get_cancellation_status
subroutine omp_init_lock_with_hint(svar, hint) bind(c)
use omp_lib_kinds
integer (kind=omp_lock_kind) svar
integer (kind=omp_lock_hint_kind), value :: hint
end subroutine omp_init_lock_with_hint
subroutine omp_init_nest_lock_with_hint(nvar, hint) bind(c)
use omp_lib_kinds
integer (kind=omp_nest_lock_kind) nvar
integer (kind=omp_lock_hint_kind), value :: hint
end subroutine omp_init_nest_lock_with_hint
function omp_control_tool(command, modifier, arg) bind(c)
use omp_lib_kinds
integer (kind=omp_integer_kind) omp_control_tool
integer (kind=omp_control_tool_kind), value :: command
integer (kind=omp_control_tool_kind), value :: modifier
integer (kind=kmp_pointer_kind), optional :: arg
end function omp_control_tool
end interface
end module omp_lib


@@ -0,0 +1,782 @@
! include/50/omp_lib.h.var
!
!//===----------------------------------------------------------------------===//
!//
!// The LLVM Compiler Infrastructure
!//
!// This file is dual licensed under the MIT and the University of Illinois Open
!// Source Licenses. See LICENSE.txt for details.
!//
!//===----------------------------------------------------------------------===//
!
integer omp_integer_kind
parameter(omp_integer_kind=4)
integer omp_logical_kind
parameter(omp_logical_kind=4)
integer omp_real_kind
parameter(omp_real_kind=4)
integer omp_lock_kind
parameter(omp_lock_kind=int_ptr_kind())
integer omp_nest_lock_kind
parameter(omp_nest_lock_kind=int_ptr_kind())
integer omp_sched_kind
parameter(omp_sched_kind=omp_integer_kind)
integer omp_proc_bind_kind
parameter(omp_proc_bind_kind=omp_integer_kind)
integer kmp_pointer_kind
parameter(kmp_pointer_kind=int_ptr_kind())
integer kmp_size_t_kind
parameter(kmp_size_t_kind=int_ptr_kind())
integer kmp_affinity_mask_kind
parameter(kmp_affinity_mask_kind=int_ptr_kind())
integer omp_sync_hint_kind
parameter(omp_sync_hint_kind=omp_integer_kind)
integer omp_lock_hint_kind
parameter(omp_lock_hint_kind=omp_sync_hint_kind)
integer omp_control_tool_kind
parameter(omp_control_tool_kind=omp_integer_kind)
integer omp_control_tool_result_kind
parameter(omp_control_tool_result_kind=omp_integer_kind)
integer omp_allocator_kind
parameter(omp_allocator_kind=int_ptr_kind())
integer(kind=omp_integer_kind)openmp_version
parameter(openmp_version=@LIBOMP_OMP_YEAR_MONTH@)
integer(kind=omp_integer_kind)kmp_version_major
parameter(kmp_version_major=@LIBOMP_VERSION_MAJOR@)
integer(kind=omp_integer_kind)kmp_version_minor
parameter(kmp_version_minor=@LIBOMP_VERSION_MINOR@)
integer(kind=omp_integer_kind)kmp_version_build
parameter(kmp_version_build=@LIBOMP_VERSION_BUILD@)
character(*)kmp_build_date
parameter(kmp_build_date='@LIBOMP_BUILD_DATE@')
integer(kind=omp_sched_kind)omp_sched_static
parameter(omp_sched_static=1)
integer(kind=omp_sched_kind)omp_sched_dynamic
parameter(omp_sched_dynamic=2)
integer(kind=omp_sched_kind)omp_sched_guided
parameter(omp_sched_guided=3)
integer(kind=omp_sched_kind)omp_sched_auto
parameter(omp_sched_auto=4)
integer(kind=omp_proc_bind_kind)omp_proc_bind_false
parameter(omp_proc_bind_false=0)
integer(kind=omp_proc_bind_kind)omp_proc_bind_true
parameter(omp_proc_bind_true=1)
integer(kind=omp_proc_bind_kind)omp_proc_bind_master
parameter(omp_proc_bind_master=2)
integer(kind=omp_proc_bind_kind)omp_proc_bind_close
parameter(omp_proc_bind_close=3)
integer(kind=omp_proc_bind_kind)omp_proc_bind_spread
parameter(omp_proc_bind_spread=4)
integer(kind=omp_sync_hint_kind)omp_sync_hint_none
parameter(omp_sync_hint_none=0)
integer(kind=omp_sync_hint_kind)omp_sync_hint_uncontended
parameter(omp_sync_hint_uncontended=1)
integer(kind=omp_sync_hint_kind)omp_sync_hint_contended
parameter(omp_sync_hint_contended=2)
integer(kind=omp_sync_hint_kind)omp_sync_hint_nonspeculative
parameter(omp_sync_hint_nonspeculative=4)
integer(kind=omp_sync_hint_kind)omp_sync_hint_speculative
parameter(omp_sync_hint_speculative=8)
integer(kind=omp_lock_hint_kind)omp_lock_hint_none
parameter(omp_lock_hint_none=omp_sync_hint_none)
integer(kind=omp_lock_hint_kind)omp_lock_hint_uncontended
parameter(omp_lock_hint_uncontended=omp_sync_hint_uncontended)
integer(kind=omp_lock_hint_kind)omp_lock_hint_contended
parameter(omp_lock_hint_contended=omp_sync_hint_contended)
integer(kind=omp_lock_hint_kind)omp_lock_hint_nonspeculative
parameter(omp_lock_hint_nonspeculative=omp_sync_hint_nonspeculative)
integer(kind=omp_lock_hint_kind)omp_lock_hint_speculative
parameter(omp_lock_hint_speculative=omp_sync_hint_speculative)
integer(kind=omp_lock_hint_kind)kmp_lock_hint_hle
parameter(kmp_lock_hint_hle=65536)
integer(kind=omp_lock_hint_kind)kmp_lock_hint_rtm
parameter(kmp_lock_hint_rtm=131072)
integer(kind=omp_lock_hint_kind)kmp_lock_hint_adaptive
parameter(kmp_lock_hint_adaptive=262144)
integer(kind=omp_control_tool_kind)omp_control_tool_start
parameter(omp_control_tool_start=1)
integer(kind=omp_control_tool_kind)omp_control_tool_pause
parameter(omp_control_tool_pause=2)
integer(kind=omp_control_tool_kind)omp_control_tool_flush
parameter(omp_control_tool_flush=3)
integer(kind=omp_control_tool_kind)omp_control_tool_end
parameter(omp_control_tool_end=4)
integer(kind=omp_control_tool_result_kind)omp_control_tool_notool
parameter(omp_control_tool_notool=-2)
integer(kind=omp_control_tool_result_kind)omp_control_tool_nocallback
parameter(omp_control_tool_nocallback=-1)
integer(kind=omp_control_tool_result_kind)omp_control_tool_success
parameter(omp_control_tool_success=0)
integer(kind=omp_control_tool_result_kind)omp_control_tool_ignored
parameter(omp_control_tool_ignored=1)
integer(kind=omp_allocator_kind)omp_null_allocator
parameter(omp_null_allocator=0)
integer(kind=omp_allocator_kind)omp_default_mem_alloc
parameter(omp_default_mem_alloc=1)
integer(kind=omp_allocator_kind)omp_large_cap_mem_alloc
parameter(omp_large_cap_mem_alloc=2)
integer(kind=omp_allocator_kind)omp_const_mem_alloc
parameter(omp_const_mem_alloc=3)
integer(kind=omp_allocator_kind)omp_high_bw_mem_alloc
parameter(omp_high_bw_mem_alloc=4)
integer(kind=omp_allocator_kind)omp_low_lat_mem_alloc
parameter(omp_low_lat_mem_alloc=5)
integer(kind=omp_allocator_kind)omp_cgroup_mem_alloc
parameter(omp_cgroup_mem_alloc=6)
integer(kind=omp_allocator_kind)omp_pteam_mem_alloc
parameter(omp_pteam_mem_alloc=7)
integer(kind=omp_allocator_kind)omp_thread_mem_alloc
parameter(omp_thread_mem_alloc=8)
interface
! ***
! *** omp_* entry points
! ***
subroutine omp_set_num_threads(num_threads) bind(c)
import
integer (kind=omp_integer_kind), value :: num_threads
end subroutine omp_set_num_threads
subroutine omp_set_dynamic(dynamic_threads) bind(c)
import
logical (kind=omp_logical_kind), value :: dynamic_threads
end subroutine omp_set_dynamic
subroutine omp_set_nested(nested) bind(c)
import
logical (kind=omp_logical_kind), value :: nested
end subroutine omp_set_nested
function omp_get_num_threads() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_threads
end function omp_get_num_threads
function omp_get_max_threads() bind(c)
import
integer (kind=omp_integer_kind) omp_get_max_threads
end function omp_get_max_threads
function omp_get_thread_num() bind(c)
import
integer (kind=omp_integer_kind) omp_get_thread_num
end function omp_get_thread_num
function omp_get_num_procs() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_procs
end function omp_get_num_procs
function omp_in_parallel() bind(c)
import
logical (kind=omp_logical_kind) omp_in_parallel
end function omp_in_parallel
function omp_in_final() bind(c)
import
logical (kind=omp_logical_kind) omp_in_final
end function omp_in_final
function omp_get_dynamic() bind(c)
import
logical (kind=omp_logical_kind) omp_get_dynamic
end function omp_get_dynamic
function omp_get_nested() bind(c)
import
logical (kind=omp_logical_kind) omp_get_nested
end function omp_get_nested
function omp_get_thread_limit() bind(c)
import
integer (kind=omp_integer_kind) omp_get_thread_limit
end function omp_get_thread_limit
subroutine omp_set_max_active_levels(max_levels) bind(c)
import
integer (kind=omp_integer_kind), value :: max_levels
end subroutine omp_set_max_active_levels
function omp_get_max_active_levels() bind(c)
import
integer (kind=omp_integer_kind) omp_get_max_active_levels
end function omp_get_max_active_levels
function omp_get_level() bind(c)
import
integer (kind=omp_integer_kind) omp_get_level
end function omp_get_level
function omp_get_active_level() bind(c)
import
integer (kind=omp_integer_kind) omp_get_active_level
end function omp_get_active_level
function omp_get_ancestor_thread_num(level) bind(c)
import
integer (kind=omp_integer_kind) omp_get_ancestor_thread_num
integer (kind=omp_integer_kind), value :: level
end function omp_get_ancestor_thread_num
function omp_get_team_size(level) bind(c)
import
integer (kind=omp_integer_kind) omp_get_team_size
integer (kind=omp_integer_kind), value :: level
end function omp_get_team_size
subroutine omp_set_schedule(kind, chunk_size) bind(c)
import
integer (kind=omp_sched_kind), value :: kind
integer (kind=omp_integer_kind), value :: chunk_size
end subroutine omp_set_schedule
subroutine omp_get_schedule(kind, chunk_size) bind(c)
import
integer (kind=omp_sched_kind) kind
integer (kind=omp_integer_kind) chunk_size
end subroutine omp_get_schedule
function omp_get_proc_bind() bind(c)
import
integer (kind=omp_proc_bind_kind) omp_get_proc_bind
end function omp_get_proc_bind
function omp_get_num_places() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_places
end function omp_get_num_places
function omp_get_place_num_procs(place_num) bind(c)
import
integer (kind=omp_integer_kind), value :: place_num
integer (kind=omp_integer_kind) omp_get_place_num_procs
end function omp_get_place_num_procs
subroutine omp_get_place_proc_ids(place_num, ids) bind(c)
import
integer (kind=omp_integer_kind), value :: place_num
integer (kind=omp_integer_kind) ids(*)
end subroutine omp_get_place_proc_ids
function omp_get_place_num() bind(c)
import
integer (kind=omp_integer_kind) omp_get_place_num
end function omp_get_place_num
function omp_get_partition_num_places() bind(c)
import
integer (kind=omp_integer_kind) omp_get_partition_num_places
end function omp_get_partition_num_places
subroutine omp_get_partition_place_nums(place_nums) bind(c)
import
integer (kind=omp_integer_kind) place_nums(*)
end subroutine omp_get_partition_place_nums
function omp_get_wtime() bind(c)
double precision omp_get_wtime
end function omp_get_wtime
function omp_get_wtick() bind(c)
double precision omp_get_wtick
end function omp_get_wtick
function omp_get_default_device() bind(c)
import
integer (kind=omp_integer_kind) omp_get_default_device
end function omp_get_default_device
subroutine omp_set_default_device(device_num) bind(c)
import
integer (kind=omp_integer_kind), value :: device_num
end subroutine omp_set_default_device
function omp_get_num_devices() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_devices
end function omp_get_num_devices
function omp_get_num_teams() bind(c)
import
integer (kind=omp_integer_kind) omp_get_num_teams
end function omp_get_num_teams
function omp_get_team_num() bind(c)
import
integer (kind=omp_integer_kind) omp_get_team_num
end function omp_get_team_num
function omp_is_initial_device() bind(c)
import
logical (kind=omp_logical_kind) omp_is_initial_device
end function omp_is_initial_device
function omp_get_initial_device() bind(c)
import
integer (kind=omp_integer_kind) omp_get_initial_device
end function omp_get_initial_device
function omp_get_device_num() bind(c)
import
integer (kind=omp_integer_kind) omp_get_device_num
end function omp_get_device_num
subroutine omp_init_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_lock
!DIR$ ENDIF
import
integer (kind=omp_lock_kind) svar
end subroutine omp_init_lock
subroutine omp_destroy_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_lock
!DIR$ ENDIF
import
integer (kind=omp_lock_kind) svar
end subroutine omp_destroy_lock
subroutine omp_set_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_lock
!DIR$ ENDIF
import
integer (kind=omp_lock_kind) svar
end subroutine omp_set_lock
subroutine omp_unset_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_lock
!DIR$ ENDIF
import
integer (kind=omp_lock_kind) svar
end subroutine omp_unset_lock
function omp_test_lock(svar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_lock
!DIR$ ENDIF
import
logical (kind=omp_logical_kind) omp_test_lock
integer (kind=omp_lock_kind) svar
end function omp_test_lock
subroutine omp_init_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_init_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_init_nest_lock
subroutine omp_destroy_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_destroy_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_destroy_nest_lock
subroutine omp_set_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_set_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_set_nest_lock
subroutine omp_unset_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_unset_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_nest_lock_kind) nvar
end subroutine omp_unset_nest_lock
function omp_test_nest_lock(nvar) bind(c)
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!DIR$ attributes known_intrinsic :: omp_test_nest_lock
!DIR$ ENDIF
import
integer (kind=omp_integer_kind) omp_test_nest_lock
integer (kind=omp_nest_lock_kind) nvar
end function omp_test_nest_lock
function omp_get_max_task_priority() bind(c)
import
integer (kind=omp_integer_kind) omp_get_max_task_priority
end function omp_get_max_task_priority
subroutine omp_set_default_allocator(svar) bind(c)
import
integer (kind=omp_allocator_kind), value :: svar
end subroutine omp_set_default_allocator
function omp_get_default_allocator() bind(c)
import
integer (kind=omp_allocator_kind) omp_get_default_allocator
end function omp_get_default_allocator
subroutine omp_set_affinity_format(format)
character (len=*) :: format
end subroutine omp_set_affinity_format
function omp_get_affinity_format(buffer)
import
character (len=*) :: buffer
integer (kind=kmp_size_t_kind) :: omp_get_affinity_format
end function omp_get_affinity_format
subroutine omp_display_affinity(format)
character (len=*) :: format
end subroutine omp_display_affinity
function omp_capture_affinity(buffer, format)
import
character (len=*) :: format
character (len=*) :: buffer
integer (kind=kmp_size_t_kind) :: omp_capture_affinity
end function omp_capture_affinity
! ***
! *** kmp_* entry points
! ***
subroutine kmp_set_stacksize(size) bind(c)
import
integer (kind=omp_integer_kind), value :: size
end subroutine kmp_set_stacksize
subroutine kmp_set_stacksize_s(size) bind(c)
import
integer (kind=kmp_size_t_kind), value :: size
end subroutine kmp_set_stacksize_s
subroutine kmp_set_blocktime(msec) bind(c)
import
integer (kind=omp_integer_kind), value :: msec
end subroutine kmp_set_blocktime
subroutine kmp_set_library_serial() bind(c)
end subroutine kmp_set_library_serial
subroutine kmp_set_library_turnaround() bind(c)
end subroutine kmp_set_library_turnaround
subroutine kmp_set_library_throughput() bind(c)
end subroutine kmp_set_library_throughput
subroutine kmp_set_library(libnum) bind(c)
import
integer (kind=omp_integer_kind), value :: libnum
end subroutine kmp_set_library
subroutine kmp_set_defaults(string) bind(c)
character string(*)
end subroutine kmp_set_defaults
function kmp_get_stacksize() bind(c)
import
integer (kind=omp_integer_kind) kmp_get_stacksize
end function kmp_get_stacksize
function kmp_get_stacksize_s() bind(c)
import
integer (kind=kmp_size_t_kind) kmp_get_stacksize_s
end function kmp_get_stacksize_s
function kmp_get_blocktime() bind(c)
import
integer (kind=omp_integer_kind) kmp_get_blocktime
end function kmp_get_blocktime
function kmp_get_library() bind(c)
import
integer (kind=omp_integer_kind) kmp_get_library
end function kmp_get_library
subroutine kmp_set_disp_num_buffers(num) bind(c)
import
integer (kind=omp_integer_kind), value :: num
end subroutine kmp_set_disp_num_buffers
function kmp_set_affinity(mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_set_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity
function kmp_get_affinity(mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_get_affinity
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity
function kmp_get_affinity_max_proc() bind(c)
import
integer (kind=omp_integer_kind) kmp_get_affinity_max_proc
end function kmp_get_affinity_max_proc
subroutine kmp_create_affinity_mask(mask) bind(c)
import
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_create_affinity_mask
subroutine kmp_destroy_affinity_mask(mask) bind(c)
import
integer (kind=kmp_affinity_mask_kind) mask
end subroutine kmp_destroy_affinity_mask
function kmp_set_affinity_mask_proc(proc, mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_set_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_set_affinity_mask_proc
function kmp_unset_affinity_mask_proc(proc, mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_unset_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_unset_affinity_mask_proc
function kmp_get_affinity_mask_proc(proc, mask) bind(c)
import
integer (kind=omp_integer_kind) kmp_get_affinity_mask_proc
integer (kind=omp_integer_kind), value :: proc
integer (kind=kmp_affinity_mask_kind) mask
end function kmp_get_affinity_mask_proc
function kmp_malloc(size) bind(c)
import
integer (kind=kmp_pointer_kind) kmp_malloc
integer (kind=kmp_size_t_kind), value :: size
end function kmp_malloc
function kmp_aligned_malloc(size, alignment) bind(c)
import
integer (kind=kmp_pointer_kind) kmp_aligned_malloc
integer (kind=kmp_size_t_kind), value :: size
integer (kind=kmp_size_t_kind), value :: alignment
end function kmp_aligned_malloc
function kmp_calloc(nelem, elsize) bind(c)
import
integer (kind=kmp_pointer_kind) kmp_calloc
integer (kind=kmp_size_t_kind), value :: nelem
integer (kind=kmp_size_t_kind), value :: elsize
end function kmp_calloc
function kmp_realloc(ptr, size) bind(c)
import
integer (kind=kmp_pointer_kind) kmp_realloc
integer (kind=kmp_pointer_kind), value :: ptr
integer (kind=kmp_size_t_kind), value :: size
end function kmp_realloc
subroutine kmp_free(ptr) bind(c)
import
integer (kind=kmp_pointer_kind), value :: ptr
end subroutine kmp_free
subroutine kmp_set_warnings_on() bind(c)
end subroutine kmp_set_warnings_on
subroutine kmp_set_warnings_off() bind(c)
end subroutine kmp_set_warnings_off
subroutine omp_init_lock_with_hint(svar, hint) bind(c)
import
integer (kind=omp_lock_kind) svar
integer (kind=omp_lock_hint_kind), value :: hint
end subroutine omp_init_lock_with_hint
subroutine omp_init_nest_lock_with_hint(nvar, hint) bind(c)
import
integer (kind=omp_nest_lock_kind) nvar
integer (kind=omp_lock_hint_kind), value :: hint
end subroutine omp_init_nest_lock_with_hint
function omp_control_tool(command, modifier, arg) bind(c)
import
integer (kind=omp_integer_kind) omp_control_tool
integer (kind=omp_control_tool_kind), value :: command
integer (kind=omp_control_tool_kind), value :: modifier
integer (kind=kmp_pointer_kind), optional :: arg
end function omp_control_tool
end interface
!DIR$ IF DEFINED (__INTEL_OFFLOAD)
!DIR$ IF(__INTEL_COMPILER.LT.1900)
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_num_threads
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_dynamic
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_nested
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_num_threads
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_max_threads
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_thread_num
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_num_procs
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_in_parallel
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_in_final
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_dynamic
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_nested
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_thread_limit
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_max_active_levels
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_max_active_levels
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_level
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_active_level
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_ancestor_thread_num
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_team_size
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_schedule
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_schedule
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_proc_bind
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_wtime
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_wtick
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_default_device
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_default_device
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_is_initial_device
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_initial_device
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_num_devices
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_device_num
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_num_teams
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_team_num
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_init_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_destroy_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_unset_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_test_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_init_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_destroy_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_unset_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_test_nest_lock
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_max_task_priority
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_set_affinity_format
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_get_affinity_format
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_display_affinity
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_capture_affinity
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_stacksize
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_stacksize_s
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_blocktime
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_library_serial
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_library_turnaround
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_library_throughput
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_library
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_defaults
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_stacksize
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_stacksize_s
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_blocktime
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_library
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_disp_num_buffers
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_affinity
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_affinity
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_affinity_max_proc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_create_affinity_mask
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_destroy_affinity_mask
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_affinity_mask_proc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_unset_affinity_mask_proc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_get_affinity_mask_proc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_malloc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_aligned_malloc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_calloc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_realloc
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_free
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_warnings_on
!DIR$ ATTRIBUTES OFFLOAD:MIC :: kmp_set_warnings_off
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_init_lock_with_hint
!DIR$ ATTRIBUTES OFFLOAD:MIC :: omp_init_nest_lock_with_hint
!DIR$ ENDIF
!DIR$ IF(__INTEL_COMPILER.GE.1400)
!$omp declare target(omp_set_num_threads )
!$omp declare target(omp_set_dynamic )
!$omp declare target(omp_set_nested )
!$omp declare target(omp_get_num_threads )
!$omp declare target(omp_get_max_threads )
!$omp declare target(omp_get_thread_num )
!$omp declare target(omp_get_num_procs )
!$omp declare target(omp_in_parallel )
!$omp declare target(omp_in_final )
!$omp declare target(omp_get_dynamic )
!$omp declare target(omp_get_nested )
!$omp declare target(omp_get_thread_limit )
!$omp declare target(omp_set_max_active_levels )
!$omp declare target(omp_get_max_active_levels )
!$omp declare target(omp_get_level )
!$omp declare target(omp_get_active_level )
!$omp declare target(omp_get_ancestor_thread_num )
!$omp declare target(omp_get_team_size )
!$omp declare target(omp_set_schedule )
!$omp declare target(omp_get_schedule )
!$omp declare target(omp_get_proc_bind )
!$omp declare target(omp_get_wtime )
!$omp declare target(omp_get_wtick )
!$omp declare target(omp_get_default_device )
!$omp declare target(omp_set_default_device )
!$omp declare target(omp_is_initial_device )
!$omp declare target(omp_get_initial_device )
!$omp declare target(omp_get_num_devices )
!$omp declare target(omp_get_device_num )
!$omp declare target(omp_get_num_teams )
!$omp declare target(omp_get_team_num )
!$omp declare target(omp_init_lock )
!$omp declare target(omp_destroy_lock )
!$omp declare target(omp_set_lock )
!$omp declare target(omp_unset_lock )
!$omp declare target(omp_test_lock )
!$omp declare target(omp_init_nest_lock )
!$omp declare target(omp_destroy_nest_lock )
!$omp declare target(omp_set_nest_lock )
!$omp declare target(omp_unset_nest_lock )
!$omp declare target(omp_test_nest_lock )
!$omp declare target(omp_get_max_task_priority )
!$omp declare target(omp_set_affinity_format )
!$omp declare target(omp_get_affinity_format )
!$omp declare target(omp_display_affinity )
!$omp declare target(omp_capture_affinity )
!$omp declare target(kmp_set_stacksize )
!$omp declare target(kmp_set_stacksize_s )
!$omp declare target(kmp_set_blocktime )
!$omp declare target(kmp_set_library_serial )
!$omp declare target(kmp_set_library_turnaround )
!$omp declare target(kmp_set_library_throughput )
!$omp declare target(kmp_set_library )
!$omp declare target(kmp_set_defaults )
!$omp declare target(kmp_get_stacksize )
!$omp declare target(kmp_get_stacksize_s )
!$omp declare target(kmp_get_blocktime )
!$omp declare target(kmp_get_library )
!$omp declare target(kmp_set_disp_num_buffers )
!$omp declare target(kmp_set_affinity )
!$omp declare target(kmp_get_affinity )
!$omp declare target(kmp_get_affinity_max_proc )
!$omp declare target(kmp_create_affinity_mask )
!$omp declare target(kmp_destroy_affinity_mask )
!$omp declare target(kmp_set_affinity_mask_proc )
!$omp declare target(kmp_unset_affinity_mask_proc )
!$omp declare target(kmp_get_affinity_mask_proc )
!$omp declare target(kmp_malloc )
!$omp declare target(kmp_aligned_malloc )
!$omp declare target(kmp_calloc )
!$omp declare target(kmp_realloc )
!$omp declare target(kmp_free )
!$omp declare target(kmp_set_warnings_on )
!$omp declare target(kmp_set_warnings_off )
!$omp declare target(omp_init_lock_with_hint )
!$omp declare target(omp_init_nest_lock_with_hint )
!DIR$ ENDIF
!DIR$ ENDIF

4014
runtime/src/kmp.h Normal file

File diff suppressed because it is too large

5379
runtime/src/kmp_affinity.cpp Normal file

File diff suppressed because it is too large

828
runtime/src/kmp_affinity.h Normal file
View File

@ -0,0 +1,828 @@
/*
* kmp_affinity.h -- header for affinity management
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_AFFINITY_H
#define KMP_AFFINITY_H
#include "kmp.h"
#include "kmp_os.h"
#if KMP_AFFINITY_SUPPORTED
#if KMP_USE_HWLOC
class KMPHwlocAffinity : public KMPAffinity {
public:
class Mask : public KMPAffinity::Mask {
hwloc_cpuset_t mask;
public:
Mask() {
mask = hwloc_bitmap_alloc();
this->zero();
}
~Mask() { hwloc_bitmap_free(mask); }
void set(int i) override { hwloc_bitmap_set(mask, i); }
bool is_set(int i) const override { return hwloc_bitmap_isset(mask, i); }
void clear(int i) override { hwloc_bitmap_clr(mask, i); }
void zero() override { hwloc_bitmap_zero(mask); }
void copy(const KMPAffinity::Mask *src) override {
const Mask *convert = static_cast<const Mask *>(src);
hwloc_bitmap_copy(mask, convert->mask);
}
void bitwise_and(const KMPAffinity::Mask *rhs) override {
const Mask *convert = static_cast<const Mask *>(rhs);
hwloc_bitmap_and(mask, mask, convert->mask);
}
void bitwise_or(const KMPAffinity::Mask *rhs) override {
const Mask *convert = static_cast<const Mask *>(rhs);
hwloc_bitmap_or(mask, mask, convert->mask);
}
void bitwise_not() override { hwloc_bitmap_not(mask, mask); }
int begin() const override { return hwloc_bitmap_first(mask); }
int end() const override { return -1; }
int next(int previous) const override {
return hwloc_bitmap_next(mask, previous);
}
int get_system_affinity(bool abort_on_error) override {
KMP_ASSERT2(KMP_AFFINITY_CAPABLE(),
"Illegal get affinity operation when not capable");
int retval =
hwloc_get_cpubind(__kmp_hwloc_topology, mask, HWLOC_CPUBIND_THREAD);
if (retval >= 0) {
return 0;
}
int error = errno;
if (abort_on_error) {
__kmp_fatal(KMP_MSG(FatalSysError), KMP_ERR(error), __kmp_msg_null);
}
return error;
}
int set_system_affinity(bool abort_on_error) const override {
KMP_ASSERT2(KMP_AFFINITY_CAPABLE(),
"Illegal set affinity operation when not capable");
int retval =
hwloc_set_cpubind(__kmp_hwloc_topology, mask, HWLOC_CPUBIND_THREAD);
if (retval >= 0) {
return 0;
}
int error = errno;
if (abort_on_error) {
__kmp_fatal(KMP_MSG(FatalSysError), KMP_ERR(error), __kmp_msg_null);
}
return error;
}
int get_proc_group() const override {
int group = -1;
#if KMP_OS_WINDOWS
if (__kmp_num_proc_groups == 1) {
return 1;
}
for (int i = 0; i < __kmp_num_proc_groups; i++) {
// On Windows, the long type is always 32 bits
unsigned long first_32_bits = hwloc_bitmap_to_ith_ulong(mask, i * 2);
unsigned long second_32_bits =
hwloc_bitmap_to_ith_ulong(mask, i * 2 + 1);
if (first_32_bits == 0 && second_32_bits == 0) {
continue;
}
if (group >= 0) {
return -1;
}
group = i;
}
#endif /* KMP_OS_WINDOWS */
return group;
}
};
void determine_capable(const char *var) override {
const hwloc_topology_support *topology_support;
if (__kmp_hwloc_topology == NULL) {
if (hwloc_topology_init(&__kmp_hwloc_topology) < 0) {
__kmp_hwloc_error = TRUE;
if (__kmp_affinity_verbose)
KMP_WARNING(AffHwlocErrorOccurred, var, "hwloc_topology_init()");
}
if (hwloc_topology_load(__kmp_hwloc_topology) < 0) {
__kmp_hwloc_error = TRUE;
if (__kmp_affinity_verbose)
KMP_WARNING(AffHwlocErrorOccurred, var, "hwloc_topology_load()");
}
}
topology_support = hwloc_topology_get_support(__kmp_hwloc_topology);
// Is the system capable of setting/getting this thread's affinity?
// Also, is topology discovery possible? (pu indicates ability to discover
// processing units). And finally, were there no errors when calling any
// hwloc_* API functions?
if (topology_support && topology_support->cpubind->set_thisthread_cpubind &&
topology_support->cpubind->get_thisthread_cpubind &&
topology_support->discovery->pu && !__kmp_hwloc_error) {
// enables affinity according to KMP_AFFINITY_CAPABLE() macro
KMP_AFFINITY_ENABLE(TRUE);
} else {
// indicate that hwloc didn't work and disable affinity
__kmp_hwloc_error = TRUE;
KMP_AFFINITY_DISABLE();
}
}
void bind_thread(int which) override {
KMP_ASSERT2(KMP_AFFINITY_CAPABLE(),
"Illegal set affinity operation when not capable");
KMPAffinity::Mask *mask;
KMP_CPU_ALLOC_ON_STACK(mask);
KMP_CPU_ZERO(mask);
KMP_CPU_SET(which, mask);
__kmp_set_system_affinity(mask, TRUE);
KMP_CPU_FREE_FROM_STACK(mask);
}
KMPAffinity::Mask *allocate_mask() override { return new Mask(); }
void deallocate_mask(KMPAffinity::Mask *m) override { delete m; }
KMPAffinity::Mask *allocate_mask_array(int num) override {
return new Mask[num];
}
void deallocate_mask_array(KMPAffinity::Mask *array) override {
Mask *hwloc_array = static_cast<Mask *>(array);
delete[] hwloc_array;
}
KMPAffinity::Mask *index_mask_array(KMPAffinity::Mask *array,
int index) override {
Mask *hwloc_array = static_cast<Mask *>(array);
return &(hwloc_array[index]);
}
api_type get_api_type() const override { return HWLOC; }
};
#endif /* KMP_USE_HWLOC */
#if KMP_OS_LINUX
/* On some of the older OSes that we build on, these constants aren't present
in <asm/unistd.h>, #included from <sys/syscall.h>. They must be the same on
all systems of the same arch where they are defined, and they cannot change:
they are set in stone forever. */
#include <sys/syscall.h>
#if KMP_ARCH_X86 || KMP_ARCH_ARM
#ifndef __NR_sched_setaffinity
#define __NR_sched_setaffinity 241
#elif __NR_sched_setaffinity != 241
#error Wrong code for setaffinity system call.
#endif /* __NR_sched_setaffinity */
#ifndef __NR_sched_getaffinity
#define __NR_sched_getaffinity 242
#elif __NR_sched_getaffinity != 242
#error Wrong code for getaffinity system call.
#endif /* __NR_sched_getaffinity */
#elif KMP_ARCH_AARCH64
#ifndef __NR_sched_setaffinity
#define __NR_sched_setaffinity 122
#elif __NR_sched_setaffinity != 122
#error Wrong code for setaffinity system call.
#endif /* __NR_sched_setaffinity */
#ifndef __NR_sched_getaffinity
#define __NR_sched_getaffinity 123
#elif __NR_sched_getaffinity != 123
#error Wrong code for getaffinity system call.
#endif /* __NR_sched_getaffinity */
#elif KMP_ARCH_X86_64
#ifndef __NR_sched_setaffinity
#define __NR_sched_setaffinity 203
#elif __NR_sched_setaffinity != 203
#error Wrong code for setaffinity system call.
#endif /* __NR_sched_setaffinity */
#ifndef __NR_sched_getaffinity
#define __NR_sched_getaffinity 204
#elif __NR_sched_getaffinity != 204
#error Wrong code for getaffinity system call.
#endif /* __NR_sched_getaffinity */
#elif KMP_ARCH_PPC64
#ifndef __NR_sched_setaffinity
#define __NR_sched_setaffinity 222
#elif __NR_sched_setaffinity != 222
#error Wrong code for setaffinity system call.
#endif /* __NR_sched_setaffinity */
#ifndef __NR_sched_getaffinity
#define __NR_sched_getaffinity 223
#elif __NR_sched_getaffinity != 223
#error Wrong code for getaffinity system call.
#endif /* __NR_sched_getaffinity */
#elif KMP_ARCH_MIPS
#ifndef __NR_sched_setaffinity
#define __NR_sched_setaffinity 4239
#elif __NR_sched_setaffinity != 4239
#error Wrong code for setaffinity system call.
#endif /* __NR_sched_setaffinity */
#ifndef __NR_sched_getaffinity
#define __NR_sched_getaffinity 4240
#elif __NR_sched_getaffinity != 4240
#error Wrong code for getaffinity system call.
#endif /* __NR_sched_getaffinity */
#elif KMP_ARCH_MIPS64
#ifndef __NR_sched_setaffinity
#define __NR_sched_setaffinity 5195
#elif __NR_sched_setaffinity != 5195
#error Wrong code for setaffinity system call.
#endif /* __NR_sched_setaffinity */
#ifndef __NR_sched_getaffinity
#define __NR_sched_getaffinity 5196
#elif __NR_sched_getaffinity != 5196
#error Wrong code for getaffinity system call.
#endif /* __NR_sched_getaffinity */
#else
#error Unknown or unsupported architecture
#endif /* KMP_ARCH_* */
class KMPNativeAffinity : public KMPAffinity {
class Mask : public KMPAffinity::Mask {
typedef unsigned char mask_t;
static const int BITS_PER_MASK_T = sizeof(mask_t) * CHAR_BIT;
public:
mask_t *mask;
Mask() { mask = (mask_t *)__kmp_allocate(__kmp_affin_mask_size); }
~Mask() {
if (mask)
__kmp_free(mask);
}
void set(int i) override {
mask[i / BITS_PER_MASK_T] |= ((mask_t)1 << (i % BITS_PER_MASK_T));
}
bool is_set(int i) const override {
return (mask[i / BITS_PER_MASK_T] & ((mask_t)1 << (i % BITS_PER_MASK_T)));
}
void clear(int i) override {
mask[i / BITS_PER_MASK_T] &= ~((mask_t)1 << (i % BITS_PER_MASK_T));
}
void zero() override {
for (size_t i = 0; i < __kmp_affin_mask_size; ++i)
mask[i] = 0;
}
void copy(const KMPAffinity::Mask *src) override {
const Mask *convert = static_cast<const Mask *>(src);
for (size_t i = 0; i < __kmp_affin_mask_size; ++i)
mask[i] = convert->mask[i];
}
void bitwise_and(const KMPAffinity::Mask *rhs) override {
const Mask *convert = static_cast<const Mask *>(rhs);
for (size_t i = 0; i < __kmp_affin_mask_size; ++i)
mask[i] &= convert->mask[i];
}
void bitwise_or(const KMPAffinity::Mask *rhs) override {
const Mask *convert = static_cast<const Mask *>(rhs);
for (size_t i = 0; i < __kmp_affin_mask_size; ++i)
mask[i] |= convert->mask[i];
}
void bitwise_not() override {
for (size_t i = 0; i < __kmp_affin_mask_size; ++i)
mask[i] = ~(mask[i]);
}
int begin() const override {
int retval = 0;
while (retval < end() && !is_set(retval))
++retval;
return retval;
}
int end() const override { return __kmp_affin_mask_size * BITS_PER_MASK_T; }
int next(int previous) const override {
int retval = previous + 1;
while (retval < end() && !is_set(retval))
++retval;
return retval;
}
int get_system_affinity(bool abort_on_error) override {
KMP_ASSERT2(KMP_AFFINITY_CAPABLE(),
"Illegal get affinity operation when not capable");
int retval =
syscall(__NR_sched_getaffinity, 0, __kmp_affin_mask_size, mask);
if (retval >= 0) {
return 0;
}
int error = errno;
if (abort_on_error) {
__kmp_fatal(KMP_MSG(FatalSysError), KMP_ERR(error), __kmp_msg_null);
}
return error;
}
int set_system_affinity(bool abort_on_error) const override {
KMP_ASSERT2(KMP_AFFINITY_CAPABLE(),
"Illegal set affinity operation when not capable");
int retval =
syscall(__NR_sched_setaffinity, 0, __kmp_affin_mask_size, mask);
if (retval >= 0) {
return 0;
}
int error = errno;
if (abort_on_error) {
__kmp_fatal(KMP_MSG(FatalSysError), KMP_ERR(error), __kmp_msg_null);
}
return error;
}
};
void determine_capable(const char *env_var) override {
__kmp_affinity_determine_capable(env_var);
}
void bind_thread(int which) override { __kmp_affinity_bind_thread(which); }
KMPAffinity::Mask *allocate_mask() override {
KMPNativeAffinity::Mask *retval = new Mask();
return retval;
}
void deallocate_mask(KMPAffinity::Mask *m) override {
KMPNativeAffinity::Mask *native_mask =
static_cast<KMPNativeAffinity::Mask *>(m);
delete native_mask;
}
KMPAffinity::Mask *allocate_mask_array(int num) override {
return new Mask[num];
}
void deallocate_mask_array(KMPAffinity::Mask *array) override {
Mask *linux_array = static_cast<Mask *>(array);
delete[] linux_array;
}
KMPAffinity::Mask *index_mask_array(KMPAffinity::Mask *array,
int index) override {
Mask *linux_array = static_cast<Mask *>(array);
return &(linux_array[index]);
}
api_type get_api_type() const override { return NATIVE_OS; }
};
#endif /* KMP_OS_LINUX */
#if KMP_OS_WINDOWS
class KMPNativeAffinity : public KMPAffinity {
class Mask : public KMPAffinity::Mask {
typedef ULONG_PTR mask_t;
static const int BITS_PER_MASK_T = sizeof(mask_t) * CHAR_BIT;
mask_t *mask;
public:
Mask() {
mask = (mask_t *)__kmp_allocate(sizeof(mask_t) * __kmp_num_proc_groups);
}
~Mask() {
if (mask)
__kmp_free(mask);
}
void set(int i) override {
mask[i / BITS_PER_MASK_T] |= ((mask_t)1 << (i % BITS_PER_MASK_T));
}
bool is_set(int i) const override {
return (mask[i / BITS_PER_MASK_T] & ((mask_t)1 << (i % BITS_PER_MASK_T)));
}
void clear(int i) override {
mask[i / BITS_PER_MASK_T] &= ~((mask_t)1 << (i % BITS_PER_MASK_T));
}
void zero() override {
for (int i = 0; i < __kmp_num_proc_groups; ++i)
mask[i] = 0;
}
void copy(const KMPAffinity::Mask *src) override {
const Mask *convert = static_cast<const Mask *>(src);
for (int i = 0; i < __kmp_num_proc_groups; ++i)
mask[i] = convert->mask[i];
}
void bitwise_and(const KMPAffinity::Mask *rhs) override {
const Mask *convert = static_cast<const Mask *>(rhs);
for (int i = 0; i < __kmp_num_proc_groups; ++i)
mask[i] &= convert->mask[i];
}
void bitwise_or(const KMPAffinity::Mask *rhs) override {
const Mask *convert = static_cast<const Mask *>(rhs);
for (int i = 0; i < __kmp_num_proc_groups; ++i)
mask[i] |= convert->mask[i];
}
void bitwise_not() override {
for (int i = 0; i < __kmp_num_proc_groups; ++i)
mask[i] = ~(mask[i]);
}
int begin() const override {
int retval = 0;
while (retval < end() && !is_set(retval))
++retval;
return retval;
}
int end() const override { return __kmp_num_proc_groups * BITS_PER_MASK_T; }
int next(int previous) const override {
int retval = previous + 1;
while (retval < end() && !is_set(retval))
++retval;
return retval;
}
int set_system_affinity(bool abort_on_error) const override {
if (__kmp_num_proc_groups > 1) {
// Check for a valid mask.
GROUP_AFFINITY ga;
int group = get_proc_group();
if (group < 0) {
if (abort_on_error) {
KMP_FATAL(AffinityInvalidMask, "kmp_set_affinity");
}
return -1;
}
// Transform the bit vector into a GROUP_AFFINITY struct
// and make the system call to set affinity.
ga.Group = group;
ga.Mask = mask[group];
ga.Reserved[0] = ga.Reserved[1] = ga.Reserved[2] = 0;
KMP_DEBUG_ASSERT(__kmp_SetThreadGroupAffinity != NULL);
if (__kmp_SetThreadGroupAffinity(GetCurrentThread(), &ga, NULL) == 0) {
DWORD error = GetLastError();
if (abort_on_error) {
__kmp_fatal(KMP_MSG(CantSetThreadAffMask), KMP_ERR(error),
__kmp_msg_null);
}
return error;
}
} else {
if (!SetThreadAffinityMask(GetCurrentThread(), *mask)) {
DWORD error = GetLastError();
if (abort_on_error) {
__kmp_fatal(KMP_MSG(CantSetThreadAffMask), KMP_ERR(error),
__kmp_msg_null);
}
return error;
}
}
return 0;
}
int get_system_affinity(bool abort_on_error) override {
if (__kmp_num_proc_groups > 1) {
this->zero();
GROUP_AFFINITY ga;
KMP_DEBUG_ASSERT(__kmp_GetThreadGroupAffinity != NULL);
if (__kmp_GetThreadGroupAffinity(GetCurrentThread(), &ga) == 0) {
DWORD error = GetLastError();
if (abort_on_error) {
__kmp_fatal(KMP_MSG(FunctionError, "GetThreadGroupAffinity()"),
KMP_ERR(error), __kmp_msg_null);
}
return error;
}
if ((ga.Group < 0) || (ga.Group > __kmp_num_proc_groups) ||
(ga.Mask == 0)) {
return -1;
}
mask[ga.Group] = ga.Mask;
} else {
mask_t newMask, sysMask, retval;
if (!GetProcessAffinityMask(GetCurrentProcess(), &newMask, &sysMask)) {
DWORD error = GetLastError();
if (abort_on_error) {
__kmp_fatal(KMP_MSG(FunctionError, "GetProcessAffinityMask()"),
KMP_ERR(error), __kmp_msg_null);
}
return error;
}
retval = SetThreadAffinityMask(GetCurrentThread(), newMask);
if (!retval) {
DWORD error = GetLastError();
if (abort_on_error) {
__kmp_fatal(KMP_MSG(FunctionError, "SetThreadAffinityMask()"),
KMP_ERR(error), __kmp_msg_null);
}
return error;
}
newMask = SetThreadAffinityMask(GetCurrentThread(), retval);
if (!newMask) {
DWORD error = GetLastError();
if (abort_on_error) {
__kmp_fatal(KMP_MSG(FunctionError, "SetThreadAffinityMask()"),
KMP_ERR(error), __kmp_msg_null);
}
}
*mask = retval;
}
return 0;
}
int get_proc_group() const override {
int group = -1;
if (__kmp_num_proc_groups == 1) {
return 1;
}
for (int i = 0; i < __kmp_num_proc_groups; i++) {
if (mask[i] == 0)
continue;
if (group >= 0)
return -1;
group = i;
}
return group;
}
};
void determine_capable(const char *env_var) override {
__kmp_affinity_determine_capable(env_var);
}
void bind_thread(int which) override { __kmp_affinity_bind_thread(which); }
KMPAffinity::Mask *allocate_mask() override { return new Mask(); }
void deallocate_mask(KMPAffinity::Mask *m) override { delete m; }
KMPAffinity::Mask *allocate_mask_array(int num) override {
return new Mask[num];
}
void deallocate_mask_array(KMPAffinity::Mask *array) override {
Mask *windows_array = static_cast<Mask *>(array);
delete[] windows_array;
}
KMPAffinity::Mask *index_mask_array(KMPAffinity::Mask *array,
int index) override {
Mask *windows_array = static_cast<Mask *>(array);
return &(windows_array[index]);
}
api_type get_api_type() const override { return NATIVE_OS; }
};
#endif /* KMP_OS_WINDOWS */
#endif /* KMP_AFFINITY_SUPPORTED */
class Address {
public:
static const unsigned maxDepth = 32;
unsigned labels[maxDepth];
unsigned childNums[maxDepth];
unsigned depth;
unsigned leader;
Address(unsigned _depth) : depth(_depth), leader(FALSE) {}
Address &operator=(const Address &b) {
depth = b.depth;
for (unsigned i = 0; i < depth; i++) {
labels[i] = b.labels[i];
childNums[i] = b.childNums[i];
}
leader = FALSE;
return *this;
}
bool operator==(const Address &b) const {
if (depth != b.depth)
return false;
for (unsigned i = 0; i < depth; i++)
if (labels[i] != b.labels[i])
return false;
return true;
}
bool isClose(const Address &b, int level) const {
if (depth != b.depth)
return false;
if ((unsigned)level >= depth)
return true;
for (unsigned i = 0; i < (depth - level); i++)
if (labels[i] != b.labels[i])
return false;
return true;
}
bool operator!=(const Address &b) const { return !operator==(b); }
void print() const {
unsigned i;
printf("Depth: %u --- ", depth);
for (i = 0; i < depth; i++) {
printf("%u ", labels[i]);
}
}
};
class AddrUnsPair {
public:
Address first;
unsigned second;
AddrUnsPair(Address _first, unsigned _second)
: first(_first), second(_second) {}
AddrUnsPair &operator=(const AddrUnsPair &b) {
first = b.first;
second = b.second;
return *this;
}
void print() const {
printf("first = ");
first.print();
printf(" --- second = %u", second);
}
bool operator==(const AddrUnsPair &b) const {
if (first != b.first)
return false;
if (second != b.second)
return false;
return true;
}
bool operator!=(const AddrUnsPair &b) const { return !operator==(b); }
};
static int __kmp_affinity_cmp_Address_labels(const void *a, const void *b) {
const Address *aa = &(((const AddrUnsPair *)a)->first);
const Address *bb = &(((const AddrUnsPair *)b)->first);
unsigned depth = aa->depth;
unsigned i;
KMP_DEBUG_ASSERT(depth == bb->depth);
for (i = 0; i < depth; i++) {
if (aa->labels[i] < bb->labels[i])
return -1;
if (aa->labels[i] > bb->labels[i])
return 1;
}
return 0;
}
/* A structure for holding machine-specific hierarchy info to be computed once
at init. This structure represents a mapping of threads to the actual machine
hierarchy, or to our best guess at what the hierarchy might be, for the
purpose of performing an efficient barrier. In the worst case, when there is
no machine hierarchy information, it produces a tree suitable for a barrier,
similar to the tree used in the hyper barrier. */
class hierarchy_info {
public:
/* Good default values for number of leaves and branching factor, given no
affinity information. Behaves a bit like hyper barrier. */
static const kmp_uint32 maxLeaves = 4;
static const kmp_uint32 minBranch = 4;
/** Number of levels in the hierarchy. Typical levels are threads/core,
cores/package or socket, packages/node, nodes/machine, etc. We don't want
to get specific with nomenclature. When the machine is oversubscribed we
add levels to duplicate the hierarchy, doubling the thread capacity of the
hierarchy each time we add a level. */
kmp_uint32 maxLevels;
/** This is specifically the depth of the machine configuration hierarchy, in
terms of the number of levels along the longest path from root to any
leaf. It corresponds to the number of entries in numPerLevel if we exclude
all but one trailing 1. */
kmp_uint32 depth;
kmp_uint32 base_num_threads;
enum init_status { initialized = 0, not_initialized = 1, initializing = 2 };
volatile kmp_int8 uninitialized; // 0=initialized, 1=not initialized,
// 2=initialization in progress
volatile kmp_int8 resizing; // 0=not resizing, 1=resizing
/** Level 0 corresponds to leaves. numPerLevel[i] is the number of children
the parent of a node at level i has. For example, if we have a machine
with 4 packages, 4 cores/package and 2 HT per core, then numPerLevel =
{2, 4, 4, 1, 1}. All empty levels are set to 1. */
kmp_uint32 *numPerLevel;
kmp_uint32 *skipPerLevel;
void deriveLevels(AddrUnsPair *adr2os, int num_addrs) {
int hier_depth = adr2os[0].first.depth;
int level = 0;
for (int i = hier_depth - 1; i >= 0; --i) {
int max = -1;
for (int j = 0; j < num_addrs; ++j) {
int next = adr2os[j].first.childNums[i];
if (next > max)
max = next;
}
numPerLevel[level] = max + 1;
++level;
}
}
hierarchy_info()
: maxLevels(7), depth(1), uninitialized(not_initialized), resizing(0) {}
void fini() {
if (!uninitialized && numPerLevel) {
__kmp_free(numPerLevel);
numPerLevel = NULL;
uninitialized = not_initialized;
}
}
void init(AddrUnsPair *adr2os, int num_addrs) {
kmp_int8 bool_result = KMP_COMPARE_AND_STORE_ACQ8(
&uninitialized, not_initialized, initializing);
if (bool_result == 0) { // Wait for initialization
while (TCR_1(uninitialized) != initialized)
KMP_CPU_PAUSE();
return;
}
KMP_DEBUG_ASSERT(bool_result == 1);
/* Added explicit initialization of the data fields here to prevent usage of
dirty value observed when static library is re-initialized multiple times
(e.g. when non-OpenMP thread repeatedly launches/joins thread that uses
OpenMP). */
depth = 1;
resizing = 0;
maxLevels = 7;
numPerLevel =
(kmp_uint32 *)__kmp_allocate(maxLevels * 2 * sizeof(kmp_uint32));
skipPerLevel = &(numPerLevel[maxLevels]);
for (kmp_uint32 i = 0; i < maxLevels;
++i) { // init numPerLevel[*] to 1 item per level
numPerLevel[i] = 1;
skipPerLevel[i] = 1;
}
// Sort table by physical ID
if (adr2os) {
qsort(adr2os, num_addrs, sizeof(*adr2os),
__kmp_affinity_cmp_Address_labels);
deriveLevels(adr2os, num_addrs);
} else {
numPerLevel[0] = maxLeaves;
numPerLevel[1] = num_addrs / maxLeaves;
if (num_addrs % maxLeaves)
numPerLevel[1]++;
}
base_num_threads = num_addrs;
for (int i = maxLevels - 1; i >= 0;
--i) // count non-empty levels to get depth
if (numPerLevel[i] != 1 || depth > 1) // only count one top-level '1'
depth++;
kmp_uint32 branch = minBranch;
if (numPerLevel[0] == 1)
branch = num_addrs / maxLeaves;
if (branch < minBranch)
branch = minBranch;
for (kmp_uint32 d = 0; d < depth - 1; ++d) { // optimize hierarchy width
while (numPerLevel[d] > branch ||
(d == 0 && numPerLevel[d] > maxLeaves)) { // max 4 on level 0!
if (numPerLevel[d] & 1)
numPerLevel[d]++;
numPerLevel[d] = numPerLevel[d] >> 1;
if (numPerLevel[d + 1] == 1)
depth++;
numPerLevel[d + 1] = numPerLevel[d + 1] << 1;
}
if (numPerLevel[0] == 1) {
branch = branch >> 1;
if (branch < 4)
branch = minBranch;
}
}
for (kmp_uint32 i = 1; i < depth; ++i)
skipPerLevel[i] = numPerLevel[i - 1] * skipPerLevel[i - 1];
// Fill in hierarchy in the case of oversubscription
for (kmp_uint32 i = depth; i < maxLevels; ++i)
skipPerLevel[i] = 2 * skipPerLevel[i - 1];
uninitialized = initialized; // One writer
}
// Resize the hierarchy if nproc changes to something larger than before
void resize(kmp_uint32 nproc) {
kmp_int8 bool_result = KMP_COMPARE_AND_STORE_ACQ8(&resizing, 0, 1);
while (bool_result == 0) { // someone else is trying to resize
KMP_CPU_PAUSE();
if (nproc <= base_num_threads) // happy with other thread's resize
return;
else // try to resize
bool_result = KMP_COMPARE_AND_STORE_ACQ8(&resizing, 0, 1);
}
KMP_DEBUG_ASSERT(bool_result != 0);
if (nproc <= base_num_threads)
return; // happy with other thread's resize
// Calculate new maxLevels
kmp_uint32 old_sz = skipPerLevel[depth - 1];
kmp_uint32 incs = 0, old_maxLevels = maxLevels;
// First see if old maxLevels is enough to contain new size
for (kmp_uint32 i = depth; i < maxLevels && nproc > old_sz; ++i) {
skipPerLevel[i] = 2 * skipPerLevel[i - 1];
numPerLevel[i - 1] *= 2;
old_sz *= 2;
depth++;
}
if (nproc > old_sz) { // Not enough space, need to expand hierarchy
while (nproc > old_sz) {
old_sz *= 2;
incs++;
depth++;
}
maxLevels += incs;
// Resize arrays
kmp_uint32 *old_numPerLevel = numPerLevel;
kmp_uint32 *old_skipPerLevel = skipPerLevel;
numPerLevel = skipPerLevel = NULL;
numPerLevel =
(kmp_uint32 *)__kmp_allocate(maxLevels * 2 * sizeof(kmp_uint32));
skipPerLevel = &(numPerLevel[maxLevels]);
// Copy old elements from old arrays
for (kmp_uint32 i = 0; i < old_maxLevels;
++i) { // init numPerLevel[*] to 1 item per level
numPerLevel[i] = old_numPerLevel[i];
skipPerLevel[i] = old_skipPerLevel[i];
}
// Init new elements in arrays to 1
for (kmp_uint32 i = old_maxLevels; i < maxLevels;
++i) { // init numPerLevel[*] to 1 item per level
numPerLevel[i] = 1;
skipPerLevel[i] = 1;
}
// Free old arrays
__kmp_free(old_numPerLevel);
}
// Fill in oversubscription levels of hierarchy
for (kmp_uint32 i = old_maxLevels; i < maxLevels; ++i)
skipPerLevel[i] = 2 * skipPerLevel[i - 1];
base_num_threads = nproc;
resizing = 0; // One writer
}
};
#endif // KMP_AFFINITY_H

1809
runtime/src/kmp_alloc.cpp Normal file

File diff suppressed because it is too large

3630
runtime/src/kmp_atomic.cpp Normal file

File diff suppressed because it is too large

1776
runtime/src/kmp_atomic.h Normal file

File diff suppressed because it is too large

2067
runtime/src/kmp_barrier.cpp Normal file

File diff suppressed because it is too large

336
runtime/src/kmp_cancel.cpp Normal file

@@ -0,0 +1,336 @@
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp.h"
#include "kmp_i18n.h"
#include "kmp_io.h"
#include "kmp_str.h"
#if OMPT_SUPPORT
#include "ompt-specific.h"
#endif
#if OMP_40_ENABLED
/*!
@ingroup CANCELLATION
@param loc_ref location of the original task directive
@param gtid Global thread ID of encountering thread
@param cncl_kind Cancellation kind (parallel, for, sections, taskgroup)
@return returns true if the cancellation request has been activated and the
execution thread needs to proceed to the end of the canceled region.
Request cancellation of the binding OpenMP region.
*/
kmp_int32 __kmpc_cancel(ident_t *loc_ref, kmp_int32 gtid, kmp_int32 cncl_kind) {
kmp_info_t *this_thr = __kmp_threads[gtid];
KC_TRACE(10, ("__kmpc_cancel: T#%d request %d OMP_CANCELLATION=%d\n", gtid,
cncl_kind, __kmp_omp_cancellation));
KMP_DEBUG_ASSERT(cncl_kind != cancel_noreq);
KMP_DEBUG_ASSERT(cncl_kind == cancel_parallel || cncl_kind == cancel_loop ||
cncl_kind == cancel_sections ||
cncl_kind == cancel_taskgroup);
KMP_DEBUG_ASSERT(__kmp_get_gtid() == gtid);
if (__kmp_omp_cancellation) {
switch (cncl_kind) {
case cancel_parallel:
case cancel_loop:
case cancel_sections:
// cancellation requests for parallel and worksharing constructs
// are handled through the team structure
{
kmp_team_t *this_team = this_thr->th.th_team;
KMP_DEBUG_ASSERT(this_team);
kmp_int32 old = cancel_noreq;
this_team->t.t_cancel_request.compare_exchange_strong(old, cncl_kind);
if (old == cancel_noreq || old == cncl_kind) {
// we do not have a cancellation request in this team or we do have
// one that matches the current request -> cancel
#if OMPT_SUPPORT && OMPT_OPTIONAL
if (ompt_enabled.ompt_callback_cancel) {
ompt_data_t *task_data;
__ompt_get_task_info_internal(0, NULL, &task_data, NULL, NULL,
NULL);
ompt_cancel_flag_t type = ompt_cancel_parallel;
if (cncl_kind == cancel_parallel)
type = ompt_cancel_parallel;
else if (cncl_kind == cancel_loop)
type = ompt_cancel_loop;
else if (cncl_kind == cancel_sections)
type = ompt_cancel_sections;
ompt_callbacks.ompt_callback(ompt_callback_cancel)(
task_data, type | ompt_cancel_activated,
OMPT_GET_RETURN_ADDRESS(0));
}
#endif
return 1 /* true */;
}
break;
}
case cancel_taskgroup:
// cancellation requests for a task group
// are handled through the taskgroup structure
{
kmp_taskdata_t *task;
kmp_taskgroup_t *taskgroup;
task = this_thr->th.th_current_task;
KMP_DEBUG_ASSERT(task);
taskgroup = task->td_taskgroup;
if (taskgroup) {
kmp_int32 old = cancel_noreq;
taskgroup->cancel_request.compare_exchange_strong(old, cncl_kind);
if (old == cancel_noreq || old == cncl_kind) {
// we do not have a cancellation request in this taskgroup or we do
// have one that matches the current request -> cancel
#if OMPT_SUPPORT && OMPT_OPTIONAL
if (ompt_enabled.ompt_callback_cancel) {
ompt_data_t *task_data;
__ompt_get_task_info_internal(0, NULL, &task_data, NULL, NULL,
NULL);
ompt_callbacks.ompt_callback(ompt_callback_cancel)(
task_data, ompt_cancel_taskgroup | ompt_cancel_activated,
OMPT_GET_RETURN_ADDRESS(0));
}
#endif
return 1 /* true */;
}
} else {
// TODO: what needs to happen here?
// the specification disallows cancellation w/o taskgroups
// so we might do anything here, let's abort for now
KMP_ASSERT(0 /* false */);
}
}
break;
default:
KMP_ASSERT(0 /* false */);
}
}
// ICV OMP_CANCELLATION=false, so we ignored this cancel request
KMP_DEBUG_ASSERT(!__kmp_omp_cancellation);
return 0 /* false */;
}
/*!
@ingroup CANCELLATION
@param loc_ref location of the original task directive
@param gtid Global thread ID of encountering thread
@param cncl_kind Cancellation kind (parallel, for, sections, taskgroup)
@return returns true if a matching cancellation request has been flagged in the
RTL and the encountering thread has to cancel.
Cancellation point for the encountering thread.
*/
kmp_int32 __kmpc_cancellationpoint(ident_t *loc_ref, kmp_int32 gtid,
kmp_int32 cncl_kind) {
kmp_info_t *this_thr = __kmp_threads[gtid];
KC_TRACE(10,
("__kmpc_cancellationpoint: T#%d request %d OMP_CANCELLATION=%d\n",
gtid, cncl_kind, __kmp_omp_cancellation));
KMP_DEBUG_ASSERT(cncl_kind != cancel_noreq);
KMP_DEBUG_ASSERT(cncl_kind == cancel_parallel || cncl_kind == cancel_loop ||
cncl_kind == cancel_sections ||
cncl_kind == cancel_taskgroup);
KMP_DEBUG_ASSERT(__kmp_get_gtid() == gtid);
if (__kmp_omp_cancellation) {
switch (cncl_kind) {
case cancel_parallel:
case cancel_loop:
case cancel_sections:
// cancellation requests for parallel and worksharing constructs
// are handled through the team structure
{
kmp_team_t *this_team = this_thr->th.th_team;
KMP_DEBUG_ASSERT(this_team);
if (this_team->t.t_cancel_request) {
if (cncl_kind == this_team->t.t_cancel_request) {
// the request in the team structure matches the type of
// cancellation point so we can cancel
#if OMPT_SUPPORT && OMPT_OPTIONAL
if (ompt_enabled.ompt_callback_cancel) {
ompt_data_t *task_data;
__ompt_get_task_info_internal(0, NULL, &task_data, NULL, NULL,
NULL);
ompt_cancel_flag_t type = ompt_cancel_parallel;
if (cncl_kind == cancel_parallel)
type = ompt_cancel_parallel;
else if (cncl_kind == cancel_loop)
type = ompt_cancel_loop;
else if (cncl_kind == cancel_sections)
type = ompt_cancel_sections;
ompt_callbacks.ompt_callback(ompt_callback_cancel)(
task_data, type | ompt_cancel_detected,
OMPT_GET_RETURN_ADDRESS(0));
}
#endif
return 1 /* true */;
}
KMP_ASSERT(0 /* false */);
} else {
// we do not have a cancellation request pending, so we just
// ignore this cancellation point
return 0;
}
break;
}
case cancel_taskgroup:
// cancellation requests for a task group
// are handled through the taskgroup structure
{
kmp_taskdata_t *task;
kmp_taskgroup_t *taskgroup;
task = this_thr->th.th_current_task;
KMP_DEBUG_ASSERT(task);
taskgroup = task->td_taskgroup;
if (taskgroup) {
// return the current status of cancellation for the taskgroup
#if OMPT_SUPPORT && OMPT_OPTIONAL
if (ompt_enabled.ompt_callback_cancel &&
!!taskgroup->cancel_request) {
ompt_data_t *task_data;
__ompt_get_task_info_internal(0, NULL, &task_data, NULL, NULL,
NULL);
ompt_callbacks.ompt_callback(ompt_callback_cancel)(
task_data, ompt_cancel_taskgroup | ompt_cancel_detected,
OMPT_GET_RETURN_ADDRESS(0));
}
#endif
return !!taskgroup->cancel_request;
} else {
// if a cancellation point is encountered by a task that does not
// belong to a taskgroup, it is OK to ignore it
return 0 /* false */;
}
}
default:
KMP_ASSERT(0 /* false */);
}
}
// ICV OMP_CANCELLATION=false, so we ignore the cancellation point
KMP_DEBUG_ASSERT(!__kmp_omp_cancellation);
return 0 /* false */;
}
/*!
@ingroup CANCELLATION
@param loc_ref location of the original task directive
@param gtid Global thread ID of encountering thread
@return returns true if a matching cancellation request has been flagged in the
RTL and the encountering thread has to cancel.
Barrier with cancellation point to send threads from the barrier to the
end of the parallel region. Needs a special code pattern as documented
in the design document for the cancellation feature.
*/
kmp_int32 __kmpc_cancel_barrier(ident_t *loc, kmp_int32 gtid) {
int ret = 0 /* false */;
kmp_info_t *this_thr = __kmp_threads[gtid];
kmp_team_t *this_team = this_thr->th.th_team;
KMP_DEBUG_ASSERT(__kmp_get_gtid() == gtid);
// call into the standard barrier
__kmpc_barrier(loc, gtid);
// if cancellation is active, check cancellation flag
if (__kmp_omp_cancellation) {
// depending on which construct to cancel, check the flag and
// reset the flag
switch (KMP_ATOMIC_LD_RLX(&(this_team->t.t_cancel_request))) {
case cancel_parallel:
ret = 1;
// ensure that threads have checked the flag, when
// leaving the above barrier
__kmpc_barrier(loc, gtid);
this_team->t.t_cancel_request = cancel_noreq;
// the next barrier is the fork/join barrier, which
// synchronizes the threads leaving here
break;
case cancel_loop:
case cancel_sections:
ret = 1;
// ensure that threads have checked the flag, when
// leaving the above barrier
__kmpc_barrier(loc, gtid);
this_team->t.t_cancel_request = cancel_noreq;
// synchronize the threads again to make sure we do not have any run-away
// threads that cause a race on the cancellation flag
__kmpc_barrier(loc, gtid);
break;
case cancel_taskgroup:
// this case should not occur
KMP_ASSERT(0 /* false */);
break;
case cancel_noreq:
// do nothing
break;
default:
KMP_ASSERT(0 /* false */);
}
}
return ret;
}
/*!
@ingroup CANCELLATION
@param loc_ref location of the original task directive
@param gtid Global thread ID of encountering thread
@return returns true if a matching cancellation request has been flagged in the
RTL and the encountering thread has to cancel.
Query function to query the current status of cancellation requests.
Can be used to implement the following pattern:
if (kmp_get_cancellation_status(kmp_cancel_parallel)) {
perform_cleanup();
#pragma omp cancellation point parallel
}
*/
int __kmp_get_cancellation_status(int cancel_kind) {
if (__kmp_omp_cancellation) {
kmp_info_t *this_thr = __kmp_entry_thread();
switch (cancel_kind) {
case cancel_parallel:
case cancel_loop:
case cancel_sections: {
kmp_team_t *this_team = this_thr->th.th_team;
return this_team->t.t_cancel_request == cancel_kind;
}
case cancel_taskgroup: {
kmp_taskdata_t *task;
kmp_taskgroup_t *taskgroup;
task = this_thr->th.th_current_task;
taskgroup = task->td_taskgroup;
return taskgroup && taskgroup->cancel_request;
}
}
}
return 0 /* false */;
}
#endif


@@ -0,0 +1,117 @@
/*
* kmp_config.h -- Feature macros
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_CONFIG_H
#define KMP_CONFIG_H
#include "kmp_platform.h"
// cmakedefine01 MACRO will define MACRO as either 0 or 1
// cmakedefine MACRO 1 will define MACRO as 1 or leave undefined
#cmakedefine01 DEBUG_BUILD
#cmakedefine01 RELWITHDEBINFO_BUILD
#cmakedefine01 LIBOMP_USE_ITT_NOTIFY
#define USE_ITT_NOTIFY LIBOMP_USE_ITT_NOTIFY
#if ! LIBOMP_USE_ITT_NOTIFY
# define INTEL_NO_ITTNOTIFY_API
#endif
#cmakedefine01 LIBOMP_USE_VERSION_SYMBOLS
#if LIBOMP_USE_VERSION_SYMBOLS
# define KMP_USE_VERSION_SYMBOLS
#endif
#cmakedefine01 LIBOMP_HAVE_WEAK_ATTRIBUTE
#define KMP_HAVE_WEAK_ATTRIBUTE LIBOMP_HAVE_WEAK_ATTRIBUTE
#cmakedefine01 LIBOMP_HAVE_PSAPI
#define KMP_HAVE_PSAPI LIBOMP_HAVE_PSAPI
#cmakedefine01 LIBOMP_STATS
#define KMP_STATS_ENABLED LIBOMP_STATS
#cmakedefine01 LIBOMP_HAVE_X86INTRIN_H
#define KMP_HAVE_X86INTRIN_H LIBOMP_HAVE_X86INTRIN_H
#cmakedefine01 LIBOMP_HAVE___BUILTIN_READCYCLECOUNTER
#define KMP_HAVE___BUILTIN_READCYCLECOUNTER LIBOMP_HAVE___BUILTIN_READCYCLECOUNTER
#cmakedefine01 LIBOMP_HAVE___RDTSC
#define KMP_HAVE___RDTSC LIBOMP_HAVE___RDTSC
#cmakedefine01 LIBOMP_USE_DEBUGGER
#define USE_DEBUGGER LIBOMP_USE_DEBUGGER
#cmakedefine01 LIBOMP_OMPT_DEBUG
#define OMPT_DEBUG LIBOMP_OMPT_DEBUG
#cmakedefine01 LIBOMP_OMPT_SUPPORT
#define OMPT_SUPPORT LIBOMP_OMPT_SUPPORT
#cmakedefine01 LIBOMP_OMPT_OPTIONAL
#define OMPT_OPTIONAL LIBOMP_OMPT_OPTIONAL
#cmakedefine01 LIBOMP_USE_ADAPTIVE_LOCKS
#define KMP_USE_ADAPTIVE_LOCKS LIBOMP_USE_ADAPTIVE_LOCKS
#define KMP_DEBUG_ADAPTIVE_LOCKS 0
#cmakedefine01 LIBOMP_USE_INTERNODE_ALIGNMENT
#define KMP_USE_INTERNODE_ALIGNMENT LIBOMP_USE_INTERNODE_ALIGNMENT
#cmakedefine01 LIBOMP_ENABLE_ASSERTIONS
#define KMP_USE_ASSERT LIBOMP_ENABLE_ASSERTIONS
#cmakedefine01 LIBOMP_USE_HIER_SCHED
#define KMP_USE_HIER_SCHED LIBOMP_USE_HIER_SCHED
#cmakedefine01 STUBS_LIBRARY
#cmakedefine01 LIBOMP_USE_HWLOC
#define KMP_USE_HWLOC LIBOMP_USE_HWLOC
#cmakedefine01 LIBOMP_ENABLE_SHARED
#define KMP_DYNAMIC_LIB LIBOMP_ENABLE_SHARED
#define KMP_ARCH_STR "@LIBOMP_LEGAL_ARCH@"
#define KMP_LIBRARY_FILE "@LIBOMP_LIB_FILE@"
#define KMP_VERSION_MAJOR @LIBOMP_VERSION_MAJOR@
#define KMP_VERSION_MINOR @LIBOMP_VERSION_MINOR@
#define LIBOMP_OMP_VERSION @LIBOMP_OMP_VERSION@
#define OMP_50_ENABLED (LIBOMP_OMP_VERSION >= 50)
#define OMP_45_ENABLED (LIBOMP_OMP_VERSION >= 45)
#define OMP_40_ENABLED (LIBOMP_OMP_VERSION >= 40)
#define OMP_30_ENABLED (LIBOMP_OMP_VERSION >= 30)
#cmakedefine01 LIBOMP_TSAN_SUPPORT
#if LIBOMP_TSAN_SUPPORT
#define TSAN_SUPPORT
#endif
#cmakedefine01 MSVC
#define KMP_MSVC_COMPAT MSVC
// Configured cache line based on architecture
#if KMP_ARCH_PPC64
# define CACHE_LINE 128
#else
# define CACHE_LINE 64
#endif
#if ! KMP_32_BIT_ARCH
# define BUILD_I8 1
#endif
#define KMP_NESTED_HOT_TEAMS 1
#define KMP_ADJUST_BLOCKTIME 1
#define BUILD_PARALLEL_ORDERED 1
#define KMP_ASM_INTRINS 1
#define USE_ITT_BUILD LIBOMP_USE_ITT_NOTIFY
#define INTEL_ITTNOTIFY_PREFIX __kmp_itt_
#if ! KMP_MIC
# define USE_LOAD_BALANCE 1
#endif
#if ! (KMP_OS_WINDOWS || KMP_OS_DARWIN)
# define KMP_TDATA_GTID 1
#endif
#if STUBS_LIBRARY
# define KMP_STUB 1
#endif
#if DEBUG_BUILD || RELWITHDEBINFO_BUILD
# define KMP_DEBUG 1
#endif
#if KMP_OS_WINDOWS
# define KMP_WIN_CDECL
#else
# define BUILD_TV
# define KMP_GOMP_COMPAT
#endif
#endif // KMP_CONFIG_H

4164
runtime/src/kmp_csupport.cpp Normal file

File diff suppressed because it is too large Load Diff

132
runtime/src/kmp_debug.cpp Normal file

@@ -0,0 +1,132 @@
/*
* kmp_debug.cpp -- debug utilities for the Guide library
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp.h"
#include "kmp_debug.h" /* really necessary? */
#include "kmp_i18n.h"
#include "kmp_io.h"
#ifdef KMP_DEBUG
void __kmp_debug_printf_stdout(char const *format, ...) {
va_list ap;
va_start(ap, format);
__kmp_vprintf(kmp_out, format, ap);
va_end(ap);
}
#endif
void __kmp_debug_printf(char const *format, ...) {
va_list ap;
va_start(ap, format);
__kmp_vprintf(kmp_err, format, ap);
va_end(ap);
}
#ifdef KMP_USE_ASSERT
int __kmp_debug_assert(char const *msg, char const *file, int line) {
if (file == NULL) {
file = KMP_I18N_STR(UnknownFile);
} else {
// Remove directories from path, leave only file name. File name is enough,
// there is no need in bothering developers and customers with full paths.
char const *slash = strrchr(file, '/');
if (slash != NULL) {
file = slash + 1;
}
}
#ifdef KMP_DEBUG
__kmp_acquire_bootstrap_lock(&__kmp_stdio_lock);
__kmp_debug_printf("Assertion failure at %s(%d): %s.\n", file, line, msg);
__kmp_release_bootstrap_lock(&__kmp_stdio_lock);
#ifdef USE_ASSERT_BREAK
#if KMP_OS_WINDOWS
DebugBreak();
#endif
#endif // USE_ASSERT_BREAK
#ifdef USE_ASSERT_STALL
/* __kmp_infinite_loop(); */
for (;;)
;
#endif // USE_ASSERT_STALL
#ifdef USE_ASSERT_SEG
{
int volatile *ZERO = (int *)0;
++(*ZERO);
}
#endif // USE_ASSERT_SEG
#endif
__kmp_fatal(KMP_MSG(AssertionFailure, file, line), KMP_HNT(SubmitBugReport),
__kmp_msg_null);
return 0;
} // __kmp_debug_assert
#endif // KMP_USE_ASSERT
/* Dump debugging buffer to stderr */
void __kmp_dump_debug_buffer(void) {
if (__kmp_debug_buffer != NULL) {
int i;
int dc = __kmp_debug_count;
char *db = &__kmp_debug_buffer[(dc % __kmp_debug_buf_lines) *
__kmp_debug_buf_chars];
char *db_end =
&__kmp_debug_buffer[__kmp_debug_buf_lines * __kmp_debug_buf_chars];
char *db2;
__kmp_acquire_bootstrap_lock(&__kmp_stdio_lock);
__kmp_printf_no_lock("\nStart dump of debugging buffer (entry=%d):\n",
dc % __kmp_debug_buf_lines);
for (i = 0; i < __kmp_debug_buf_lines; i++) {
if (*db != '\0') {
/* Fix up where no carriage return before string termination char */
for (db2 = db + 1; db2 < db + __kmp_debug_buf_chars - 1; db2++) {
if (*db2 == '\0') {
if (*(db2 - 1) != '\n') {
*db2 = '\n';
*(db2 + 1) = '\0';
}
break;
}
}
/* Handle case at end by shortening the printed message by one char if
* necessary */
if (db2 == db + __kmp_debug_buf_chars - 1 && *db2 == '\0' &&
*(db2 - 1) != '\n') {
*(db2 - 1) = '\n';
}
__kmp_printf_no_lock("%4d: %.*s", i, __kmp_debug_buf_chars, db);
*db = '\0'; /* only let it print once! */
}
db += __kmp_debug_buf_chars;
if (db >= db_end)
db = __kmp_debug_buffer;
}
__kmp_printf_no_lock("End dump of debugging buffer (entry=%d).\n\n",
(dc + i - 1) % __kmp_debug_buf_lines);
__kmp_release_bootstrap_lock(&__kmp_stdio_lock);
}
}

180
runtime/src/kmp_debug.h Normal file

@@ -0,0 +1,180 @@
/*
* kmp_debug.h -- debug / assertion code for Assure library
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_DEBUG_H
#define KMP_DEBUG_H
#include <stdarg.h>
#ifdef __cplusplus
extern "C" {
#endif // __cplusplus
// -----------------------------------------------------------------------------
// Build-time assertion.
// New C++11 style build assert
#define KMP_BUILD_ASSERT(expr) static_assert(expr, "Build condition error")
// -----------------------------------------------------------------------------
// Run-time assertions.
extern void __kmp_dump_debug_buffer(void);
#ifdef KMP_USE_ASSERT
extern int __kmp_debug_assert(char const *expr, char const *file, int line);
#ifdef KMP_DEBUG
#define KMP_ASSERT(cond) \
if (!(cond)) { \
__kmp_debug_assert(#cond, __FILE__, __LINE__); \
}
#define KMP_ASSERT2(cond, msg) \
if (!(cond)) { \
__kmp_debug_assert((msg), __FILE__, __LINE__); \
}
#define KMP_DEBUG_ASSERT(cond) KMP_ASSERT(cond)
#define KMP_DEBUG_ASSERT2(cond, msg) KMP_ASSERT2(cond, msg)
#define KMP_DEBUG_USE_VAR(x) /* Nothing (it is used!) */
#else
// Do not expose condition in release build. Use "assertion failure".
#define KMP_ASSERT(cond) \
if (!(cond)) { \
__kmp_debug_assert("assertion failure", __FILE__, __LINE__); \
}
#define KMP_ASSERT2(cond, msg) KMP_ASSERT(cond)
#define KMP_DEBUG_ASSERT(cond) /* Nothing */
#define KMP_DEBUG_ASSERT2(cond, msg) /* Nothing */
#define KMP_DEBUG_USE_VAR(x) ((void)(x))
#endif // KMP_DEBUG
#else
#define KMP_ASSERT(cond) /* Nothing */
#define KMP_ASSERT2(cond, msg) /* Nothing */
#define KMP_DEBUG_ASSERT(cond) /* Nothing */
#define KMP_DEBUG_ASSERT2(cond, msg) /* Nothing */
#define KMP_DEBUG_USE_VAR(x) ((void)(x))
#endif // KMP_USE_ASSERT
#ifdef KMP_DEBUG
extern void __kmp_debug_printf_stdout(char const *format, ...);
#endif
extern void __kmp_debug_printf(char const *format, ...);
#ifdef KMP_DEBUG
extern int kmp_a_debug;
extern int kmp_b_debug;
extern int kmp_c_debug;
extern int kmp_d_debug;
extern int kmp_e_debug;
extern int kmp_f_debug;
extern int kmp_diag;
#define KA_TRACE(d, x) \
if (kmp_a_debug >= d) { \
__kmp_debug_printf x; \
}
#define KB_TRACE(d, x) \
if (kmp_b_debug >= d) { \
__kmp_debug_printf x; \
}
#define KC_TRACE(d, x) \
if (kmp_c_debug >= d) { \
__kmp_debug_printf x; \
}
#define KD_TRACE(d, x) \
if (kmp_d_debug >= d) { \
__kmp_debug_printf x; \
}
#define KE_TRACE(d, x) \
if (kmp_e_debug >= d) { \
__kmp_debug_printf x; \
}
#define KF_TRACE(d, x) \
if (kmp_f_debug >= d) { \
__kmp_debug_printf x; \
}
#define K_DIAG(d, x) \
{ \
if (kmp_diag == d) { \
__kmp_debug_printf_stdout x; \
} \
}
#define KA_DUMP(d, x) \
if (kmp_a_debug >= d) { \
int ks; \
__kmp_disable(&ks); \
(x); \
__kmp_enable(ks); \
}
#define KB_DUMP(d, x) \
if (kmp_b_debug >= d) { \
int ks; \
__kmp_disable(&ks); \
(x); \
__kmp_enable(ks); \
}
#define KC_DUMP(d, x) \
if (kmp_c_debug >= d) { \
int ks; \
__kmp_disable(&ks); \
(x); \
__kmp_enable(ks); \
}
#define KD_DUMP(d, x) \
if (kmp_d_debug >= d) { \
int ks; \
__kmp_disable(&ks); \
(x); \
__kmp_enable(ks); \
}
#define KE_DUMP(d, x) \
if (kmp_e_debug >= d) { \
int ks; \
__kmp_disable(&ks); \
(x); \
__kmp_enable(ks); \
}
#define KF_DUMP(d, x) \
if (kmp_f_debug >= d) { \
int ks; \
__kmp_disable(&ks); \
(x); \
__kmp_enable(ks); \
}
#else
#define KA_TRACE(d, x) /* nothing to do */
#define KB_TRACE(d, x) /* nothing to do */
#define KC_TRACE(d, x) /* nothing to do */
#define KD_TRACE(d, x) /* nothing to do */
#define KE_TRACE(d, x) /* nothing to do */
#define KF_TRACE(d, x) /* nothing to do */
#define K_DIAG(d, x) \
{} /* nothing to do */
#define KA_DUMP(d, x) /* nothing to do */
#define KB_DUMP(d, x) /* nothing to do */
#define KC_DUMP(d, x) /* nothing to do */
#define KD_DUMP(d, x) /* nothing to do */
#define KE_DUMP(d, x) /* nothing to do */
#define KF_DUMP(d, x) /* nothing to do */
#endif // KMP_DEBUG
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
#endif /* KMP_DEBUG_H */


@@ -0,0 +1,293 @@
#include "kmp_config.h"
#if USE_DEBUGGER
/*
* kmp_debugger.cpp -- debugger support.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp.h"
#include "kmp_lock.h"
#include "kmp_omp.h"
#include "kmp_str.h"
// NOTE: All variable names are known to the debugger, do not change!
#ifdef __cplusplus
extern "C" {
extern kmp_omp_struct_info_t __kmp_omp_debug_struct_info;
} // extern "C"
#endif // __cplusplus
int __kmp_debugging = FALSE; // Boolean whether currently debugging OpenMP RTL.
#define offset_and_size_of(structure, field) \
{ offsetof(structure, field), sizeof(((structure *)NULL)->field) }
#define offset_and_size_not_available \
{ -1, -1 }
#define addr_and_size_of(var) \
{ (kmp_uint64)(&var), sizeof(var) }
#define nthr_buffer_size 1024
static kmp_int32 kmp_omp_nthr_info_buffer[nthr_buffer_size] = {
nthr_buffer_size * sizeof(kmp_int32)};
/* TODO: Check punctuation for various platforms here */
static char func_microtask[] = "__kmp_invoke_microtask";
static char func_fork[] = "__kmpc_fork_call";
static char func_fork_teams[] = "__kmpc_fork_teams";
// Various info about runtime structures: addresses, field offsets, sizes, etc.
kmp_omp_struct_info_t __kmp_omp_debug_struct_info = {
/* Change this only if you make a fundamental data structure change here */
KMP_OMP_VERSION,
    /* Sanity check. Should only be checked if the versions are identical.
     * This is also used for backward compatibility to get the runtime
     * structure size if the runtime is older than the interface. */
sizeof(kmp_omp_struct_info_t),
/* OpenMP RTL version info. */
addr_and_size_of(__kmp_version_major),
addr_and_size_of(__kmp_version_minor),
addr_and_size_of(__kmp_version_build),
addr_and_size_of(__kmp_openmp_version),
{(kmp_uint64)(__kmp_copyright) + KMP_VERSION_MAGIC_LEN,
0}, // Skip magic prefix.
/* Various globals. */
addr_and_size_of(__kmp_threads),
addr_and_size_of(__kmp_root),
addr_and_size_of(__kmp_threads_capacity),
#if KMP_USE_MONITOR
addr_and_size_of(__kmp_monitor),
#endif
#if !KMP_USE_DYNAMIC_LOCK
addr_and_size_of(__kmp_user_lock_table),
#endif
addr_and_size_of(func_microtask),
addr_and_size_of(func_fork),
addr_and_size_of(func_fork_teams),
addr_and_size_of(__kmp_team_counter),
addr_and_size_of(__kmp_task_counter),
addr_and_size_of(kmp_omp_nthr_info_buffer),
sizeof(void *),
OMP_LOCK_T_SIZE < sizeof(void *),
bs_last_barrier,
INITIAL_TASK_DEQUE_SIZE,
// thread structure information
sizeof(kmp_base_info_t),
offset_and_size_of(kmp_base_info_t, th_info),
offset_and_size_of(kmp_base_info_t, th_team),
offset_and_size_of(kmp_base_info_t, th_root),
offset_and_size_of(kmp_base_info_t, th_serial_team),
offset_and_size_of(kmp_base_info_t, th_ident),
offset_and_size_of(kmp_base_info_t, th_spin_here),
offset_and_size_of(kmp_base_info_t, th_next_waiting),
offset_and_size_of(kmp_base_info_t, th_task_team),
offset_and_size_of(kmp_base_info_t, th_current_task),
offset_and_size_of(kmp_base_info_t, th_task_state),
offset_and_size_of(kmp_base_info_t, th_bar),
offset_and_size_of(kmp_bstate_t, b_worker_arrived),
#if OMP_40_ENABLED
// teams information
offset_and_size_of(kmp_base_info_t, th_teams_microtask),
offset_and_size_of(kmp_base_info_t, th_teams_level),
offset_and_size_of(kmp_teams_size_t, nteams),
offset_and_size_of(kmp_teams_size_t, nth),
#endif
// kmp_desc structure (for info field above)
sizeof(kmp_desc_base_t),
offset_and_size_of(kmp_desc_base_t, ds_tid),
offset_and_size_of(kmp_desc_base_t, ds_gtid),
// On Windows* OS, ds_thread contains a thread /handle/, which is not usable,
// while thread /id/ is in ds_thread_id.
#if KMP_OS_WINDOWS
offset_and_size_of(kmp_desc_base_t, ds_thread_id),
#else
offset_and_size_of(kmp_desc_base_t, ds_thread),
#endif
// team structure information
sizeof(kmp_base_team_t),
offset_and_size_of(kmp_base_team_t, t_master_tid),
offset_and_size_of(kmp_base_team_t, t_ident),
offset_and_size_of(kmp_base_team_t, t_parent),
offset_and_size_of(kmp_base_team_t, t_nproc),
offset_and_size_of(kmp_base_team_t, t_threads),
offset_and_size_of(kmp_base_team_t, t_serialized),
offset_and_size_of(kmp_base_team_t, t_id),
offset_and_size_of(kmp_base_team_t, t_pkfn),
offset_and_size_of(kmp_base_team_t, t_task_team),
offset_and_size_of(kmp_base_team_t, t_implicit_task_taskdata),
#if OMP_40_ENABLED
offset_and_size_of(kmp_base_team_t, t_cancel_request),
#endif
offset_and_size_of(kmp_base_team_t, t_bar),
offset_and_size_of(kmp_balign_team_t, b_master_arrived),
offset_and_size_of(kmp_balign_team_t, b_team_arrived),
// root structure information
sizeof(kmp_base_root_t),
offset_and_size_of(kmp_base_root_t, r_root_team),
offset_and_size_of(kmp_base_root_t, r_hot_team),
offset_and_size_of(kmp_base_root_t, r_uber_thread),
offset_and_size_not_available,
// ident structure information
sizeof(ident_t),
offset_and_size_of(ident_t, psource),
offset_and_size_of(ident_t, flags),
// lock structure information
sizeof(kmp_base_queuing_lock_t),
offset_and_size_of(kmp_base_queuing_lock_t, initialized),
offset_and_size_of(kmp_base_queuing_lock_t, location),
offset_and_size_of(kmp_base_queuing_lock_t, tail_id),
offset_and_size_of(kmp_base_queuing_lock_t, head_id),
offset_and_size_of(kmp_base_queuing_lock_t, next_ticket),
offset_and_size_of(kmp_base_queuing_lock_t, now_serving),
offset_and_size_of(kmp_base_queuing_lock_t, owner_id),
offset_and_size_of(kmp_base_queuing_lock_t, depth_locked),
offset_and_size_of(kmp_base_queuing_lock_t, flags),
#if !KMP_USE_DYNAMIC_LOCK
/* Lock table. */
sizeof(kmp_lock_table_t),
offset_and_size_of(kmp_lock_table_t, used),
offset_and_size_of(kmp_lock_table_t, allocated),
offset_and_size_of(kmp_lock_table_t, table),
#endif
// Task team structure information.
sizeof(kmp_base_task_team_t),
offset_and_size_of(kmp_base_task_team_t, tt_threads_data),
offset_and_size_of(kmp_base_task_team_t, tt_found_tasks),
offset_and_size_of(kmp_base_task_team_t, tt_nproc),
offset_and_size_of(kmp_base_task_team_t, tt_unfinished_threads),
offset_and_size_of(kmp_base_task_team_t, tt_active),
// task_data_t.
sizeof(kmp_taskdata_t),
offset_and_size_of(kmp_taskdata_t, td_task_id),
offset_and_size_of(kmp_taskdata_t, td_flags),
offset_and_size_of(kmp_taskdata_t, td_team),
offset_and_size_of(kmp_taskdata_t, td_parent),
offset_and_size_of(kmp_taskdata_t, td_level),
offset_and_size_of(kmp_taskdata_t, td_ident),
offset_and_size_of(kmp_taskdata_t, td_allocated_child_tasks),
offset_and_size_of(kmp_taskdata_t, td_incomplete_child_tasks),
offset_and_size_of(kmp_taskdata_t, td_taskwait_ident),
offset_and_size_of(kmp_taskdata_t, td_taskwait_counter),
offset_and_size_of(kmp_taskdata_t, td_taskwait_thread),
#if OMP_40_ENABLED
offset_and_size_of(kmp_taskdata_t, td_taskgroup),
offset_and_size_of(kmp_taskgroup_t, count),
offset_and_size_of(kmp_taskgroup_t, cancel_request),
offset_and_size_of(kmp_taskdata_t, td_depnode),
offset_and_size_of(kmp_depnode_list_t, node),
offset_and_size_of(kmp_depnode_list_t, next),
offset_and_size_of(kmp_base_depnode_t, successors),
offset_and_size_of(kmp_base_depnode_t, task),
offset_and_size_of(kmp_base_depnode_t, npredecessors),
offset_and_size_of(kmp_base_depnode_t, nrefs),
#endif
offset_and_size_of(kmp_task_t, routine),
// thread_data_t.
sizeof(kmp_thread_data_t),
offset_and_size_of(kmp_base_thread_data_t, td_deque),
offset_and_size_of(kmp_base_thread_data_t, td_deque_size),
offset_and_size_of(kmp_base_thread_data_t, td_deque_head),
offset_and_size_of(kmp_base_thread_data_t, td_deque_tail),
offset_and_size_of(kmp_base_thread_data_t, td_deque_ntasks),
offset_and_size_of(kmp_base_thread_data_t, td_deque_last_stolen),
// The last field.
KMP_OMP_VERSION,
}; // __kmp_omp_debug_struct_info
#undef offset_and_size_of
#undef addr_and_size_of
/* Intel compiler on IA-32 architecture issues a warning "conversion
from "unsigned long long" to "char *" may lose significant bits"
when 64-bit value is assigned to 32-bit pointer. Use this function
to suppress the warning. */
static inline void *__kmp_convert_to_ptr(kmp_uint64 addr) {
#if KMP_COMPILER_ICC
#pragma warning(push)
#pragma warning(disable : 810) // conversion from "unsigned long long" to "char
// *" may lose significant bits
#pragma warning(disable : 1195) // conversion from integer to smaller pointer
#endif // KMP_COMPILER_ICC
return (void *)addr;
#if KMP_COMPILER_ICC
#pragma warning(pop)
#endif // KMP_COMPILER_ICC
} // __kmp_convert_to_ptr
static int kmp_location_match(kmp_str_loc_t *loc, kmp_omp_nthr_item_t *item) {
int file_match = 0;
int func_match = 0;
int line_match = 0;
char *file = (char *)__kmp_convert_to_ptr(item->file);
char *func = (char *)__kmp_convert_to_ptr(item->func);
file_match = __kmp_str_fname_match(&loc->fname, file);
func_match =
item->func == 0 // If item->func is NULL, it allows any func name.
|| strcmp(func, "*") == 0 ||
(loc->func != NULL && strcmp(loc->func, func) == 0);
line_match =
item->begin <= loc->line &&
(item->end <= 0 ||
loc->line <= item->end); // if item->end <= 0, it means "end of file".
return (file_match && func_match && line_match);
} // kmp_location_match
int __kmp_omp_num_threads(ident_t const *ident) {
int num_threads = 0;
kmp_omp_nthr_info_t *info = (kmp_omp_nthr_info_t *)__kmp_convert_to_ptr(
__kmp_omp_debug_struct_info.nthr_info.addr);
if (info->num > 0 && info->array != 0) {
kmp_omp_nthr_item_t *items =
(kmp_omp_nthr_item_t *)__kmp_convert_to_ptr(info->array);
kmp_str_loc_t loc = __kmp_str_loc_init(ident->psource, 1);
int i;
for (i = 0; i < info->num; ++i) {
if (kmp_location_match(&loc, &items[i])) {
num_threads = items[i].num_threads;
}
}
__kmp_str_loc_free(&loc);
}
  return num_threads;
} // __kmp_omp_num_threads
#endif /* USE_DEBUGGER */


@@ -0,0 +1,49 @@
#if USE_DEBUGGER
/*
* kmp_debugger.h -- debugger support.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_DEBUGGER_H
#define KMP_DEBUGGER_H
#ifdef __cplusplus
extern "C" {
#endif // __cplusplus
/* This external variable can be set by any debugger to flag to the runtime
that we are currently executing inside a debugger. This will allow the
debugger to override the number of threads spawned in a parallel region by
using __kmp_omp_num_threads() (below).
* When __kmp_debugging is TRUE, each team and each task gets a unique integer
identifier that can be used by debugger to conveniently identify teams and
tasks.
* The debugger has access to __kmp_omp_debug_struct_info which contains
information about the OpenMP library's important internal structures. This
access will allow the debugger to read detailed information from the typical
OpenMP constructs (teams, threads, tasking, etc. ) during a debugging
session and offer detailed and useful information which the user can probe
about the OpenMP portion of their code. */
extern int __kmp_debugging; /* Boolean whether currently debugging OpenMP RTL */
// Return number of threads specified by the debugger for given parallel region.
/* The ident field, which represents a source file location, is used to check if
the debugger has changed the number of threads for the parallel region at
source file location ident. This way, specific parallel regions' number of
threads can be changed at the debugger's request. */
int __kmp_omp_num_threads(ident_t const *ident);
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
#endif // KMP_DEBUGGER_H
#endif // USE_DEBUGGER

2595
runtime/src/kmp_dispatch.cpp Normal file

File diff suppressed because it is too large.

514
runtime/src/kmp_dispatch.h Normal file

@@ -0,0 +1,514 @@
/*
* kmp_dispatch.h: dynamic scheduling - iteration initialization and dispatch.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_DISPATCH_H
#define KMP_DISPATCH_H
/* ------------------------------------------------------------------------ */
/* ------------------------------------------------------------------------ */
#include "kmp.h"
#include "kmp_error.h"
#include "kmp_i18n.h"
#include "kmp_itt.h"
#include "kmp_stats.h"
#include "kmp_str.h"
#if KMP_OS_WINDOWS && KMP_ARCH_X86
#include <float.h>
#endif
#if OMPT_SUPPORT
#include "ompt-internal.h"
#include "ompt-specific.h"
#endif
/* ------------------------------------------------------------------------ */
/* ------------------------------------------------------------------------ */
#if KMP_USE_HIER_SCHED
// Forward declarations of some hierarchical scheduling data structures
template <typename T> struct kmp_hier_t;
template <typename T> struct kmp_hier_top_unit_t;
#endif // KMP_USE_HIER_SCHED
template <typename T> struct dispatch_shared_info_template;
template <typename T> struct dispatch_private_info_template;
template <typename T>
extern void __kmp_dispatch_init_algorithm(ident_t *loc, int gtid,
dispatch_private_info_template<T> *pr,
enum sched_type schedule, T lb, T ub,
typename traits_t<T>::signed_t st,
#if USE_ITT_BUILD
kmp_uint64 *cur_chunk,
#endif
typename traits_t<T>::signed_t chunk,
T nproc, T unit_id);
template <typename T>
extern int __kmp_dispatch_next_algorithm(
int gtid, dispatch_private_info_template<T> *pr,
dispatch_shared_info_template<T> volatile *sh, kmp_int32 *p_last, T *p_lb,
T *p_ub, typename traits_t<T>::signed_t *p_st, T nproc, T unit_id);
void __kmp_dispatch_dxo_error(int *gtid_ref, int *cid_ref, ident_t *loc_ref);
void __kmp_dispatch_deo_error(int *gtid_ref, int *cid_ref, ident_t *loc_ref);
#if KMP_STATIC_STEAL_ENABLED
// replaces dispatch_private_info{32,64} structures and
// dispatch_private_info{32,64}_t types
template <typename T> struct dispatch_private_infoXX_template {
typedef typename traits_t<T>::unsigned_t UT;
typedef typename traits_t<T>::signed_t ST;
UT count; // unsigned
T ub;
/* Adding KMP_ALIGN_CACHE here doesn't help / can hurt performance */
T lb;
ST st; // signed
UT tc; // unsigned
T static_steal_counter; // for static_steal only; maybe better to put after ub
/* parm[1-4] are used in different ways by different scheduling algorithms */
// KMP_ALIGN( 32 ) ensures ( if the KMP_ALIGN macro is turned on )
// a) parm3 is properly aligned and
// b) all parm1-4 are in the same cache line.
  // Because parm1-4 are used together, performance seems to be better
// if they are in the same line (not measured though).
struct KMP_ALIGN(32) { // compiler does not accept sizeof(T)*4
T parm1;
T parm2;
T parm3;
T parm4;
};
UT ordered_lower; // unsigned
UT ordered_upper; // unsigned
#if KMP_OS_WINDOWS
T last_upper;
#endif /* KMP_OS_WINDOWS */
};
#else /* KMP_STATIC_STEAL_ENABLED */
// replaces dispatch_private_info{32,64} structures and
// dispatch_private_info{32,64}_t types
template <typename T> struct dispatch_private_infoXX_template {
typedef typename traits_t<T>::unsigned_t UT;
typedef typename traits_t<T>::signed_t ST;
T lb;
T ub;
ST st; // signed
UT tc; // unsigned
T parm1;
T parm2;
T parm3;
T parm4;
UT count; // unsigned
UT ordered_lower; // unsigned
UT ordered_upper; // unsigned
#if KMP_OS_WINDOWS
T last_upper;
#endif /* KMP_OS_WINDOWS */
};
#endif /* KMP_STATIC_STEAL_ENABLED */
template <typename T> struct KMP_ALIGN_CACHE dispatch_private_info_template {
// duplicate alignment here, otherwise size of structure is not correct in our
// compiler
union KMP_ALIGN_CACHE private_info_tmpl {
dispatch_private_infoXX_template<T> p;
dispatch_private_info64_t p64;
} u;
enum sched_type schedule; /* scheduling algorithm */
kmp_sched_flags_t flags; /* flags (e.g., ordered, nomerge, etc.) */
kmp_uint32 ordered_bumped;
  // to retain the structure size after making ordered_iteration scalar
kmp_int32 ordered_dummy[KMP_MAX_ORDERED - 3];
dispatch_private_info *next; /* stack of buffers for nest of serial regions */
kmp_uint32 type_size;
#if KMP_USE_HIER_SCHED
kmp_int32 hier_id;
kmp_hier_top_unit_t<T> *hier_parent;
// member functions
kmp_int32 get_hier_id() const { return hier_id; }
kmp_hier_top_unit_t<T> *get_parent() { return hier_parent; }
#endif
enum cons_type pushed_ws;
};
// replaces dispatch_shared_info{32,64} structures and
// dispatch_shared_info{32,64}_t types
template <typename T> struct dispatch_shared_infoXX_template {
typedef typename traits_t<T>::unsigned_t UT;
/* chunk index under dynamic, number of idle threads under static-steal;
iteration index otherwise */
volatile UT iteration;
volatile UT num_done;
volatile UT ordered_iteration;
// to retain the structure size making ordered_iteration scalar
UT ordered_dummy[KMP_MAX_ORDERED - 3];
};
// replaces dispatch_shared_info structure and dispatch_shared_info_t type
template <typename T> struct dispatch_shared_info_template {
typedef typename traits_t<T>::unsigned_t UT;
// we need union here to keep the structure size
union shared_info_tmpl {
dispatch_shared_infoXX_template<UT> s;
dispatch_shared_info64_t s64;
} u;
volatile kmp_uint32 buffer_index;
#if OMP_45_ENABLED
volatile kmp_int32 doacross_buf_idx; // teamwise index
kmp_uint32 *doacross_flags; // array of iteration flags (0/1)
kmp_int32 doacross_num_done; // count finished threads
#endif
#if KMP_USE_HIER_SCHED
kmp_hier_t<T> *hier;
#endif
#if KMP_USE_HWLOC
// When linking with libhwloc, the ORDERED EPCC test slows down on big
// machines (> 48 cores). Performance analysis showed that a cache thrash
// was occurring and this padding helps alleviate the problem.
char padding[64];
#endif
};
/* ------------------------------------------------------------------------ */
/* ------------------------------------------------------------------------ */
#undef USE_TEST_LOCKS
// test_then_add template (general template should NOT be used)
template <typename T> static __forceinline T test_then_add(volatile T *p, T d);
template <>
__forceinline kmp_int32 test_then_add<kmp_int32>(volatile kmp_int32 *p,
kmp_int32 d) {
kmp_int32 r;
r = KMP_TEST_THEN_ADD32(p, d);
return r;
}
template <>
__forceinline kmp_int64 test_then_add<kmp_int64>(volatile kmp_int64 *p,
kmp_int64 d) {
kmp_int64 r;
r = KMP_TEST_THEN_ADD64(p, d);
return r;
}
// test_then_inc_acq template (general template should NOT be used)
template <typename T> static __forceinline T test_then_inc_acq(volatile T *p);
template <>
__forceinline kmp_int32 test_then_inc_acq<kmp_int32>(volatile kmp_int32 *p) {
kmp_int32 r;
r = KMP_TEST_THEN_INC_ACQ32(p);
return r;
}
template <>
__forceinline kmp_int64 test_then_inc_acq<kmp_int64>(volatile kmp_int64 *p) {
kmp_int64 r;
r = KMP_TEST_THEN_INC_ACQ64(p);
return r;
}
// test_then_inc template (general template should NOT be used)
template <typename T> static __forceinline T test_then_inc(volatile T *p);
template <>
__forceinline kmp_int32 test_then_inc<kmp_int32>(volatile kmp_int32 *p) {
kmp_int32 r;
r = KMP_TEST_THEN_INC32(p);
return r;
}
template <>
__forceinline kmp_int64 test_then_inc<kmp_int64>(volatile kmp_int64 *p) {
kmp_int64 r;
r = KMP_TEST_THEN_INC64(p);
return r;
}
// compare_and_swap template (general template should NOT be used)
template <typename T>
static __forceinline kmp_int32 compare_and_swap(volatile T *p, T c, T s);
template <>
__forceinline kmp_int32 compare_and_swap<kmp_int32>(volatile kmp_int32 *p,
kmp_int32 c, kmp_int32 s) {
return KMP_COMPARE_AND_STORE_REL32(p, c, s);
}
template <>
__forceinline kmp_int32 compare_and_swap<kmp_int64>(volatile kmp_int64 *p,
kmp_int64 c, kmp_int64 s) {
return KMP_COMPARE_AND_STORE_REL64(p, c, s);
}
template <typename T> kmp_uint32 __kmp_ge(T value, T checker) {
return value >= checker;
}
template <typename T> kmp_uint32 __kmp_eq(T value, T checker) {
return value == checker;
}
/*
Spin wait loop that first does pause, then yield.
Waits until function returns non-zero when called with *spinner and check.
Does NOT put threads to sleep.
Arguments:
UT is unsigned 4- or 8-byte type
spinner - memory location to check value
checker - value which spinner is >, <, ==, etc.
pred - predicate function to perform binary comparison of some sort
#if USE_ITT_BUILD
obj -- is higher-level synchronization object to report to ittnotify. It
is used to report locks consistently. For example, if lock is acquired
immediately, its address is reported to ittnotify via
KMP_FSYNC_ACQUIRED(). However, it lock cannot be acquired immediately
and lock routine calls to KMP_WAIT_YIELD(), the later should report the
same address, not an address of low-level spinner.
#endif // USE_ITT_BUILD
TODO: make inline function (move to header file for icl)
*/
template <typename UT>
static UT __kmp_wait_yield(volatile UT *spinner, UT checker,
kmp_uint32 (*pred)(UT, UT)
USE_ITT_BUILD_ARG(void *obj)) {
// note: we may not belong to a team at this point
volatile UT *spin = spinner;
UT check = checker;
kmp_uint32 spins;
kmp_uint32 (*f)(UT, UT) = pred;
UT r;
KMP_FSYNC_SPIN_INIT(obj, CCAST(UT *, spin));
KMP_INIT_YIELD(spins);
// main wait spin loop
while (!f(r = *spin, check)) {
KMP_FSYNC_SPIN_PREPARE(obj);
/* GEH - remove this since it was accidentally introduced when kmp_wait was
split.
It causes problems with infinite recursion because of exit lock */
/* if ( TCR_4(__kmp_global.g.g_done) && __kmp_global.g.g_abort)
__kmp_abort_thread(); */
    // If we are oversubscribed, or have waited a bit
    // (and KMP_LIBRARY=throughput), then yield.
    // The pause is in the following code.
KMP_YIELD(TCR_4(__kmp_nth) > __kmp_avail_proc);
KMP_YIELD_SPIN(spins);
}
KMP_FSYNC_SPIN_ACQUIRED(obj);
return r;
}
/* ------------------------------------------------------------------------ */
/* ------------------------------------------------------------------------ */
template <typename UT>
void __kmp_dispatch_deo(int *gtid_ref, int *cid_ref, ident_t *loc_ref) {
dispatch_private_info_template<UT> *pr;
int gtid = *gtid_ref;
// int cid = *cid_ref;
kmp_info_t *th = __kmp_threads[gtid];
KMP_DEBUG_ASSERT(th->th.th_dispatch);
KD_TRACE(100, ("__kmp_dispatch_deo: T#%d called\n", gtid));
if (__kmp_env_consistency_check) {
pr = reinterpret_cast<dispatch_private_info_template<UT> *>(
th->th.th_dispatch->th_dispatch_pr_current);
if (pr->pushed_ws != ct_none) {
#if KMP_USE_DYNAMIC_LOCK
__kmp_push_sync(gtid, ct_ordered_in_pdo, loc_ref, NULL, 0);
#else
__kmp_push_sync(gtid, ct_ordered_in_pdo, loc_ref, NULL);
#endif
}
}
if (!th->th.th_team->t.t_serialized) {
dispatch_shared_info_template<UT> *sh =
reinterpret_cast<dispatch_shared_info_template<UT> *>(
th->th.th_dispatch->th_dispatch_sh_current);
UT lower;
if (!__kmp_env_consistency_check) {
pr = reinterpret_cast<dispatch_private_info_template<UT> *>(
th->th.th_dispatch->th_dispatch_pr_current);
}
lower = pr->u.p.ordered_lower;
#if !defined(KMP_GOMP_COMPAT)
if (__kmp_env_consistency_check) {
if (pr->ordered_bumped) {
struct cons_header *p = __kmp_threads[gtid]->th.th_cons;
__kmp_error_construct2(kmp_i18n_msg_CnsMultipleNesting,
ct_ordered_in_pdo, loc_ref,
&p->stack_data[p->w_top]);
}
}
#endif /* !defined(KMP_GOMP_COMPAT) */
KMP_MB();
#ifdef KMP_DEBUG
{
char *buff;
// create format specifiers before the debug output
buff = __kmp_str_format("__kmp_dispatch_deo: T#%%d before wait: "
"ordered_iter:%%%s lower:%%%s\n",
traits_t<UT>::spec, traits_t<UT>::spec);
KD_TRACE(1000, (buff, gtid, sh->u.s.ordered_iteration, lower));
__kmp_str_free(&buff);
}
#endif
__kmp_wait_yield<UT>(&sh->u.s.ordered_iteration, lower,
__kmp_ge<UT> USE_ITT_BUILD_ARG(NULL));
KMP_MB(); /* is this necessary? */
#ifdef KMP_DEBUG
{
char *buff;
// create format specifiers before the debug output
buff = __kmp_str_format("__kmp_dispatch_deo: T#%%d after wait: "
"ordered_iter:%%%s lower:%%%s\n",
traits_t<UT>::spec, traits_t<UT>::spec);
KD_TRACE(1000, (buff, gtid, sh->u.s.ordered_iteration, lower));
__kmp_str_free(&buff);
}
#endif
}
KD_TRACE(100, ("__kmp_dispatch_deo: T#%d returned\n", gtid));
}
template <typename UT>
void __kmp_dispatch_dxo(int *gtid_ref, int *cid_ref, ident_t *loc_ref) {
typedef typename traits_t<UT>::signed_t ST;
dispatch_private_info_template<UT> *pr;
int gtid = *gtid_ref;
// int cid = *cid_ref;
kmp_info_t *th = __kmp_threads[gtid];
KMP_DEBUG_ASSERT(th->th.th_dispatch);
KD_TRACE(100, ("__kmp_dispatch_dxo: T#%d called\n", gtid));
if (__kmp_env_consistency_check) {
pr = reinterpret_cast<dispatch_private_info_template<UT> *>(
th->th.th_dispatch->th_dispatch_pr_current);
if (pr->pushed_ws != ct_none) {
__kmp_pop_sync(gtid, ct_ordered_in_pdo, loc_ref);
}
}
if (!th->th.th_team->t.t_serialized) {
dispatch_shared_info_template<UT> *sh =
reinterpret_cast<dispatch_shared_info_template<UT> *>(
th->th.th_dispatch->th_dispatch_sh_current);
if (!__kmp_env_consistency_check) {
pr = reinterpret_cast<dispatch_private_info_template<UT> *>(
th->th.th_dispatch->th_dispatch_pr_current);
}
KMP_FSYNC_RELEASING(CCAST(UT *, &sh->u.s.ordered_iteration));
#if !defined(KMP_GOMP_COMPAT)
if (__kmp_env_consistency_check) {
if (pr->ordered_bumped != 0) {
struct cons_header *p = __kmp_threads[gtid]->th.th_cons;
/* How to test it? - OM */
__kmp_error_construct2(kmp_i18n_msg_CnsMultipleNesting,
ct_ordered_in_pdo, loc_ref,
&p->stack_data[p->w_top]);
}
}
#endif /* !defined(KMP_GOMP_COMPAT) */
KMP_MB(); /* Flush all pending memory write invalidates. */
pr->ordered_bumped += 1;
KD_TRACE(1000,
("__kmp_dispatch_dxo: T#%d bumping ordered ordered_bumped=%d\n",
gtid, pr->ordered_bumped));
KMP_MB(); /* Flush all pending memory write invalidates. */
/* TODO use general release procedure? */
test_then_inc<ST>((volatile ST *)&sh->u.s.ordered_iteration);
KMP_MB(); /* Flush all pending memory write invalidates. */
}
KD_TRACE(100, ("__kmp_dispatch_dxo: T#%d returned\n", gtid));
}
/* Computes and returns x to the power of y, where y must be a non-negative
   integer */
template <typename UT>
static __forceinline long double __kmp_pow(long double x, UT y) {
long double s = 1.0L;
KMP_DEBUG_ASSERT(x > 0.0 && x < 1.0);
// KMP_DEBUG_ASSERT(y >= 0); // y is unsigned
while (y) {
if (y & 1)
s *= x;
x *= x;
y >>= 1;
}
return s;
}
/* Computes and returns the number of unassigned iterations after idx chunks
have been assigned
(the total number of unassigned iterations in chunks with index greater than
or equal to idx).
__forceinline seems to be broken so that if we __forceinline this function,
the behavior is wrong
(one of the unit tests, sch_guided_analytical_basic.cpp, fails)
*/
template <typename T>
static __inline typename traits_t<T>::unsigned_t
__kmp_dispatch_guided_remaining(T tc, typename traits_t<T>::floating_t base,
typename traits_t<T>::unsigned_t idx) {
/* Note: On Windows* OS on IA-32 architecture and Intel(R) 64, at
least for ICL 8.1, long double arithmetic may not really have
long double precision, even with /Qlong_double. Currently, we
workaround that in the caller code, by manipulating the FPCW for
Windows* OS on IA-32 architecture. The lack of precision is not
expected to be a correctness issue, though.
*/
typedef typename traits_t<T>::unsigned_t UT;
long double x = tc * __kmp_pow<UT>(base, idx);
UT r = (UT)x;
if (x == r)
return r;
return r + 1;
}
// Parameters of the guided-iterative algorithm:
// p2 = n * nproc * ( chunk + 1 ) // point of switching to dynamic
// p3 = 1 / ( n * nproc ) // remaining iterations multiplier
// by default n = 2. For example with n = 3 the chunks distribution will be more
// flat.
// With n = 1 first chunk is the same as for static schedule, e.g. trip / nproc.
static const int guided_int_param = 2;
static const double guided_flt_param = 0.5; // = 1.0 / guided_int_param;
#endif // KMP_DISPATCH_H

File diff suppressed because it is too large.


@@ -0,0 +1,501 @@
/*
* kmp_environment.cpp -- Handle environment variables OS-independently.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
/* We use GetEnvironmentVariable for Windows* OS instead of getenv because the
act of loading a DLL on Windows* OS makes any user-set environment variables
(i.e. with putenv()) unavailable. getenv() apparently gets a clean copy of
the env variables as they existed at the start of the run. JH 12/23/2002
On Windows* OS, there are two environments (at least, see below):
   1. Environment maintained by the Windows* OS. Accessible through
      GetEnvironmentVariable(), SetEnvironmentVariable(), and
      GetEnvironmentStrings().
   2. Environment maintained by the C RTL. Accessible through getenv() and
      putenv().
   The putenv() function updates both the C and the Windows* OS environments.
   The getenv() function searches for variables in the C RTL environment only.
   The Windows* OS API functions work *only* with the Windows* OS environment.
   The Windows* OS environment is maintained by the OS, so there is always
   exactly one per process, and changes to it are process-visible.
   The C environment is maintained by the C RTL. Multiple copies of the C RTL
   may be present in the process, and each maintains its own environment. :-(
   Thus, the proper way to work with the environment on Windows* OS is:
   1. Set variables with putenv() -- both the C and Windows* OS environments
      are updated. The Windows* OS environment may be considered the primary
      target, while updating the C RTL environment is a free bonus.
   2. Get variables with GetEnvironmentVariable() -- getenv() does not search
      the Windows* OS environment and cannot see variables set with
      SetEnvironmentVariable().
   2007-04-05 -- lev
*/
#include "kmp_environment.h"
#include "kmp.h"
#include "kmp_i18n.h"
#include "kmp_os.h" // KMP_OS_*.
#include "kmp_str.h" // __kmp_str_*().
#if KMP_OS_UNIX
#include <stdlib.h> // getenv, setenv, unsetenv.
#include <string.h> // strlen, strcpy.
#if KMP_OS_DARWIN
#include <crt_externs.h>
#define environ (*_NSGetEnviron())
#else
extern char **environ;
#endif
#elif KMP_OS_WINDOWS
#include <windows.h> // GetEnvironmentVariable, SetEnvironmentVariable,
// GetLastError.
#else
#error Unknown or unsupported OS.
#endif
// TODO: Eliminate direct memory allocations, use string operations instead.
static inline void *allocate(size_t size) {
void *ptr = KMP_INTERNAL_MALLOC(size);
if (ptr == NULL) {
KMP_FATAL(MemoryAllocFailed);
}
return ptr;
} // allocate
char *__kmp_env_get(char const *name) {
char *result = NULL;
#if KMP_OS_UNIX
char const *value = getenv(name);
if (value != NULL) {
size_t len = KMP_STRLEN(value) + 1;
result = (char *)KMP_INTERNAL_MALLOC(len);
if (result == NULL) {
KMP_FATAL(MemoryAllocFailed);
}
KMP_STRNCPY_S(result, len, value, len);
}
#elif KMP_OS_WINDOWS
/* We use GetEnvironmentVariable for Windows* OS instead of getenv because the
act of loading a DLL on Windows* OS makes any user-set environment
variables (i.e. with putenv()) unavailable. getenv() apparently gets a
clean copy of the env variables as they existed at the start of the run.
JH 12/23/2002 */
DWORD rc;
rc = GetEnvironmentVariable(name, NULL, 0);
if (!rc) {
DWORD error = GetLastError();
if (error != ERROR_ENVVAR_NOT_FOUND) {
__kmp_fatal(KMP_MSG(CantGetEnvVar, name), KMP_ERR(error), __kmp_msg_null);
}
// Variable is not found, it's ok, just continue.
} else {
DWORD len = rc;
result = (char *)KMP_INTERNAL_MALLOC(len);
if (result == NULL) {
KMP_FATAL(MemoryAllocFailed);
}
rc = GetEnvironmentVariable(name, result, len);
if (!rc) {
// GetEnvironmentVariable() may return 0 if variable is empty.
// In such a case GetLastError() returns ERROR_SUCCESS.
DWORD error = GetLastError();
if (error != ERROR_SUCCESS) {
// Unexpected error. The variable should be in the environment,
// and buffer should be large enough.
__kmp_fatal(KMP_MSG(CantGetEnvVar, name), KMP_ERR(error),
__kmp_msg_null);
KMP_INTERNAL_FREE((void *)result);
result = NULL;
}
}
}
#else
#error Unknown or unsupported OS.
#endif
return result;
} // func __kmp_env_get
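The Unix branch above returns a heap copy of the getenv() result rather than the raw pointer, since that pointer can be invalidated by later changes to the environment. A minimal standalone sketch of the same copy-out pattern (plain malloc stands in for the KMP_INTERNAL_* wrappers here):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Mirror of the Unix branch of __kmp_env_get: return a heap-allocated
// copy of the variable's value (caller frees it), or NULL if unset.
// Plain malloc is used here in place of KMP_INTERNAL_MALLOC.
static char *env_get_copy(const char *name) {
  const char *value = std::getenv(name);
  if (value == nullptr)
    return nullptr;
  std::size_t len = std::strlen(value) + 1;
  char *result = static_cast<char *>(std::malloc(len));
  if (result != nullptr)
    std::memcpy(result, value, len);
  return result;
}
```

The caller owns the returned buffer and must free it; a NULL return means the variable was not set (or allocation failed).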
// TODO: Find and replace all regular free() with __kmp_env_free().
void __kmp_env_free(char const **value) {
KMP_DEBUG_ASSERT(value != NULL);
KMP_INTERNAL_FREE(CCAST(char *, *value));
*value = NULL;
} // func __kmp_env_free
int __kmp_env_exists(char const *name) {
#if KMP_OS_UNIX
char const *value = getenv(name);
return ((value == NULL) ? (0) : (1));
#elif KMP_OS_WINDOWS
DWORD rc;
rc = GetEnvironmentVariable(name, NULL, 0);
if (rc == 0) {
DWORD error = GetLastError();
if (error != ERROR_ENVVAR_NOT_FOUND) {
__kmp_fatal(KMP_MSG(CantGetEnvVar, name), KMP_ERR(error), __kmp_msg_null);
}
return 0;
}
return 1;
#else
#error Unknown or unsupported OS.
#endif
} // func __kmp_env_exists
void __kmp_env_set(char const *name, char const *value, int overwrite) {
#if KMP_OS_UNIX
int rc = setenv(name, value, overwrite);
if (rc != 0) {
// Dead code. I tried to put too many variables into Linux* OS
// environment on IA-32 architecture. When application consumes
// more than ~2.5 GB of memory, entire system feels bad. Sometimes
// application is killed (by OS?), sometimes system stops
// responding... But this error message never appears. --ln
__kmp_fatal(KMP_MSG(CantSetEnvVar, name), KMP_HNT(NotEnoughMemory),
__kmp_msg_null);
}
#elif KMP_OS_WINDOWS
BOOL rc;
if (!overwrite) {
rc = GetEnvironmentVariable(name, NULL, 0);
if (rc) {
// Variable exists, do not overwrite.
return;
}
DWORD error = GetLastError();
if (error != ERROR_ENVVAR_NOT_FOUND) {
__kmp_fatal(KMP_MSG(CantGetEnvVar, name), KMP_ERR(error), __kmp_msg_null);
}
}
rc = SetEnvironmentVariable(name, value);
if (!rc) {
DWORD error = GetLastError();
__kmp_fatal(KMP_MSG(CantSetEnvVar, name), KMP_ERR(error), __kmp_msg_null);
}
#else
#error Unknown or unsupported OS.
#endif
} // func __kmp_env_set
void __kmp_env_unset(char const *name) {
#if KMP_OS_UNIX
unsetenv(name);
#elif KMP_OS_WINDOWS
BOOL rc = SetEnvironmentVariable(name, NULL);
if (!rc) {
DWORD error = GetLastError();
__kmp_fatal(KMP_MSG(CantSetEnvVar, name), KMP_ERR(error), __kmp_msg_null);
}
#else
#error Unknown or unsupported OS.
#endif
} // func __kmp_env_unset
/* Intel OpenMP RTL string representation of environment: just a string of
characters, variables are separated with vertical bars, e. g.:
"KMP_WARNINGS=0|KMP_AFFINITY=compact|"
Empty variables are allowed and ignored:
"||KMP_WARNINGS=1||"
*/
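This separator format can be exercised with a small standalone parser; the sketch below uses std::string instead of the __kmp_str_* helpers, but follows the same rules (split on '|', ignore empty segments, break each entry at the first '='):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Parse an Intel-RTL style environment string such as
// "KMP_WARNINGS=0|KMP_AFFINITY=compact|" into (name, value) pairs.
// Empty segments between '|' delimiters are allowed and ignored.
static std::vector<std::pair<std::string, std::string>>
parse_env_string(const std::string &env) {
  std::vector<std::pair<std::string, std::string>> vars;
  std::string::size_type pos = 0;
  while (pos <= env.size()) {
    std::string::size_type bar = env.find('|', pos);
    if (bar == std::string::npos)
      bar = env.size();
    std::string var = env.substr(pos, bar - pos);
    if (!var.empty()) { // empty variables are allowed and ignored
      std::string::size_type eq = var.find('=');
      if (eq == std::string::npos)
        vars.emplace_back(var, "");
      else
        vars.emplace_back(var.substr(0, eq), var.substr(eq + 1));
    }
    pos = bar + 1;
  }
  return vars;
}
```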
static void
___kmp_env_blk_parse_string(kmp_env_blk_t *block, // M: Env block to fill.
char const *env // I: String to parse.
) {
char const chr_delimiter = '|';
char const str_delimiter[] = {chr_delimiter, 0};
char *bulk = NULL;
kmp_env_var_t *vars = NULL;
int count = 0; // Number of used elements in vars array.
int delimiters = 0; // Number of delimiters in input string.
// Copy original string, we will modify the copy.
bulk = __kmp_str_format("%s", env);
// Loop thru all the vars in environment block. Count delimiters (maximum
// number of variables is number of delimiters plus one).
{
char const *ptr = bulk;
for (;;) {
ptr = strchr(ptr, chr_delimiter);
if (ptr == NULL) {
break;
}
++delimiters;
ptr += 1;
}
}
// Allocate vars array.
vars = (kmp_env_var_t *)allocate((delimiters + 1) * sizeof(kmp_env_var_t));
// Loop thru all the variables.
{
char *var; // Pointer to variable (both name and value).
char *name; // Pointer to name of variable.
char *value; // Pointer to value.
char *buf; // Buffer for __kmp_str_token() function.
var = __kmp_str_token(bulk, str_delimiter, &buf); // Get the first var.
while (var != NULL) {
// Save found variable in vars array.
__kmp_str_split(var, '=', &name, &value);
KMP_DEBUG_ASSERT(count < delimiters + 1);
vars[count].name = name;
vars[count].value = value;
++count;
// Get the next var.
var = __kmp_str_token(NULL, str_delimiter, &buf);
}
}
// Fill out result.
block->bulk = bulk;
block->vars = vars;
block->count = count;
}
/* Windows* OS (actually, DOS) environment block is a piece of memory with
environment variables. Each variable is terminated with zero byte, entire
block is terminated with one extra zero byte, so we have two zero bytes at
the end of environment block, e. g.:
"HOME=C:\\users\\lev\x00OS=Windows_NT\x00\x00"
It is not clear how empty environment is represented. "\x00\x00"?
*/
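For illustration, the layout described above can be walked with portable code once the block is in memory; this sketch mirrors the scanning loop in ___kmp_env_blk_parse_windows but collects std::string copies instead of filling a kmp_env_blk_t:

```cpp
#include <cassert>
#include <cstring>
#include <string>
#include <vector>

// Walk a DOS-style environment block: each "NAME=value" entry is
// NUL-terminated, and one extra NUL terminates the whole block.
static std::vector<std::string> parse_dos_block(const char *env) {
  std::vector<std::string> vars;
  for (const char *p = env; *p != '\0'; p += std::strlen(p) + 1)
    vars.emplace_back(p);
  return vars;
}
```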
#if KMP_OS_WINDOWS
static void ___kmp_env_blk_parse_windows(
kmp_env_blk_t *block, // M: Env block to fill.
char const *env // I: Pointer to Windows* OS (DOS) environment block.
) {
char *bulk = NULL;
kmp_env_var_t *vars = NULL;
int count = 0; // Number of used elements in vars array.
int size = 0; // Size of bulk.
char *name; // Pointer to name of variable.
char *value; // Pointer to value.
if (env != NULL) {
// Loop thru all the vars in environment block. Count variables, find size
// of block.
{
char const *var; // Pointer to beginning of var.
int len; // Length of variable.
count = 0;
      // The first variable starts at the beginning of the environment block.
      var = env;
      len = KMP_STRLEN(var);
      while (len != 0) {
        ++count;
        size = size + len + 1;
        // Move pointer to the beginning of the next variable.
        var = var + len + 1;
        len = KMP_STRLEN(var);
      }
      size = size + 1; // Total size of env block, including terminating zero
      // byte.
}
// Copy original block to bulk, we will modify bulk, not original block.
bulk = (char *)allocate(size);
KMP_MEMCPY_S(bulk, size, env, size);
// Allocate vars array.
vars = (kmp_env_var_t *)allocate(count * sizeof(kmp_env_var_t));
// Loop thru all the vars, now in bulk.
{
char *var; // Pointer to beginning of var.
int len; // Length of variable.
count = 0;
var = bulk;
len = KMP_STRLEN(var);
while (len != 0) {
// Save variable in vars array.
__kmp_str_split(var, '=', &name, &value);
vars[count].name = name;
vars[count].value = value;
++count;
// Get the next var.
var = var + len + 1;
len = KMP_STRLEN(var);
}
}
}
// Fill out result.
block->bulk = bulk;
block->vars = vars;
block->count = count;
}
#endif
/* Unix environment block is an array of pointers to variables; the last
   pointer in the array is NULL:
{ "HOME=/home/lev", "TERM=xterm", NULL }
*/
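A standalone sketch of scanning such a NULL-terminated array, as the counting loop in ___kmp_env_blk_parse_unix does:

```cpp
#include <cassert>

// Count entries in a Unix-style environment: a NULL-terminated array
// of "NAME=value" pointers, as with 'extern char **environ'.
static int count_env_vars(char **env) {
  int count = 0;
  while (env[count] != nullptr)
    ++count;
  return count;
}
```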
static void
___kmp_env_blk_parse_unix(kmp_env_blk_t *block, // M: Env block to fill.
char **env // I: Unix environment to parse.
) {
char *bulk = NULL;
kmp_env_var_t *vars = NULL;
int count = 0;
int size = 0; // Size of bulk.
// Count number of variables and length of required bulk.
{
count = 0;
size = 0;
while (env[count] != NULL) {
size += KMP_STRLEN(env[count]) + 1;
++count;
}
}
// Allocate memory.
bulk = (char *)allocate(size);
vars = (kmp_env_var_t *)allocate(count * sizeof(kmp_env_var_t));
// Loop thru all the vars.
{
char *var; // Pointer to beginning of var.
char *name; // Pointer to name of variable.
char *value; // Pointer to value.
int len; // Length of variable.
int i;
var = bulk;
for (i = 0; i < count; ++i) {
// Copy variable to bulk.
len = KMP_STRLEN(env[i]);
KMP_MEMCPY_S(var, size, env[i], len + 1);
// Save found variable in vars array.
__kmp_str_split(var, '=', &name, &value);
vars[i].name = name;
vars[i].value = value;
// Move pointer.
var += len + 1;
}
}
// Fill out result.
block->bulk = bulk;
block->vars = vars;
block->count = count;
}
void __kmp_env_blk_init(kmp_env_blk_t *block, // M: Block to initialize.
char const *bulk // I: Initialization string, or NULL.
) {
if (bulk != NULL) {
___kmp_env_blk_parse_string(block, bulk);
} else {
#if KMP_OS_UNIX
___kmp_env_blk_parse_unix(block, environ);
#elif KMP_OS_WINDOWS
{
char *mem = GetEnvironmentStrings();
if (mem == NULL) {
DWORD error = GetLastError();
__kmp_fatal(KMP_MSG(CantGetEnvironment), KMP_ERR(error),
__kmp_msg_null);
}
___kmp_env_blk_parse_windows(block, mem);
FreeEnvironmentStrings(mem);
}
#else
#error Unknown or unsupported OS.
#endif
}
} // __kmp_env_blk_init
static int ___kmp_env_var_cmp( // Comparison function for qsort().
kmp_env_var_t const *lhs, kmp_env_var_t const *rhs) {
return strcmp(lhs->name, rhs->name);
}
void __kmp_env_blk_sort(
kmp_env_blk_t *block // M: Block of environment variables to sort.
) {
qsort(CCAST(kmp_env_var_t *, block->vars), block->count,
sizeof(kmp_env_var_t),
(int (*)(void const *, void const *)) & ___kmp_env_var_cmp);
} // __kmp_env_blk_sort
void __kmp_env_blk_free(
kmp_env_blk_t *block // M: Block of environment variables to free.
) {
KMP_INTERNAL_FREE(CCAST(kmp_env_var_t *, block->vars));
__kmp_str_free(&(block->bulk));
block->count = 0;
block->vars = NULL;
} // __kmp_env_blk_free
char const * // R: Value of variable or NULL if variable does not exist.
__kmp_env_blk_var(
kmp_env_blk_t *block, // I: Block of environment variables.
char const *name // I: Name of variable to find.
) {
int i;
for (i = 0; i < block->count; ++i) {
if (strcmp(block->vars[i].name, name) == 0) {
return block->vars[i].value;
}
}
return NULL;
} // __kmp_env_blk_var
// end of file //


@@ -0,0 +1,78 @@
/*
 * kmp_environment.h -- Handle environment variables OS-independently.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_ENVIRONMENT_H
#define KMP_ENVIRONMENT_H
#ifdef __cplusplus
extern "C" {
#endif
// Return a copy of the value of environment variable or NULL if the variable
// does not exist.
// *Note*: Returned pointer *must* be freed after use with __kmp_env_free().
char *__kmp_env_get(char const *name);
void __kmp_env_free(char const **value);
// Return 1 if the environment variable exists, or 0 if it does not exist.
int __kmp_env_exists(char const *name);
// Set the environment variable.
void __kmp_env_set(char const *name, char const *value, int overwrite);
// Unset (remove) environment variable.
void __kmp_env_unset(char const *name);
// -----------------------------------------------------------------------------
// Working with environment blocks.
/* kmp_env_blk_t is read-only collection of environment variables (or
environment-like). Usage:
kmp_env_blk_t block;
__kmp_env_blk_init( & block, NULL ); // Initialize block from process
// environment.
// or
   __kmp_env_blk_init( & block, "KMP_WARNINGS=1|KMP_AFFINITY=none" ); // from string
__kmp_env_blk_sort( & block ); // Optionally, sort list.
for ( i = 0; i < block.count; ++ i ) {
// Process block.vars[ i ].name and block.vars[ i ].value...
}
   __kmp_env_blk_free( & block );
*/
struct __kmp_env_var {
char *name;
char *value;
};
typedef struct __kmp_env_var kmp_env_var_t;
struct __kmp_env_blk {
char *bulk;
kmp_env_var_t *vars;
int count;
};
typedef struct __kmp_env_blk kmp_env_blk_t;
void __kmp_env_blk_init(kmp_env_blk_t *block, char const *bulk);
void __kmp_env_blk_free(kmp_env_blk_t *block);
void __kmp_env_blk_sort(kmp_env_blk_t *block);
char const *__kmp_env_blk_var(kmp_env_blk_t *block, char const *name);
#ifdef __cplusplus
}
#endif
#endif // KMP_ENVIRONMENT_H
// end of file //

462
runtime/src/kmp_error.cpp Normal file

@@ -0,0 +1,462 @@
/*
* kmp_error.cpp -- KPTS functions for error checking at runtime
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp.h"
#include "kmp_error.h"
#include "kmp_i18n.h"
#include "kmp_str.h"
/* ------------------------------------------------------------------------ */
#define MIN_STACK 100
static char const *cons_text_c[] = {
"(none)", "\"parallel\"", "work-sharing", /* this is not called "for"
because of lowering of
"sections" pragmas */
"\"ordered\" work-sharing", /* this is not called "for ordered" because of
lowering of "sections" pragmas */
"\"sections\"",
"work-sharing", /* this is not called "single" because of lowering of
"sections" pragmas */
"\"taskq\"", "\"taskq\"", "\"taskq ordered\"", "\"critical\"",
"\"ordered\"", /* in PARALLEL */
"\"ordered\"", /* in PDO */
"\"ordered\"", /* in TASKQ */
"\"master\"", "\"reduce\"", "\"barrier\""};
#define get_src(ident) ((ident) == NULL ? NULL : (ident)->psource)
#define PUSH_MSG(ct, ident) \
"\tpushing on stack: %s (%s)\n", cons_text_c[(ct)], get_src((ident))
#define POP_MSG(p) \
"\tpopping off stack: %s (%s)\n", cons_text_c[(p)->stack_data[tos].type], \
get_src((p)->stack_data[tos].ident)
static int const cons_text_c_num = sizeof(cons_text_c) / sizeof(char const *);
/* --------------- START OF STATIC LOCAL ROUTINES ------------------------- */
static void __kmp_check_null_func(void) { /* nothing to do */
}
static void __kmp_expand_cons_stack(int gtid, struct cons_header *p) {
int i;
struct cons_data *d;
/* TODO for monitor perhaps? */
if (gtid < 0)
__kmp_check_null_func();
KE_TRACE(10, ("expand cons_stack (%d %d)\n", gtid, __kmp_get_gtid()));
d = p->stack_data;
p->stack_size = (p->stack_size * 2) + 100;
/* TODO free the old data */
p->stack_data = (struct cons_data *)__kmp_allocate(sizeof(struct cons_data) *
(p->stack_size + 1));
for (i = p->stack_top; i >= 0; --i)
p->stack_data[i] = d[i];
/* NOTE: we do not free the old stack_data */
}
// NOTE: Function returns allocated memory, caller must free it!
static char *__kmp_pragma(int ct, ident_t const *ident) {
char const *cons = NULL; // Construct name.
char *file = NULL; // File name.
char *func = NULL; // Function (routine) name.
char *line = NULL; // Line number.
kmp_str_buf_t buffer;
kmp_msg_t prgm;
__kmp_str_buf_init(&buffer);
if (0 < ct && ct < cons_text_c_num) {
cons = cons_text_c[ct];
} else {
KMP_DEBUG_ASSERT(0);
}
if (ident != NULL && ident->psource != NULL) {
char *tail = NULL;
__kmp_str_buf_print(&buffer, "%s",
ident->psource); // Copy source to buffer.
// Split string in buffer to file, func, and line.
tail = buffer.str;
__kmp_str_split(tail, ';', NULL, &tail);
__kmp_str_split(tail, ';', &file, &tail);
__kmp_str_split(tail, ';', &func, &tail);
__kmp_str_split(tail, ';', &line, &tail);
}
prgm = __kmp_msg_format(kmp_i18n_fmt_Pragma, cons, file, func, line);
__kmp_str_buf_free(&buffer);
return prgm.str;
} // __kmp_pragma
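The splits above suggest ident->psource is a ';'-separated record whose leading field is discarded, with file, function, and line following. A standalone sketch of that split (the exact field layout used here is an assumption for illustration, not taken from the ident_t definition):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Split a psource-style string on ';'. For a hypothetical layout
// ";file;func;line;", indices 1..3 then hold file, func, and line,
// matching the successive __kmp_str_split calls in __kmp_pragma.
static std::vector<std::string> split_fields(const std::string &s) {
  std::vector<std::string> fields;
  std::string::size_type pos = 0;
  for (;;) {
    std::string::size_type semi = s.find(';', pos);
    if (semi == std::string::npos) {
      fields.push_back(s.substr(pos));
      break;
    }
    fields.push_back(s.substr(pos, semi - pos));
    pos = semi + 1;
  }
  return fields;
}
```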
/* ----------------- END OF STATIC LOCAL ROUTINES ------------------------- */
void __kmp_error_construct(kmp_i18n_id_t id, // Message identifier.
enum cons_type ct, // Construct type.
ident_t const *ident // Construct ident.
) {
char *construct = __kmp_pragma(ct, ident);
__kmp_fatal(__kmp_msg_format(id, construct), __kmp_msg_null);
KMP_INTERNAL_FREE(construct);
}
void __kmp_error_construct2(kmp_i18n_id_t id, // Message identifier.
enum cons_type ct, // First construct type.
ident_t const *ident, // First construct ident.
struct cons_data const *cons // Second construct.
) {
char *construct1 = __kmp_pragma(ct, ident);
char *construct2 = __kmp_pragma(cons->type, cons->ident);
__kmp_fatal(__kmp_msg_format(id, construct1, construct2), __kmp_msg_null);
KMP_INTERNAL_FREE(construct1);
KMP_INTERNAL_FREE(construct2);
}
struct cons_header *__kmp_allocate_cons_stack(int gtid) {
struct cons_header *p;
/* TODO for monitor perhaps? */
if (gtid < 0) {
__kmp_check_null_func();
}
KE_TRACE(10, ("allocate cons_stack (%d)\n", gtid));
p = (struct cons_header *)__kmp_allocate(sizeof(struct cons_header));
p->p_top = p->w_top = p->s_top = 0;
p->stack_data = (struct cons_data *)__kmp_allocate(sizeof(struct cons_data) *
(MIN_STACK + 1));
p->stack_size = MIN_STACK;
p->stack_top = 0;
p->stack_data[0].type = ct_none;
p->stack_data[0].prev = 0;
p->stack_data[0].ident = NULL;
return p;
}
void __kmp_free_cons_stack(void *ptr) {
struct cons_header *p = (struct cons_header *)ptr;
if (p != NULL) {
if (p->stack_data != NULL) {
__kmp_free(p->stack_data);
p->stack_data = NULL;
}
__kmp_free(p);
}
}
#if KMP_DEBUG
static void dump_cons_stack(int gtid, struct cons_header *p) {
int i;
int tos = p->stack_top;
kmp_str_buf_t buffer;
__kmp_str_buf_init(&buffer);
__kmp_str_buf_print(
&buffer,
"+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-\n");
__kmp_str_buf_print(&buffer,
"Begin construct stack with %d items for thread %d\n",
tos, gtid);
__kmp_str_buf_print(&buffer, " stack_top=%d { P=%d, W=%d, S=%d }\n", tos,
p->p_top, p->w_top, p->s_top);
for (i = tos; i > 0; i--) {
struct cons_data *c = &(p->stack_data[i]);
__kmp_str_buf_print(
&buffer, " stack_data[%2d] = { %s (%s) %d %p }\n", i,
cons_text_c[c->type], get_src(c->ident), c->prev, c->name);
}
__kmp_str_buf_print(&buffer, "End construct stack for thread %d\n", gtid);
__kmp_str_buf_print(
&buffer,
"+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-\n");
__kmp_debug_printf("%s", buffer.str);
__kmp_str_buf_free(&buffer);
}
#endif
void __kmp_push_parallel(int gtid, ident_t const *ident) {
int tos;
struct cons_header *p = __kmp_threads[gtid]->th.th_cons;
KMP_DEBUG_ASSERT(__kmp_threads[gtid]->th.th_cons);
KE_TRACE(10, ("__kmp_push_parallel (%d %d)\n", gtid, __kmp_get_gtid()));
KE_TRACE(100, (PUSH_MSG(ct_parallel, ident)));
if (p->stack_top >= p->stack_size) {
__kmp_expand_cons_stack(gtid, p);
}
tos = ++p->stack_top;
p->stack_data[tos].type = ct_parallel;
p->stack_data[tos].prev = p->p_top;
p->stack_data[tos].ident = ident;
p->stack_data[tos].name = NULL;
p->p_top = tos;
KE_DUMP(1000, dump_cons_stack(gtid, p));
}
void __kmp_check_workshare(int gtid, enum cons_type ct, ident_t const *ident) {
struct cons_header *p = __kmp_threads[gtid]->th.th_cons;
KMP_DEBUG_ASSERT(__kmp_threads[gtid]->th.th_cons);
KE_TRACE(10, ("__kmp_check_workshare (%d %d)\n", gtid, __kmp_get_gtid()));
if (p->stack_top >= p->stack_size) {
__kmp_expand_cons_stack(gtid, p);
}
if (p->w_top > p->p_top &&
!(IS_CONS_TYPE_TASKQ(p->stack_data[p->w_top].type) &&
IS_CONS_TYPE_TASKQ(ct))) {
// We are already in a WORKSHARE construct for this PARALLEL region.
__kmp_error_construct2(kmp_i18n_msg_CnsInvalidNesting, ct, ident,
&p->stack_data[p->w_top]);
}
if (p->s_top > p->p_top) {
// We are already in a SYNC construct for this PARALLEL region.
__kmp_error_construct2(kmp_i18n_msg_CnsInvalidNesting, ct, ident,
&p->stack_data[p->s_top]);
}
}
void __kmp_push_workshare(int gtid, enum cons_type ct, ident_t const *ident) {
int tos;
struct cons_header *p = __kmp_threads[gtid]->th.th_cons;
KE_TRACE(10, ("__kmp_push_workshare (%d %d)\n", gtid, __kmp_get_gtid()));
__kmp_check_workshare(gtid, ct, ident);
KE_TRACE(100, (PUSH_MSG(ct, ident)));
tos = ++p->stack_top;
p->stack_data[tos].type = ct;
p->stack_data[tos].prev = p->w_top;
p->stack_data[tos].ident = ident;
p->stack_data[tos].name = NULL;
p->w_top = tos;
KE_DUMP(1000, dump_cons_stack(gtid, p));
}
void
#if KMP_USE_DYNAMIC_LOCK
__kmp_check_sync( int gtid, enum cons_type ct, ident_t const * ident, kmp_user_lock_p lck, kmp_uint32 seq )
#else
__kmp_check_sync( int gtid, enum cons_type ct, ident_t const * ident, kmp_user_lock_p lck )
#endif
{
struct cons_header *p = __kmp_threads[gtid]->th.th_cons;
KE_TRACE(10, ("__kmp_check_sync (gtid=%d)\n", __kmp_get_gtid()));
if (p->stack_top >= p->stack_size)
__kmp_expand_cons_stack(gtid, p);
if (ct == ct_ordered_in_parallel || ct == ct_ordered_in_pdo ||
ct == ct_ordered_in_taskq) {
if (p->w_top <= p->p_top) {
/* we are not in a worksharing construct */
#ifdef BUILD_PARALLEL_ORDERED
/* do not report error messages for PARALLEL ORDERED */
KMP_ASSERT(ct == ct_ordered_in_parallel);
#else
__kmp_error_construct(kmp_i18n_msg_CnsBoundToWorksharing, ct, ident);
#endif /* BUILD_PARALLEL_ORDERED */
} else {
/* inside a WORKSHARING construct for this PARALLEL region */
if (!IS_CONS_TYPE_ORDERED(p->stack_data[p->w_top].type)) {
if (p->stack_data[p->w_top].type == ct_taskq) {
__kmp_error_construct2(kmp_i18n_msg_CnsNotInTaskConstruct, ct, ident,
&p->stack_data[p->w_top]);
} else {
__kmp_error_construct2(kmp_i18n_msg_CnsNoOrderedClause, ct, ident,
&p->stack_data[p->w_top]);
}
}
}
if (p->s_top > p->p_top && p->s_top > p->w_top) {
/* inside a sync construct which is inside a worksharing construct */
int index = p->s_top;
enum cons_type stack_type;
stack_type = p->stack_data[index].type;
if (stack_type == ct_critical ||
((stack_type == ct_ordered_in_parallel ||
stack_type == ct_ordered_in_pdo ||
stack_type ==
ct_ordered_in_taskq) && /* C doesn't allow named ordered;
ordered in ordered gets error */
p->stack_data[index].ident != NULL &&
(p->stack_data[index].ident->flags & KMP_IDENT_KMPC))) {
/* we are in ORDERED which is inside an ORDERED or CRITICAL construct */
__kmp_error_construct2(kmp_i18n_msg_CnsInvalidNesting, ct, ident,
&p->stack_data[index]);
}
}
} else if (ct == ct_critical) {
#if KMP_USE_DYNAMIC_LOCK
if (lck != NULL &&
__kmp_get_user_lock_owner(lck, seq) ==
gtid) { /* this thread already has lock for this critical section */
#else
if (lck != NULL &&
__kmp_get_user_lock_owner(lck) ==
gtid) { /* this thread already has lock for this critical section */
#endif
int index = p->s_top;
struct cons_data cons = {NULL, ct_critical, 0, NULL};
/* walk up construct stack and try to find critical with matching name */
while (index != 0 && p->stack_data[index].name != lck) {
index = p->stack_data[index].prev;
}
if (index != 0) {
/* found match on the stack (may not always because of interleaved
* critical for Fortran) */
cons = p->stack_data[index];
}
/* we are in CRITICAL which is inside a CRITICAL construct of same name */
__kmp_error_construct2(kmp_i18n_msg_CnsNestingSameName, ct, ident, &cons);
}
} else if (ct == ct_master || ct == ct_reduce) {
if (p->w_top > p->p_top) {
/* inside a WORKSHARING construct for this PARALLEL region */
__kmp_error_construct2(kmp_i18n_msg_CnsInvalidNesting, ct, ident,
&p->stack_data[p->w_top]);
}
if (ct == ct_reduce && p->s_top > p->p_top) {
      /* inside another SYNC construct for this PARALLEL region */
__kmp_error_construct2(kmp_i18n_msg_CnsInvalidNesting, ct, ident,
&p->stack_data[p->s_top]);
}
}
}
void
#if KMP_USE_DYNAMIC_LOCK
__kmp_push_sync( int gtid, enum cons_type ct, ident_t const * ident, kmp_user_lock_p lck, kmp_uint32 seq )
#else
__kmp_push_sync( int gtid, enum cons_type ct, ident_t const * ident, kmp_user_lock_p lck )
#endif
{
int tos;
struct cons_header *p = __kmp_threads[gtid]->th.th_cons;
KMP_ASSERT(gtid == __kmp_get_gtid());
KE_TRACE(10, ("__kmp_push_sync (gtid=%d)\n", gtid));
#if KMP_USE_DYNAMIC_LOCK
__kmp_check_sync(gtid, ct, ident, lck, seq);
#else
__kmp_check_sync(gtid, ct, ident, lck);
#endif
KE_TRACE(100, (PUSH_MSG(ct, ident)));
tos = ++p->stack_top;
p->stack_data[tos].type = ct;
p->stack_data[tos].prev = p->s_top;
p->stack_data[tos].ident = ident;
p->stack_data[tos].name = lck;
p->s_top = tos;
KE_DUMP(1000, dump_cons_stack(gtid, p));
}
/* ------------------------------------------------------------------------ */
void __kmp_pop_parallel(int gtid, ident_t const *ident) {
int tos;
struct cons_header *p = __kmp_threads[gtid]->th.th_cons;
tos = p->stack_top;
KE_TRACE(10, ("__kmp_pop_parallel (%d %d)\n", gtid, __kmp_get_gtid()));
if (tos == 0 || p->p_top == 0) {
__kmp_error_construct(kmp_i18n_msg_CnsDetectedEnd, ct_parallel, ident);
}
if (tos != p->p_top || p->stack_data[tos].type != ct_parallel) {
__kmp_error_construct2(kmp_i18n_msg_CnsExpectedEnd, ct_parallel, ident,
&p->stack_data[tos]);
}
KE_TRACE(100, (POP_MSG(p)));
p->p_top = p->stack_data[tos].prev;
p->stack_data[tos].type = ct_none;
p->stack_data[tos].ident = NULL;
p->stack_top = tos - 1;
KE_DUMP(1000, dump_cons_stack(gtid, p));
}
enum cons_type __kmp_pop_workshare(int gtid, enum cons_type ct,
ident_t const *ident) {
int tos;
struct cons_header *p = __kmp_threads[gtid]->th.th_cons;
tos = p->stack_top;
KE_TRACE(10, ("__kmp_pop_workshare (%d %d)\n", gtid, __kmp_get_gtid()));
if (tos == 0 || p->w_top == 0) {
__kmp_error_construct(kmp_i18n_msg_CnsDetectedEnd, ct, ident);
}
if (tos != p->w_top ||
(p->stack_data[tos].type != ct &&
// below are two exceptions to the rule that construct types must match
!(p->stack_data[tos].type == ct_pdo_ordered && ct == ct_pdo) &&
!(p->stack_data[tos].type == ct_task_ordered && ct == ct_task))) {
__kmp_check_null_func();
__kmp_error_construct2(kmp_i18n_msg_CnsExpectedEnd, ct, ident,
&p->stack_data[tos]);
}
KE_TRACE(100, (POP_MSG(p)));
p->w_top = p->stack_data[tos].prev;
p->stack_data[tos].type = ct_none;
p->stack_data[tos].ident = NULL;
p->stack_top = tos - 1;
KE_DUMP(1000, dump_cons_stack(gtid, p));
return p->stack_data[p->w_top].type;
}
void __kmp_pop_sync(int gtid, enum cons_type ct, ident_t const *ident) {
int tos;
struct cons_header *p = __kmp_threads[gtid]->th.th_cons;
tos = p->stack_top;
KE_TRACE(10, ("__kmp_pop_sync (%d %d)\n", gtid, __kmp_get_gtid()));
if (tos == 0 || p->s_top == 0) {
__kmp_error_construct(kmp_i18n_msg_CnsDetectedEnd, ct, ident);
}
if (tos != p->s_top || p->stack_data[tos].type != ct) {
__kmp_check_null_func();
__kmp_error_construct2(kmp_i18n_msg_CnsExpectedEnd, ct, ident,
&p->stack_data[tos]);
}
if (gtid < 0) {
__kmp_check_null_func();
}
KE_TRACE(100, (POP_MSG(p)));
p->s_top = p->stack_data[tos].prev;
p->stack_data[tos].type = ct_none;
p->stack_data[tos].ident = NULL;
p->stack_top = tos - 1;
KE_DUMP(1000, dump_cons_stack(gtid, p));
}
/* ------------------------------------------------------------------------ */
void __kmp_check_barrier(int gtid, enum cons_type ct, ident_t const *ident) {
struct cons_header *p = __kmp_threads[gtid]->th.th_cons;
KE_TRACE(10, ("__kmp_check_barrier (loc: %p, gtid: %d %d)\n", ident, gtid,
__kmp_get_gtid()));
if (ident != 0) {
__kmp_check_null_func();
}
if (p->w_top > p->p_top) {
/* we are already in a WORKSHARING construct for this PARALLEL region */
__kmp_error_construct2(kmp_i18n_msg_CnsInvalidNesting, ct, ident,
&p->stack_data[p->w_top]);
}
if (p->s_top > p->p_top) {
/* we are already in a SYNC construct for this PARALLEL region */
__kmp_error_construct2(kmp_i18n_msg_CnsInvalidNesting, ct, ident,
&p->stack_data[p->s_top]);
}
}

61
runtime/src/kmp_error.h Normal file

@@ -0,0 +1,61 @@
/*
 * kmp_error.h -- KPTS functions for error checking at runtime.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_ERROR_H
#define KMP_ERROR_H
#include "kmp_i18n.h"
/* ------------------------------------------------------------------------ */
#ifdef __cplusplus
extern "C" {
#endif
void __kmp_error_construct(kmp_i18n_id_t id, enum cons_type ct,
ident_t const *ident);
void __kmp_error_construct2(kmp_i18n_id_t id, enum cons_type ct,
ident_t const *ident, struct cons_data const *cons);
struct cons_header *__kmp_allocate_cons_stack(int gtid);
void __kmp_free_cons_stack(void *ptr);
void __kmp_push_parallel(int gtid, ident_t const *ident);
void __kmp_push_workshare(int gtid, enum cons_type ct, ident_t const *ident);
#if KMP_USE_DYNAMIC_LOCK
void __kmp_push_sync(int gtid, enum cons_type ct, ident_t const *ident,
kmp_user_lock_p name, kmp_uint32);
#else
void __kmp_push_sync(int gtid, enum cons_type ct, ident_t const *ident,
kmp_user_lock_p name);
#endif
void __kmp_check_workshare(int gtid, enum cons_type ct, ident_t const *ident);
#if KMP_USE_DYNAMIC_LOCK
void __kmp_check_sync(int gtid, enum cons_type ct, ident_t const *ident,
kmp_user_lock_p name, kmp_uint32);
#else
void __kmp_check_sync(int gtid, enum cons_type ct, ident_t const *ident,
kmp_user_lock_p name);
#endif
void __kmp_pop_parallel(int gtid, ident_t const *ident);
enum cons_type __kmp_pop_workshare(int gtid, enum cons_type ct,
ident_t const *ident);
void __kmp_pop_sync(int gtid, enum cons_type ct, ident_t const *ident);
void __kmp_check_barrier(int gtid, enum cons_type ct, ident_t const *ident);
#ifdef __cplusplus
} // extern "C"
#endif
#endif // KMP_ERROR_H


@@ -0,0 +1,35 @@
/*
* kmp_ftn_cdecl.cpp -- Fortran __cdecl linkage support for OpenMP.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp.h"
#include "kmp_affinity.h"
#if KMP_OS_WINDOWS
#if defined KMP_WIN_CDECL || !KMP_DYNAMIC_LIB
#define KMP_FTN_ENTRIES KMP_FTN_UPPER
#endif
#elif KMP_OS_UNIX
#define KMP_FTN_ENTRIES KMP_FTN_PLAIN
#endif
// Note: This string is not printed when KMP_VERSION=1.
char const __kmp_version_ftncdecl[] =
KMP_VERSION_PREFIX "Fortran __cdecl OMP support: "
#ifdef KMP_FTN_ENTRIES
"yes";
#define FTN_STDCALL /* no stdcall */
#include "kmp_ftn_os.h"
#include "kmp_ftn_entry.h"
#else
"no";
#endif /* KMP_FTN_ENTRIES */

1446
runtime/src/kmp_ftn_entry.h Normal file

File diff suppressed because it is too large


@@ -0,0 +1,33 @@
/*
* kmp_ftn_extra.cpp -- Fortran 'extra' linkage support for OpenMP.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp.h"
#include "kmp_affinity.h"
#if KMP_OS_WINDOWS
#define KMP_FTN_ENTRIES KMP_FTN_PLAIN
#elif KMP_OS_UNIX
#define KMP_FTN_ENTRIES KMP_FTN_APPEND
#endif
// Note: This string is not printed when KMP_VERSION=1.
char const __kmp_version_ftnextra[] =
KMP_VERSION_PREFIX "Fortran \"extra\" OMP support: "
#ifdef KMP_FTN_ENTRIES
"yes";
#define FTN_STDCALL /* nothing to do */
#include "kmp_ftn_os.h"
#include "kmp_ftn_entry.h"
#else
"no";
#endif /* KMP_FTN_ENTRIES */

runtime/src/kmp_ftn_os.h (668 lines) Normal file

@@ -0,0 +1,668 @@
/*
* kmp_ftn_os.h -- KPTS Fortran defines header file.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_FTN_OS_H
#define KMP_FTN_OS_H
// KMP_FTN_ENTRIES may be one of: KMP_FTN_PLAIN, KMP_FTN_UPPER, KMP_FTN_APPEND,
// KMP_FTN_UAPPEND.
/* -------------------------- External definitions ------------------------ */
#if KMP_FTN_ENTRIES == KMP_FTN_PLAIN
#define FTN_SET_STACKSIZE kmp_set_stacksize
#define FTN_SET_STACKSIZE_S kmp_set_stacksize_s
#define FTN_GET_STACKSIZE kmp_get_stacksize
#define FTN_GET_STACKSIZE_S kmp_get_stacksize_s
#define FTN_SET_BLOCKTIME kmp_set_blocktime
#define FTN_GET_BLOCKTIME kmp_get_blocktime
#define FTN_SET_LIBRARY_SERIAL kmp_set_library_serial
#define FTN_SET_LIBRARY_TURNAROUND kmp_set_library_turnaround
#define FTN_SET_LIBRARY_THROUGHPUT kmp_set_library_throughput
#define FTN_SET_LIBRARY kmp_set_library
#define FTN_GET_LIBRARY kmp_get_library
#define FTN_SET_DEFAULTS kmp_set_defaults
#define FTN_SET_DISP_NUM_BUFFERS kmp_set_disp_num_buffers
#define FTN_SET_AFFINITY kmp_set_affinity
#define FTN_GET_AFFINITY kmp_get_affinity
#define FTN_GET_AFFINITY_MAX_PROC kmp_get_affinity_max_proc
#define FTN_CREATE_AFFINITY_MASK kmp_create_affinity_mask
#define FTN_DESTROY_AFFINITY_MASK kmp_destroy_affinity_mask
#define FTN_SET_AFFINITY_MASK_PROC kmp_set_affinity_mask_proc
#define FTN_UNSET_AFFINITY_MASK_PROC kmp_unset_affinity_mask_proc
#define FTN_GET_AFFINITY_MASK_PROC kmp_get_affinity_mask_proc
#define FTN_MALLOC kmp_malloc
#define FTN_ALIGNED_MALLOC kmp_aligned_malloc
#define FTN_CALLOC kmp_calloc
#define FTN_REALLOC kmp_realloc
#define FTN_KFREE kmp_free
#define FTN_GET_NUM_KNOWN_THREADS kmp_get_num_known_threads
#define FTN_SET_NUM_THREADS omp_set_num_threads
#define FTN_GET_NUM_THREADS omp_get_num_threads
#define FTN_GET_MAX_THREADS omp_get_max_threads
#define FTN_GET_THREAD_NUM omp_get_thread_num
#define FTN_GET_NUM_PROCS omp_get_num_procs
#define FTN_SET_DYNAMIC omp_set_dynamic
#define FTN_GET_DYNAMIC omp_get_dynamic
#define FTN_SET_NESTED omp_set_nested
#define FTN_GET_NESTED omp_get_nested
#define FTN_IN_PARALLEL omp_in_parallel
#define FTN_GET_THREAD_LIMIT omp_get_thread_limit
#define FTN_SET_SCHEDULE omp_set_schedule
#define FTN_GET_SCHEDULE omp_get_schedule
#define FTN_SET_MAX_ACTIVE_LEVELS omp_set_max_active_levels
#define FTN_GET_MAX_ACTIVE_LEVELS omp_get_max_active_levels
#define FTN_GET_ACTIVE_LEVEL omp_get_active_level
#define FTN_GET_LEVEL omp_get_level
#define FTN_GET_ANCESTOR_THREAD_NUM omp_get_ancestor_thread_num
#define FTN_GET_TEAM_SIZE omp_get_team_size
#define FTN_IN_FINAL omp_in_final
// #define FTN_SET_PROC_BIND omp_set_proc_bind
#define FTN_GET_PROC_BIND omp_get_proc_bind
// #define FTN_CURR_PROC_BIND omp_curr_proc_bind
#if OMP_40_ENABLED
#define FTN_GET_NUM_TEAMS omp_get_num_teams
#define FTN_GET_TEAM_NUM omp_get_team_num
#endif
#define FTN_INIT_LOCK omp_init_lock
#if KMP_USE_DYNAMIC_LOCK
#define FTN_INIT_LOCK_WITH_HINT omp_init_lock_with_hint
#define FTN_INIT_NEST_LOCK_WITH_HINT omp_init_nest_lock_with_hint
#endif
#define FTN_DESTROY_LOCK omp_destroy_lock
#define FTN_SET_LOCK omp_set_lock
#define FTN_UNSET_LOCK omp_unset_lock
#define FTN_TEST_LOCK omp_test_lock
#define FTN_INIT_NEST_LOCK omp_init_nest_lock
#define FTN_DESTROY_NEST_LOCK omp_destroy_nest_lock
#define FTN_SET_NEST_LOCK omp_set_nest_lock
#define FTN_UNSET_NEST_LOCK omp_unset_nest_lock
#define FTN_TEST_NEST_LOCK omp_test_nest_lock
#define FTN_SET_WARNINGS_ON kmp_set_warnings_on
#define FTN_SET_WARNINGS_OFF kmp_set_warnings_off
#define FTN_GET_WTIME omp_get_wtime
#define FTN_GET_WTICK omp_get_wtick
#if OMP_40_ENABLED
#define FTN_GET_NUM_DEVICES omp_get_num_devices
#define FTN_GET_DEFAULT_DEVICE omp_get_default_device
#define FTN_SET_DEFAULT_DEVICE omp_set_default_device
#define FTN_IS_INITIAL_DEVICE omp_is_initial_device
#endif
#if OMP_40_ENABLED
#define FTN_GET_CANCELLATION omp_get_cancellation
#define FTN_GET_CANCELLATION_STATUS kmp_get_cancellation_status
#endif
#if OMP_45_ENABLED
#define FTN_GET_MAX_TASK_PRIORITY omp_get_max_task_priority
#define FTN_GET_NUM_PLACES omp_get_num_places
#define FTN_GET_PLACE_NUM_PROCS omp_get_place_num_procs
#define FTN_GET_PLACE_PROC_IDS omp_get_place_proc_ids
#define FTN_GET_PLACE_NUM omp_get_place_num
#define FTN_GET_PARTITION_NUM_PLACES omp_get_partition_num_places
#define FTN_GET_PARTITION_PLACE_NUMS omp_get_partition_place_nums
#define FTN_GET_INITIAL_DEVICE omp_get_initial_device
#ifdef KMP_STUB
#define FTN_TARGET_ALLOC omp_target_alloc
#define FTN_TARGET_FREE omp_target_free
#define FTN_TARGET_IS_PRESENT omp_target_is_present
#define FTN_TARGET_MEMCPY omp_target_memcpy
#define FTN_TARGET_MEMCPY_RECT omp_target_memcpy_rect
#define FTN_TARGET_ASSOCIATE_PTR omp_target_associate_ptr
#define FTN_TARGET_DISASSOCIATE_PTR omp_target_disassociate_ptr
#endif
#endif
#if OMP_50_ENABLED
#define FTN_CONTROL_TOOL omp_control_tool
#define FTN_SET_DEFAULT_ALLOCATOR omp_set_default_allocator
#define FTN_GET_DEFAULT_ALLOCATOR omp_get_default_allocator
#define FTN_ALLOC omp_alloc
#define FTN_FREE omp_free
#define FTN_GET_DEVICE_NUM omp_get_device_num
#define FTN_SET_AFFINITY_FORMAT omp_set_affinity_format
#define FTN_GET_AFFINITY_FORMAT omp_get_affinity_format
#define FTN_DISPLAY_AFFINITY omp_display_affinity
#define FTN_CAPTURE_AFFINITY omp_capture_affinity
#endif
#endif /* KMP_FTN_PLAIN */
/* ------------------------------------------------------------------------ */
#if KMP_FTN_ENTRIES == KMP_FTN_APPEND
#define FTN_SET_STACKSIZE kmp_set_stacksize_
#define FTN_SET_STACKSIZE_S kmp_set_stacksize_s_
#define FTN_GET_STACKSIZE kmp_get_stacksize_
#define FTN_GET_STACKSIZE_S kmp_get_stacksize_s_
#define FTN_SET_BLOCKTIME kmp_set_blocktime_
#define FTN_GET_BLOCKTIME kmp_get_blocktime_
#define FTN_SET_LIBRARY_SERIAL kmp_set_library_serial_
#define FTN_SET_LIBRARY_TURNAROUND kmp_set_library_turnaround_
#define FTN_SET_LIBRARY_THROUGHPUT kmp_set_library_throughput_
#define FTN_SET_LIBRARY kmp_set_library_
#define FTN_GET_LIBRARY kmp_get_library_
#define FTN_SET_DEFAULTS kmp_set_defaults_
#define FTN_SET_DISP_NUM_BUFFERS kmp_set_disp_num_buffers_
#define FTN_SET_AFFINITY kmp_set_affinity_
#define FTN_GET_AFFINITY kmp_get_affinity_
#define FTN_GET_AFFINITY_MAX_PROC kmp_get_affinity_max_proc_
#define FTN_CREATE_AFFINITY_MASK kmp_create_affinity_mask_
#define FTN_DESTROY_AFFINITY_MASK kmp_destroy_affinity_mask_
#define FTN_SET_AFFINITY_MASK_PROC kmp_set_affinity_mask_proc_
#define FTN_UNSET_AFFINITY_MASK_PROC kmp_unset_affinity_mask_proc_
#define FTN_GET_AFFINITY_MASK_PROC kmp_get_affinity_mask_proc_
#define FTN_MALLOC kmp_malloc_
#define FTN_ALIGNED_MALLOC kmp_aligned_malloc_
#define FTN_CALLOC kmp_calloc_
#define FTN_REALLOC kmp_realloc_
#define FTN_KFREE kmp_free_
#define FTN_GET_NUM_KNOWN_THREADS kmp_get_num_known_threads_
#define FTN_SET_NUM_THREADS omp_set_num_threads_
#define FTN_GET_NUM_THREADS omp_get_num_threads_
#define FTN_GET_MAX_THREADS omp_get_max_threads_
#define FTN_GET_THREAD_NUM omp_get_thread_num_
#define FTN_GET_NUM_PROCS omp_get_num_procs_
#define FTN_SET_DYNAMIC omp_set_dynamic_
#define FTN_GET_DYNAMIC omp_get_dynamic_
#define FTN_SET_NESTED omp_set_nested_
#define FTN_GET_NESTED omp_get_nested_
#define FTN_IN_PARALLEL omp_in_parallel_
#define FTN_GET_THREAD_LIMIT omp_get_thread_limit_
#define FTN_SET_SCHEDULE omp_set_schedule_
#define FTN_GET_SCHEDULE omp_get_schedule_
#define FTN_SET_MAX_ACTIVE_LEVELS omp_set_max_active_levels_
#define FTN_GET_MAX_ACTIVE_LEVELS omp_get_max_active_levels_
#define FTN_GET_ACTIVE_LEVEL omp_get_active_level_
#define FTN_GET_LEVEL omp_get_level_
#define FTN_GET_ANCESTOR_THREAD_NUM omp_get_ancestor_thread_num_
#define FTN_GET_TEAM_SIZE omp_get_team_size_
#define FTN_IN_FINAL omp_in_final_
// #define FTN_SET_PROC_BIND omp_set_proc_bind_
#define FTN_GET_PROC_BIND omp_get_proc_bind_
// #define FTN_CURR_PROC_BIND omp_curr_proc_bind_
#if OMP_40_ENABLED
#define FTN_GET_NUM_TEAMS omp_get_num_teams_
#define FTN_GET_TEAM_NUM omp_get_team_num_
#endif
#define FTN_INIT_LOCK omp_init_lock_
#if KMP_USE_DYNAMIC_LOCK
#define FTN_INIT_LOCK_WITH_HINT omp_init_lock_with_hint_
#define FTN_INIT_NEST_LOCK_WITH_HINT omp_init_nest_lock_with_hint_
#endif
#define FTN_DESTROY_LOCK omp_destroy_lock_
#define FTN_SET_LOCK omp_set_lock_
#define FTN_UNSET_LOCK omp_unset_lock_
#define FTN_TEST_LOCK omp_test_lock_
#define FTN_INIT_NEST_LOCK omp_init_nest_lock_
#define FTN_DESTROY_NEST_LOCK omp_destroy_nest_lock_
#define FTN_SET_NEST_LOCK omp_set_nest_lock_
#define FTN_UNSET_NEST_LOCK omp_unset_nest_lock_
#define FTN_TEST_NEST_LOCK omp_test_nest_lock_
#define FTN_SET_WARNINGS_ON kmp_set_warnings_on_
#define FTN_SET_WARNINGS_OFF kmp_set_warnings_off_
#define FTN_GET_WTIME omp_get_wtime_
#define FTN_GET_WTICK omp_get_wtick_
#if OMP_40_ENABLED
#define FTN_GET_NUM_DEVICES omp_get_num_devices_
#define FTN_GET_DEFAULT_DEVICE omp_get_default_device_
#define FTN_SET_DEFAULT_DEVICE omp_set_default_device_
#define FTN_IS_INITIAL_DEVICE omp_is_initial_device_
#endif
#if OMP_40_ENABLED
#define FTN_GET_CANCELLATION omp_get_cancellation_
#define FTN_GET_CANCELLATION_STATUS kmp_get_cancellation_status_
#endif
#if OMP_45_ENABLED
#define FTN_GET_MAX_TASK_PRIORITY omp_get_max_task_priority_
#define FTN_GET_NUM_PLACES omp_get_num_places_
#define FTN_GET_PLACE_NUM_PROCS omp_get_place_num_procs_
#define FTN_GET_PLACE_PROC_IDS omp_get_place_proc_ids_
#define FTN_GET_PLACE_NUM omp_get_place_num_
#define FTN_GET_PARTITION_NUM_PLACES omp_get_partition_num_places_
#define FTN_GET_PARTITION_PLACE_NUMS omp_get_partition_place_nums_
#define FTN_GET_INITIAL_DEVICE omp_get_initial_device_
#ifdef KMP_STUB
#define FTN_TARGET_ALLOC omp_target_alloc_
#define FTN_TARGET_FREE omp_target_free_
#define FTN_TARGET_IS_PRESENT omp_target_is_present_
#define FTN_TARGET_MEMCPY omp_target_memcpy_
#define FTN_TARGET_MEMCPY_RECT omp_target_memcpy_rect_
#define FTN_TARGET_ASSOCIATE_PTR omp_target_associate_ptr_
#define FTN_TARGET_DISASSOCIATE_PTR omp_target_disassociate_ptr_
#endif
#endif
#if OMP_50_ENABLED
#define FTN_CONTROL_TOOL omp_control_tool_
#define FTN_SET_DEFAULT_ALLOCATOR omp_set_default_allocator_
#define FTN_GET_DEFAULT_ALLOCATOR omp_get_default_allocator_
#define FTN_ALLOC omp_alloc_
#define FTN_FREE omp_free_
#define FTN_GET_DEVICE_NUM omp_get_device_num_
#define FTN_SET_AFFINITY_FORMAT omp_set_affinity_format_
#define FTN_GET_AFFINITY_FORMAT omp_get_affinity_format_
#define FTN_DISPLAY_AFFINITY omp_display_affinity_
#define FTN_CAPTURE_AFFINITY omp_capture_affinity_
#endif
#endif /* KMP_FTN_APPEND */
/* ------------------------------------------------------------------------ */
#if KMP_FTN_ENTRIES == KMP_FTN_UPPER
#define FTN_SET_STACKSIZE KMP_SET_STACKSIZE
#define FTN_SET_STACKSIZE_S KMP_SET_STACKSIZE_S
#define FTN_GET_STACKSIZE KMP_GET_STACKSIZE
#define FTN_GET_STACKSIZE_S KMP_GET_STACKSIZE_S
#define FTN_SET_BLOCKTIME KMP_SET_BLOCKTIME
#define FTN_GET_BLOCKTIME KMP_GET_BLOCKTIME
#define FTN_SET_LIBRARY_SERIAL KMP_SET_LIBRARY_SERIAL
#define FTN_SET_LIBRARY_TURNAROUND KMP_SET_LIBRARY_TURNAROUND
#define FTN_SET_LIBRARY_THROUGHPUT KMP_SET_LIBRARY_THROUGHPUT
#define FTN_SET_LIBRARY KMP_SET_LIBRARY
#define FTN_GET_LIBRARY KMP_GET_LIBRARY
#define FTN_SET_DEFAULTS KMP_SET_DEFAULTS
#define FTN_SET_DISP_NUM_BUFFERS KMP_SET_DISP_NUM_BUFFERS
#define FTN_SET_AFFINITY KMP_SET_AFFINITY
#define FTN_GET_AFFINITY KMP_GET_AFFINITY
#define FTN_GET_AFFINITY_MAX_PROC KMP_GET_AFFINITY_MAX_PROC
#define FTN_CREATE_AFFINITY_MASK KMP_CREATE_AFFINITY_MASK
#define FTN_DESTROY_AFFINITY_MASK KMP_DESTROY_AFFINITY_MASK
#define FTN_SET_AFFINITY_MASK_PROC KMP_SET_AFFINITY_MASK_PROC
#define FTN_UNSET_AFFINITY_MASK_PROC KMP_UNSET_AFFINITY_MASK_PROC
#define FTN_GET_AFFINITY_MASK_PROC KMP_GET_AFFINITY_MASK_PROC
#define FTN_MALLOC KMP_MALLOC
#define FTN_ALIGNED_MALLOC KMP_ALIGNED_MALLOC
#define FTN_CALLOC KMP_CALLOC
#define FTN_REALLOC KMP_REALLOC
#define FTN_KFREE KMP_FREE
#define FTN_GET_NUM_KNOWN_THREADS KMP_GET_NUM_KNOWN_THREADS
#define FTN_SET_NUM_THREADS OMP_SET_NUM_THREADS
#define FTN_GET_NUM_THREADS OMP_GET_NUM_THREADS
#define FTN_GET_MAX_THREADS OMP_GET_MAX_THREADS
#define FTN_GET_THREAD_NUM OMP_GET_THREAD_NUM
#define FTN_GET_NUM_PROCS OMP_GET_NUM_PROCS
#define FTN_SET_DYNAMIC OMP_SET_DYNAMIC
#define FTN_GET_DYNAMIC OMP_GET_DYNAMIC
#define FTN_SET_NESTED OMP_SET_NESTED
#define FTN_GET_NESTED OMP_GET_NESTED
#define FTN_IN_PARALLEL OMP_IN_PARALLEL
#define FTN_GET_THREAD_LIMIT OMP_GET_THREAD_LIMIT
#define FTN_SET_SCHEDULE OMP_SET_SCHEDULE
#define FTN_GET_SCHEDULE OMP_GET_SCHEDULE
#define FTN_SET_MAX_ACTIVE_LEVELS OMP_SET_MAX_ACTIVE_LEVELS
#define FTN_GET_MAX_ACTIVE_LEVELS OMP_GET_MAX_ACTIVE_LEVELS
#define FTN_GET_ACTIVE_LEVEL OMP_GET_ACTIVE_LEVEL
#define FTN_GET_LEVEL OMP_GET_LEVEL
#define FTN_GET_ANCESTOR_THREAD_NUM OMP_GET_ANCESTOR_THREAD_NUM
#define FTN_GET_TEAM_SIZE OMP_GET_TEAM_SIZE
#define FTN_IN_FINAL OMP_IN_FINAL
// #define FTN_SET_PROC_BIND OMP_SET_PROC_BIND
#define FTN_GET_PROC_BIND OMP_GET_PROC_BIND
// #define FTN_CURR_PROC_BIND OMP_CURR_PROC_BIND
#if OMP_40_ENABLED
#define FTN_GET_NUM_TEAMS OMP_GET_NUM_TEAMS
#define FTN_GET_TEAM_NUM OMP_GET_TEAM_NUM
#endif
#define FTN_INIT_LOCK OMP_INIT_LOCK
#if KMP_USE_DYNAMIC_LOCK
#define FTN_INIT_LOCK_WITH_HINT OMP_INIT_LOCK_WITH_HINT
#define FTN_INIT_NEST_LOCK_WITH_HINT OMP_INIT_NEST_LOCK_WITH_HINT
#endif
#define FTN_DESTROY_LOCK OMP_DESTROY_LOCK
#define FTN_SET_LOCK OMP_SET_LOCK
#define FTN_UNSET_LOCK OMP_UNSET_LOCK
#define FTN_TEST_LOCK OMP_TEST_LOCK
#define FTN_INIT_NEST_LOCK OMP_INIT_NEST_LOCK
#define FTN_DESTROY_NEST_LOCK OMP_DESTROY_NEST_LOCK
#define FTN_SET_NEST_LOCK OMP_SET_NEST_LOCK
#define FTN_UNSET_NEST_LOCK OMP_UNSET_NEST_LOCK
#define FTN_TEST_NEST_LOCK OMP_TEST_NEST_LOCK
#define FTN_SET_WARNINGS_ON KMP_SET_WARNINGS_ON
#define FTN_SET_WARNINGS_OFF KMP_SET_WARNINGS_OFF
#define FTN_GET_WTIME OMP_GET_WTIME
#define FTN_GET_WTICK OMP_GET_WTICK
#if OMP_40_ENABLED
#define FTN_GET_NUM_DEVICES OMP_GET_NUM_DEVICES
#define FTN_GET_DEFAULT_DEVICE OMP_GET_DEFAULT_DEVICE
#define FTN_SET_DEFAULT_DEVICE OMP_SET_DEFAULT_DEVICE
#define FTN_IS_INITIAL_DEVICE OMP_IS_INITIAL_DEVICE
#endif
#if OMP_40_ENABLED
#define FTN_GET_CANCELLATION OMP_GET_CANCELLATION
#define FTN_GET_CANCELLATION_STATUS KMP_GET_CANCELLATION_STATUS
#endif
#if OMP_45_ENABLED
#define FTN_GET_MAX_TASK_PRIORITY OMP_GET_MAX_TASK_PRIORITY
#define FTN_GET_NUM_PLACES OMP_GET_NUM_PLACES
#define FTN_GET_PLACE_NUM_PROCS OMP_GET_PLACE_NUM_PROCS
#define FTN_GET_PLACE_PROC_IDS OMP_GET_PLACE_PROC_IDS
#define FTN_GET_PLACE_NUM OMP_GET_PLACE_NUM
#define FTN_GET_PARTITION_NUM_PLACES OMP_GET_PARTITION_NUM_PLACES
#define FTN_GET_PARTITION_PLACE_NUMS OMP_GET_PARTITION_PLACE_NUMS
#define FTN_GET_INITIAL_DEVICE OMP_GET_INITIAL_DEVICE
#ifdef KMP_STUB
#define FTN_TARGET_ALLOC OMP_TARGET_ALLOC
#define FTN_TARGET_FREE OMP_TARGET_FREE
#define FTN_TARGET_IS_PRESENT OMP_TARGET_IS_PRESENT
#define FTN_TARGET_MEMCPY OMP_TARGET_MEMCPY
#define FTN_TARGET_MEMCPY_RECT OMP_TARGET_MEMCPY_RECT
#define FTN_TARGET_ASSOCIATE_PTR OMP_TARGET_ASSOCIATE_PTR
#define FTN_TARGET_DISASSOCIATE_PTR OMP_TARGET_DISASSOCIATE_PTR
#endif
#endif
#if OMP_50_ENABLED
#define FTN_CONTROL_TOOL OMP_CONTROL_TOOL
#define FTN_SET_DEFAULT_ALLOCATOR OMP_SET_DEFAULT_ALLOCATOR
#define FTN_GET_DEFAULT_ALLOCATOR OMP_GET_DEFAULT_ALLOCATOR
#define FTN_ALLOC OMP_ALLOC
#define FTN_FREE OMP_FREE
#define FTN_GET_DEVICE_NUM OMP_GET_DEVICE_NUM
#define FTN_SET_AFFINITY_FORMAT OMP_SET_AFFINITY_FORMAT
#define FTN_GET_AFFINITY_FORMAT OMP_GET_AFFINITY_FORMAT
#define FTN_DISPLAY_AFFINITY OMP_DISPLAY_AFFINITY
#define FTN_CAPTURE_AFFINITY OMP_CAPTURE_AFFINITY
#endif
#endif /* KMP_FTN_UPPER */
/* ------------------------------------------------------------------------ */
#if KMP_FTN_ENTRIES == KMP_FTN_UAPPEND
#define FTN_SET_STACKSIZE KMP_SET_STACKSIZE_
#define FTN_SET_STACKSIZE_S KMP_SET_STACKSIZE_S_
#define FTN_GET_STACKSIZE KMP_GET_STACKSIZE_
#define FTN_GET_STACKSIZE_S KMP_GET_STACKSIZE_S_
#define FTN_SET_BLOCKTIME KMP_SET_BLOCKTIME_
#define FTN_GET_BLOCKTIME KMP_GET_BLOCKTIME_
#define FTN_SET_LIBRARY_SERIAL KMP_SET_LIBRARY_SERIAL_
#define FTN_SET_LIBRARY_TURNAROUND KMP_SET_LIBRARY_TURNAROUND_
#define FTN_SET_LIBRARY_THROUGHPUT KMP_SET_LIBRARY_THROUGHPUT_
#define FTN_SET_LIBRARY KMP_SET_LIBRARY_
#define FTN_GET_LIBRARY KMP_GET_LIBRARY_
#define FTN_SET_DEFAULTS KMP_SET_DEFAULTS_
#define FTN_SET_DISP_NUM_BUFFERS KMP_SET_DISP_NUM_BUFFERS_
#define FTN_SET_AFFINITY KMP_SET_AFFINITY_
#define FTN_GET_AFFINITY KMP_GET_AFFINITY_
#define FTN_GET_AFFINITY_MAX_PROC KMP_GET_AFFINITY_MAX_PROC_
#define FTN_CREATE_AFFINITY_MASK KMP_CREATE_AFFINITY_MASK_
#define FTN_DESTROY_AFFINITY_MASK KMP_DESTROY_AFFINITY_MASK_
#define FTN_SET_AFFINITY_MASK_PROC KMP_SET_AFFINITY_MASK_PROC_
#define FTN_UNSET_AFFINITY_MASK_PROC KMP_UNSET_AFFINITY_MASK_PROC_
#define FTN_GET_AFFINITY_MASK_PROC KMP_GET_AFFINITY_MASK_PROC_
#define FTN_MALLOC KMP_MALLOC_
#define FTN_ALIGNED_MALLOC KMP_ALIGNED_MALLOC_
#define FTN_CALLOC KMP_CALLOC_
#define FTN_REALLOC KMP_REALLOC_
#define FTN_KFREE KMP_FREE_
#define FTN_GET_NUM_KNOWN_THREADS KMP_GET_NUM_KNOWN_THREADS_
#define FTN_SET_NUM_THREADS OMP_SET_NUM_THREADS_
#define FTN_GET_NUM_THREADS OMP_GET_NUM_THREADS_
#define FTN_GET_MAX_THREADS OMP_GET_MAX_THREADS_
#define FTN_GET_THREAD_NUM OMP_GET_THREAD_NUM_
#define FTN_GET_NUM_PROCS OMP_GET_NUM_PROCS_
#define FTN_SET_DYNAMIC OMP_SET_DYNAMIC_
#define FTN_GET_DYNAMIC OMP_GET_DYNAMIC_
#define FTN_SET_NESTED OMP_SET_NESTED_
#define FTN_GET_NESTED OMP_GET_NESTED_
#define FTN_IN_PARALLEL OMP_IN_PARALLEL_
#define FTN_GET_THREAD_LIMIT OMP_GET_THREAD_LIMIT_
#define FTN_SET_SCHEDULE OMP_SET_SCHEDULE_
#define FTN_GET_SCHEDULE OMP_GET_SCHEDULE_
#define FTN_SET_MAX_ACTIVE_LEVELS OMP_SET_MAX_ACTIVE_LEVELS_
#define FTN_GET_MAX_ACTIVE_LEVELS OMP_GET_MAX_ACTIVE_LEVELS_
#define FTN_GET_ACTIVE_LEVEL OMP_GET_ACTIVE_LEVEL_
#define FTN_GET_LEVEL OMP_GET_LEVEL_
#define FTN_GET_ANCESTOR_THREAD_NUM OMP_GET_ANCESTOR_THREAD_NUM_
#define FTN_GET_TEAM_SIZE OMP_GET_TEAM_SIZE_
#define FTN_IN_FINAL OMP_IN_FINAL_
// #define FTN_SET_PROC_BIND OMP_SET_PROC_BIND_
#define FTN_GET_PROC_BIND OMP_GET_PROC_BIND_
// #define FTN_CURR_PROC_BIND OMP_CURR_PROC_BIND_
#if OMP_40_ENABLED
#define FTN_GET_NUM_TEAMS OMP_GET_NUM_TEAMS_
#define FTN_GET_TEAM_NUM OMP_GET_TEAM_NUM_
#endif
#define FTN_INIT_LOCK OMP_INIT_LOCK_
#if KMP_USE_DYNAMIC_LOCK
#define FTN_INIT_LOCK_WITH_HINT OMP_INIT_LOCK_WITH_HINT_
#define FTN_INIT_NEST_LOCK_WITH_HINT OMP_INIT_NEST_LOCK_WITH_HINT_
#endif
#define FTN_DESTROY_LOCK OMP_DESTROY_LOCK_
#define FTN_SET_LOCK OMP_SET_LOCK_
#define FTN_UNSET_LOCK OMP_UNSET_LOCK_
#define FTN_TEST_LOCK OMP_TEST_LOCK_
#define FTN_INIT_NEST_LOCK OMP_INIT_NEST_LOCK_
#define FTN_DESTROY_NEST_LOCK OMP_DESTROY_NEST_LOCK_
#define FTN_SET_NEST_LOCK OMP_SET_NEST_LOCK_
#define FTN_UNSET_NEST_LOCK OMP_UNSET_NEST_LOCK_
#define FTN_TEST_NEST_LOCK OMP_TEST_NEST_LOCK_
#define FTN_SET_WARNINGS_ON KMP_SET_WARNINGS_ON_
#define FTN_SET_WARNINGS_OFF KMP_SET_WARNINGS_OFF_
#define FTN_GET_WTIME OMP_GET_WTIME_
#define FTN_GET_WTICK OMP_GET_WTICK_
#if OMP_40_ENABLED
#define FTN_GET_NUM_DEVICES OMP_GET_NUM_DEVICES_
#define FTN_GET_DEFAULT_DEVICE OMP_GET_DEFAULT_DEVICE_
#define FTN_SET_DEFAULT_DEVICE OMP_SET_DEFAULT_DEVICE_
#define FTN_IS_INITIAL_DEVICE OMP_IS_INITIAL_DEVICE_
#endif
#if OMP_40_ENABLED
#define FTN_GET_CANCELLATION OMP_GET_CANCELLATION_
#define FTN_GET_CANCELLATION_STATUS KMP_GET_CANCELLATION_STATUS_
#endif
#if OMP_45_ENABLED
#define FTN_GET_MAX_TASK_PRIORITY OMP_GET_MAX_TASK_PRIORITY_
#define FTN_GET_NUM_PLACES OMP_GET_NUM_PLACES_
#define FTN_GET_PLACE_NUM_PROCS OMP_GET_PLACE_NUM_PROCS_
#define FTN_GET_PLACE_PROC_IDS OMP_GET_PLACE_PROC_IDS_
#define FTN_GET_PLACE_NUM OMP_GET_PLACE_NUM_
#define FTN_GET_PARTITION_NUM_PLACES OMP_GET_PARTITION_NUM_PLACES_
#define FTN_GET_PARTITION_PLACE_NUMS OMP_GET_PARTITION_PLACE_NUMS_
#define FTN_GET_INITIAL_DEVICE OMP_GET_INITIAL_DEVICE_
#ifdef KMP_STUB
#define FTN_TARGET_ALLOC OMP_TARGET_ALLOC_
#define FTN_TARGET_FREE OMP_TARGET_FREE_
#define FTN_TARGET_IS_PRESENT OMP_TARGET_IS_PRESENT_
#define FTN_TARGET_MEMCPY OMP_TARGET_MEMCPY_
#define FTN_TARGET_MEMCPY_RECT OMP_TARGET_MEMCPY_RECT_
#define FTN_TARGET_ASSOCIATE_PTR OMP_TARGET_ASSOCIATE_PTR_
#define FTN_TARGET_DISASSOCIATE_PTR OMP_TARGET_DISASSOCIATE_PTR_
#endif
#endif
#if OMP_50_ENABLED
#define FTN_CONTROL_TOOL OMP_CONTROL_TOOL_
#define FTN_SET_DEFAULT_ALLOCATOR OMP_SET_DEFAULT_ALLOCATOR_
#define FTN_GET_DEFAULT_ALLOCATOR OMP_GET_DEFAULT_ALLOCATOR_
#define FTN_ALLOC OMP_ALLOC_
#define FTN_FREE OMP_FREE_
#define FTN_GET_DEVICE_NUM OMP_GET_DEVICE_NUM_
#define FTN_SET_AFFINITY_FORMAT OMP_SET_AFFINITY_FORMAT_
#define FTN_GET_AFFINITY_FORMAT OMP_GET_AFFINITY_FORMAT_
#define FTN_DISPLAY_AFFINITY OMP_DISPLAY_AFFINITY_
#define FTN_CAPTURE_AFFINITY OMP_CAPTURE_AFFINITY_
#endif
#endif /* KMP_FTN_UAPPEND */
/* -------------------------- GOMP API NAMES ------------------------ */
// All GOMP_1.0 symbols
#define KMP_API_NAME_GOMP_ATOMIC_END GOMP_atomic_end
#define KMP_API_NAME_GOMP_ATOMIC_START GOMP_atomic_start
#define KMP_API_NAME_GOMP_BARRIER GOMP_barrier
#define KMP_API_NAME_GOMP_CRITICAL_END GOMP_critical_end
#define KMP_API_NAME_GOMP_CRITICAL_NAME_END GOMP_critical_name_end
#define KMP_API_NAME_GOMP_CRITICAL_NAME_START GOMP_critical_name_start
#define KMP_API_NAME_GOMP_CRITICAL_START GOMP_critical_start
#define KMP_API_NAME_GOMP_LOOP_DYNAMIC_NEXT GOMP_loop_dynamic_next
#define KMP_API_NAME_GOMP_LOOP_DYNAMIC_START GOMP_loop_dynamic_start
#define KMP_API_NAME_GOMP_LOOP_END GOMP_loop_end
#define KMP_API_NAME_GOMP_LOOP_END_NOWAIT GOMP_loop_end_nowait
#define KMP_API_NAME_GOMP_LOOP_GUIDED_NEXT GOMP_loop_guided_next
#define KMP_API_NAME_GOMP_LOOP_GUIDED_START GOMP_loop_guided_start
#define KMP_API_NAME_GOMP_LOOP_ORDERED_DYNAMIC_NEXT \
GOMP_loop_ordered_dynamic_next
#define KMP_API_NAME_GOMP_LOOP_ORDERED_DYNAMIC_START \
GOMP_loop_ordered_dynamic_start
#define KMP_API_NAME_GOMP_LOOP_ORDERED_GUIDED_NEXT GOMP_loop_ordered_guided_next
#define KMP_API_NAME_GOMP_LOOP_ORDERED_GUIDED_START \
GOMP_loop_ordered_guided_start
#define KMP_API_NAME_GOMP_LOOP_ORDERED_RUNTIME_NEXT \
GOMP_loop_ordered_runtime_next
#define KMP_API_NAME_GOMP_LOOP_ORDERED_RUNTIME_START \
GOMP_loop_ordered_runtime_start
#define KMP_API_NAME_GOMP_LOOP_ORDERED_STATIC_NEXT GOMP_loop_ordered_static_next
#define KMP_API_NAME_GOMP_LOOP_ORDERED_STATIC_START \
GOMP_loop_ordered_static_start
#define KMP_API_NAME_GOMP_LOOP_RUNTIME_NEXT GOMP_loop_runtime_next
#define KMP_API_NAME_GOMP_LOOP_RUNTIME_START GOMP_loop_runtime_start
#define KMP_API_NAME_GOMP_LOOP_STATIC_NEXT GOMP_loop_static_next
#define KMP_API_NAME_GOMP_LOOP_STATIC_START GOMP_loop_static_start
#define KMP_API_NAME_GOMP_ORDERED_END GOMP_ordered_end
#define KMP_API_NAME_GOMP_ORDERED_START GOMP_ordered_start
#define KMP_API_NAME_GOMP_PARALLEL_END GOMP_parallel_end
#define KMP_API_NAME_GOMP_PARALLEL_LOOP_DYNAMIC_START \
GOMP_parallel_loop_dynamic_start
#define KMP_API_NAME_GOMP_PARALLEL_LOOP_GUIDED_START \
GOMP_parallel_loop_guided_start
#define KMP_API_NAME_GOMP_PARALLEL_LOOP_RUNTIME_START \
GOMP_parallel_loop_runtime_start
#define KMP_API_NAME_GOMP_PARALLEL_LOOP_STATIC_START \
GOMP_parallel_loop_static_start
#define KMP_API_NAME_GOMP_PARALLEL_SECTIONS_START GOMP_parallel_sections_start
#define KMP_API_NAME_GOMP_PARALLEL_START GOMP_parallel_start
#define KMP_API_NAME_GOMP_SECTIONS_END GOMP_sections_end
#define KMP_API_NAME_GOMP_SECTIONS_END_NOWAIT GOMP_sections_end_nowait
#define KMP_API_NAME_GOMP_SECTIONS_NEXT GOMP_sections_next
#define KMP_API_NAME_GOMP_SECTIONS_START GOMP_sections_start
#define KMP_API_NAME_GOMP_SINGLE_COPY_END GOMP_single_copy_end
#define KMP_API_NAME_GOMP_SINGLE_COPY_START GOMP_single_copy_start
#define KMP_API_NAME_GOMP_SINGLE_START GOMP_single_start
// All GOMP_2.0 symbols
#define KMP_API_NAME_GOMP_TASK GOMP_task
#define KMP_API_NAME_GOMP_TASKWAIT GOMP_taskwait
#define KMP_API_NAME_GOMP_LOOP_ULL_DYNAMIC_NEXT GOMP_loop_ull_dynamic_next
#define KMP_API_NAME_GOMP_LOOP_ULL_DYNAMIC_START GOMP_loop_ull_dynamic_start
#define KMP_API_NAME_GOMP_LOOP_ULL_GUIDED_NEXT GOMP_loop_ull_guided_next
#define KMP_API_NAME_GOMP_LOOP_ULL_GUIDED_START GOMP_loop_ull_guided_start
#define KMP_API_NAME_GOMP_LOOP_ULL_ORDERED_DYNAMIC_NEXT \
GOMP_loop_ull_ordered_dynamic_next
#define KMP_API_NAME_GOMP_LOOP_ULL_ORDERED_DYNAMIC_START \
GOMP_loop_ull_ordered_dynamic_start
#define KMP_API_NAME_GOMP_LOOP_ULL_ORDERED_GUIDED_NEXT \
GOMP_loop_ull_ordered_guided_next
#define KMP_API_NAME_GOMP_LOOP_ULL_ORDERED_GUIDED_START \
GOMP_loop_ull_ordered_guided_start
#define KMP_API_NAME_GOMP_LOOP_ULL_ORDERED_RUNTIME_NEXT \
GOMP_loop_ull_ordered_runtime_next
#define KMP_API_NAME_GOMP_LOOP_ULL_ORDERED_RUNTIME_START \
GOMP_loop_ull_ordered_runtime_start
#define KMP_API_NAME_GOMP_LOOP_ULL_ORDERED_STATIC_NEXT \
GOMP_loop_ull_ordered_static_next
#define KMP_API_NAME_GOMP_LOOP_ULL_ORDERED_STATIC_START \
GOMP_loop_ull_ordered_static_start
#define KMP_API_NAME_GOMP_LOOP_ULL_RUNTIME_NEXT GOMP_loop_ull_runtime_next
#define KMP_API_NAME_GOMP_LOOP_ULL_RUNTIME_START GOMP_loop_ull_runtime_start
#define KMP_API_NAME_GOMP_LOOP_ULL_STATIC_NEXT GOMP_loop_ull_static_next
#define KMP_API_NAME_GOMP_LOOP_ULL_STATIC_START GOMP_loop_ull_static_start
// All GOMP_3.0 symbols
#define KMP_API_NAME_GOMP_TASKYIELD GOMP_taskyield
// All GOMP_4.0 symbols
// TODO: As of 2013-10-14, none of the GOMP_4.0 functions are implemented in
// libomp
#define KMP_API_NAME_GOMP_BARRIER_CANCEL GOMP_barrier_cancel
#define KMP_API_NAME_GOMP_CANCEL GOMP_cancel
#define KMP_API_NAME_GOMP_CANCELLATION_POINT GOMP_cancellation_point
#define KMP_API_NAME_GOMP_LOOP_END_CANCEL GOMP_loop_end_cancel
#define KMP_API_NAME_GOMP_PARALLEL_LOOP_DYNAMIC GOMP_parallel_loop_dynamic
#define KMP_API_NAME_GOMP_PARALLEL_LOOP_GUIDED GOMP_parallel_loop_guided
#define KMP_API_NAME_GOMP_PARALLEL_LOOP_RUNTIME GOMP_parallel_loop_runtime
#define KMP_API_NAME_GOMP_PARALLEL_LOOP_STATIC GOMP_parallel_loop_static
#define KMP_API_NAME_GOMP_PARALLEL_SECTIONS GOMP_parallel_sections
#define KMP_API_NAME_GOMP_PARALLEL GOMP_parallel
#define KMP_API_NAME_GOMP_SECTIONS_END_CANCEL GOMP_sections_end_cancel
#define KMP_API_NAME_GOMP_TASKGROUP_START GOMP_taskgroup_start
#define KMP_API_NAME_GOMP_TASKGROUP_END GOMP_taskgroup_end
/* Target functions should be taken care of by liboffload */
#define KMP_API_NAME_GOMP_TARGET GOMP_target
#define KMP_API_NAME_GOMP_TARGET_DATA GOMP_target_data
#define KMP_API_NAME_GOMP_TARGET_END_DATA GOMP_target_end_data
#define KMP_API_NAME_GOMP_TARGET_UPDATE GOMP_target_update
#define KMP_API_NAME_GOMP_TEAMS GOMP_teams
// All GOMP_4.5 symbols
#define KMP_API_NAME_GOMP_TASKLOOP GOMP_taskloop
#define KMP_API_NAME_GOMP_TASKLOOP_ULL GOMP_taskloop_ull
#define KMP_API_NAME_GOMP_DOACROSS_POST GOMP_doacross_post
#define KMP_API_NAME_GOMP_DOACROSS_WAIT GOMP_doacross_wait
#define KMP_API_NAME_GOMP_LOOP_DOACROSS_STATIC_START \
GOMP_loop_doacross_static_start
#define KMP_API_NAME_GOMP_LOOP_DOACROSS_DYNAMIC_START \
GOMP_loop_doacross_dynamic_start
#define KMP_API_NAME_GOMP_LOOP_DOACROSS_GUIDED_START \
GOMP_loop_doacross_guided_start
#define KMP_API_NAME_GOMP_LOOP_DOACROSS_RUNTIME_START \
GOMP_loop_doacross_runtime_start
#define KMP_API_NAME_GOMP_DOACROSS_ULL_POST GOMP_doacross_ull_post
#define KMP_API_NAME_GOMP_DOACROSS_ULL_WAIT GOMP_doacross_ull_wait
#define KMP_API_NAME_GOMP_LOOP_ULL_DOACROSS_STATIC_START \
GOMP_loop_ull_doacross_static_start
#define KMP_API_NAME_GOMP_LOOP_ULL_DOACROSS_DYNAMIC_START \
GOMP_loop_ull_doacross_dynamic_start
#define KMP_API_NAME_GOMP_LOOP_ULL_DOACROSS_GUIDED_START \
GOMP_loop_ull_doacross_guided_start
#define KMP_API_NAME_GOMP_LOOP_ULL_DOACROSS_RUNTIME_START \
GOMP_loop_ull_doacross_runtime_start
#endif /* KMP_FTN_OS_H */


@@ -0,0 +1,33 @@
/*
* kmp_ftn_stdcall.cpp -- Fortran __stdcall linkage support for OpenMP.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp.h"
// Note: This string is not printed when KMP_VERSION=1.
char const __kmp_version_ftnstdcall[] =
KMP_VERSION_PREFIX "Fortran __stdcall OMP support: "
#ifdef USE_FTN_STDCALL
"yes";
#else
"no";
#endif
#ifdef USE_FTN_STDCALL
#define FTN_STDCALL KMP_STDCALL
#define KMP_FTN_ENTRIES USE_FTN_STDCALL
#include "kmp_ftn_entry.h"
#include "kmp_ftn_os.h"
#endif /* USE_FTN_STDCALL */

runtime/src/kmp_global.cpp (537 lines) Normal file

@@ -0,0 +1,537 @@
/*
* kmp_global.cpp -- KPTS global variables for runtime support library
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp.h"
#include "kmp_affinity.h"
#if KMP_USE_HIER_SCHED
#include "kmp_dispatch_hier.h"
#endif
kmp_key_t __kmp_gtid_threadprivate_key;
#if KMP_ARCH_X86 || KMP_ARCH_X86_64
kmp_cpuinfo_t __kmp_cpuinfo = {0}; // Not initialized
#endif
#if KMP_STATS_ENABLED
#include "kmp_stats.h"
// lock for modifying the global __kmp_stats_list
kmp_tas_lock_t __kmp_stats_lock;
// global list of per thread stats, the head is a sentinel node which
// accumulates all stats produced before __kmp_create_worker is called.
kmp_stats_list *__kmp_stats_list;
// thread local pointer to stats node within list
KMP_THREAD_LOCAL kmp_stats_list *__kmp_stats_thread_ptr = NULL;
// gives reference tick for all events (considered the 0 tick)
tsc_tick_count __kmp_stats_start_time;
#endif
/* ----------------------------------------------------- */
/* INITIALIZATION VARIABLES */
/* they are synchronized for write during init, but read anytime */
volatile int __kmp_init_serial = FALSE;
volatile int __kmp_init_gtid = FALSE;
volatile int __kmp_init_common = FALSE;
volatile int __kmp_init_middle = FALSE;
volatile int __kmp_init_parallel = FALSE;
#if KMP_USE_MONITOR
volatile int __kmp_init_monitor =
0; /* 1 - launched, 2 - actually started (Windows* OS only) */
#endif
volatile int __kmp_init_user_locks = FALSE;
/* list of address of allocated caches for commons */
kmp_cached_addr_t *__kmp_threadpriv_cache_list = NULL;
int __kmp_init_counter = 0;
int __kmp_root_counter = 0;
int __kmp_version = 0;
std::atomic<kmp_int32> __kmp_team_counter = ATOMIC_VAR_INIT(0);
std::atomic<kmp_int32> __kmp_task_counter = ATOMIC_VAR_INIT(0);
unsigned int __kmp_init_wait =
KMP_DEFAULT_INIT_WAIT; /* initial number of spin-tests */
unsigned int __kmp_next_wait =
    KMP_DEFAULT_NEXT_WAIT; /* subsequent number of spin-tests */
size_t __kmp_stksize = KMP_DEFAULT_STKSIZE;
#if KMP_USE_MONITOR
size_t __kmp_monitor_stksize = 0; // auto adjust
#endif
size_t __kmp_stkoffset = KMP_DEFAULT_STKOFFSET;
int __kmp_stkpadding = KMP_MIN_STKPADDING;
size_t __kmp_malloc_pool_incr = KMP_DEFAULT_MALLOC_POOL_INCR;
// Barrier method defaults, settings, and strings.
// branch factor = 2^branch_bits (only relevant for tree & hyper barrier types)
kmp_uint32 __kmp_barrier_gather_bb_dflt = 2;
/* branch_factor = 4 */ /* hyper2: C78980 */
kmp_uint32 __kmp_barrier_release_bb_dflt = 2;
/* branch_factor = 4 */ /* hyper2: C78980 */
kmp_bar_pat_e __kmp_barrier_gather_pat_dflt = bp_hyper_bar;
/* hyper2: C78980 */
kmp_bar_pat_e __kmp_barrier_release_pat_dflt = bp_hyper_bar;
/* hyper2: C78980 */
kmp_uint32 __kmp_barrier_gather_branch_bits[bs_last_barrier] = {0};
kmp_uint32 __kmp_barrier_release_branch_bits[bs_last_barrier] = {0};
kmp_bar_pat_e __kmp_barrier_gather_pattern[bs_last_barrier] = {bp_linear_bar};
kmp_bar_pat_e __kmp_barrier_release_pattern[bs_last_barrier] = {bp_linear_bar};
char const *__kmp_barrier_branch_bit_env_name[bs_last_barrier] = {
"KMP_PLAIN_BARRIER", "KMP_FORKJOIN_BARRIER"
#if KMP_FAST_REDUCTION_BARRIER
,
"KMP_REDUCTION_BARRIER"
#endif // KMP_FAST_REDUCTION_BARRIER
};
char const *__kmp_barrier_pattern_env_name[bs_last_barrier] = {
"KMP_PLAIN_BARRIER_PATTERN", "KMP_FORKJOIN_BARRIER_PATTERN"
#if KMP_FAST_REDUCTION_BARRIER
,
"KMP_REDUCTION_BARRIER_PATTERN"
#endif // KMP_FAST_REDUCTION_BARRIER
};
char const *__kmp_barrier_type_name[bs_last_barrier] = {"plain", "forkjoin"
#if KMP_FAST_REDUCTION_BARRIER
,
"reduction"
#endif // KMP_FAST_REDUCTION_BARRIER
};
char const *__kmp_barrier_pattern_name[bp_last_bar] = {"linear", "tree",
"hyper", "hierarchical"};
int __kmp_allThreadsSpecified = 0;
size_t __kmp_align_alloc = CACHE_LINE;
int __kmp_generate_warnings = kmp_warnings_low;
int __kmp_reserve_warn = 0;
int __kmp_xproc = 0;
int __kmp_avail_proc = 0;
size_t __kmp_sys_min_stksize = KMP_MIN_STKSIZE;
int __kmp_sys_max_nth = KMP_MAX_NTH;
int __kmp_max_nth = 0;
int __kmp_cg_max_nth = 0;
int __kmp_teams_max_nth = 0;
int __kmp_threads_capacity = 0;
int __kmp_dflt_team_nth = 0;
int __kmp_dflt_team_nth_ub = 0;
int __kmp_tp_capacity = 0;
int __kmp_tp_cached = 0;
int __kmp_dflt_nested = FALSE;
int __kmp_dispatch_num_buffers = KMP_DFLT_DISP_NUM_BUFF;
int __kmp_dflt_max_active_levels =
KMP_MAX_ACTIVE_LEVELS_LIMIT; /* max_active_levels limit */
#if KMP_NESTED_HOT_TEAMS
int __kmp_hot_teams_mode = 0; /* 0 - free extra threads when reduced */
/* 1 - keep extra threads when reduced */
int __kmp_hot_teams_max_level = 1; /* nesting level of hot teams */
#endif
enum library_type __kmp_library = library_none;
enum sched_type __kmp_sched =
kmp_sch_default; /* scheduling method for runtime scheduling */
enum sched_type __kmp_static =
kmp_sch_static_greedy; /* default static scheduling method */
enum sched_type __kmp_guided =
kmp_sch_guided_iterative_chunked; /* default guided scheduling method */
enum sched_type __kmp_auto =
kmp_sch_guided_analytical_chunked; /* default auto scheduling method */
#if KMP_USE_HIER_SCHED
int __kmp_dispatch_hand_threading = 0;
int __kmp_hier_max_units[kmp_hier_layer_e::LAYER_LAST + 1];
int __kmp_hier_threads_per[kmp_hier_layer_e::LAYER_LAST + 1];
kmp_hier_sched_env_t __kmp_hier_scheds = {0, 0, NULL, NULL, NULL};
#endif
int __kmp_dflt_blocktime = KMP_DEFAULT_BLOCKTIME;
#if KMP_USE_MONITOR
int __kmp_monitor_wakeups = KMP_MIN_MONITOR_WAKEUPS;
int __kmp_bt_intervals = KMP_INTERVALS_FROM_BLOCKTIME(KMP_DEFAULT_BLOCKTIME,
KMP_MIN_MONITOR_WAKEUPS);
#endif
#ifdef KMP_ADJUST_BLOCKTIME
int __kmp_zero_bt = FALSE;
#endif /* KMP_ADJUST_BLOCKTIME */
#ifdef KMP_DFLT_NTH_CORES
int __kmp_ncores = 0;
#endif
int __kmp_chunk = 0;
int __kmp_abort_delay = 0;
#if KMP_OS_LINUX && defined(KMP_TDATA_GTID)
int __kmp_gtid_mode = 3; /* use __declspec(thread) TLS to store gtid */
int __kmp_adjust_gtid_mode = FALSE;
#elif KMP_OS_WINDOWS
int __kmp_gtid_mode = 2; /* use TLS functions to store gtid */
int __kmp_adjust_gtid_mode = FALSE;
#else
int __kmp_gtid_mode = 0; /* select method to get gtid based on #threads */
int __kmp_adjust_gtid_mode = TRUE;
#endif /* KMP_OS_LINUX && defined(KMP_TDATA_GTID) */
#ifdef KMP_TDATA_GTID
KMP_THREAD_LOCAL int __kmp_gtid = KMP_GTID_DNE;
#endif /* KMP_TDATA_GTID */
int __kmp_tls_gtid_min = INT_MAX;
int __kmp_foreign_tp = TRUE;
#if KMP_ARCH_X86 || KMP_ARCH_X86_64
int __kmp_inherit_fp_control = TRUE;
kmp_int16 __kmp_init_x87_fpu_control_word = 0;
kmp_uint32 __kmp_init_mxcsr = 0;
#endif /* KMP_ARCH_X86 || KMP_ARCH_X86_64 */
#ifdef USE_LOAD_BALANCE
double __kmp_load_balance_interval = 1.0;
#endif /* USE_LOAD_BALANCE */
kmp_nested_nthreads_t __kmp_nested_nth = {NULL, 0, 0};
#if KMP_USE_ADAPTIVE_LOCKS
kmp_adaptive_backoff_params_t __kmp_adaptive_backoff_params = {
1, 1024}; // TODO: tune it!
#if KMP_DEBUG_ADAPTIVE_LOCKS
const char *__kmp_speculative_statsfile = "-";
#endif
#endif // KMP_USE_ADAPTIVE_LOCKS
#if OMP_40_ENABLED
int __kmp_display_env = FALSE;
int __kmp_display_env_verbose = FALSE;
int __kmp_omp_cancellation = FALSE;
#endif
/* map OMP 3.0 schedule types with our internal schedule types */
enum sched_type __kmp_sch_map[kmp_sched_upper - kmp_sched_lower_ext +
kmp_sched_upper_std - kmp_sched_lower - 2] = {
kmp_sch_static_chunked, // ==> kmp_sched_static = 1
kmp_sch_dynamic_chunked, // ==> kmp_sched_dynamic = 2
kmp_sch_guided_chunked, // ==> kmp_sched_guided = 3
kmp_sch_auto, // ==> kmp_sched_auto = 4
kmp_sch_trapezoidal // ==> kmp_sched_trapezoidal = 101
// will likely not be used; introduced here just to debug the code
// of public Intel extension schedules
};
#if KMP_OS_LINUX
enum clock_function_type __kmp_clock_function;
int __kmp_clock_function_param;
#endif /* KMP_OS_LINUX */
#if KMP_MIC_SUPPORTED
enum mic_type __kmp_mic_type = non_mic;
#endif
#if KMP_AFFINITY_SUPPORTED
KMPAffinity *__kmp_affinity_dispatch = NULL;
#if KMP_USE_HWLOC
int __kmp_hwloc_error = FALSE;
hwloc_topology_t __kmp_hwloc_topology = NULL;
int __kmp_numa_detected = FALSE;
int __kmp_tile_depth = 0;
#endif
#if KMP_OS_WINDOWS
#if KMP_GROUP_AFFINITY
int __kmp_num_proc_groups = 1;
#endif /* KMP_GROUP_AFFINITY */
kmp_GetActiveProcessorCount_t __kmp_GetActiveProcessorCount = NULL;
kmp_GetActiveProcessorGroupCount_t __kmp_GetActiveProcessorGroupCount = NULL;
kmp_GetThreadGroupAffinity_t __kmp_GetThreadGroupAffinity = NULL;
kmp_SetThreadGroupAffinity_t __kmp_SetThreadGroupAffinity = NULL;
#endif /* KMP_OS_WINDOWS */
size_t __kmp_affin_mask_size = 0;
enum affinity_type __kmp_affinity_type = affinity_default;
enum affinity_gran __kmp_affinity_gran = affinity_gran_default;
int __kmp_affinity_gran_levels = -1;
int __kmp_affinity_dups = TRUE;
enum affinity_top_method __kmp_affinity_top_method =
affinity_top_method_default;
int __kmp_affinity_compact = 0;
int __kmp_affinity_offset = 0;
int __kmp_affinity_verbose = FALSE;
int __kmp_affinity_warnings = TRUE;
int __kmp_affinity_respect_mask = affinity_respect_mask_default;
char *__kmp_affinity_proclist = NULL;
kmp_affin_mask_t *__kmp_affinity_masks = NULL;
unsigned __kmp_affinity_num_masks = 0;
char *__kmp_cpuinfo_file = NULL;
#endif /* KMP_AFFINITY_SUPPORTED */
#if OMP_40_ENABLED
kmp_nested_proc_bind_t __kmp_nested_proc_bind = {NULL, 0, 0};
int __kmp_affinity_num_places = 0;
#endif
#if OMP_50_ENABLED
int __kmp_display_affinity = FALSE;
char *__kmp_affinity_format = NULL;
#endif // OMP_50_ENABLED
kmp_hws_item_t __kmp_hws_socket = {0, 0};
kmp_hws_item_t __kmp_hws_node = {0, 0};
kmp_hws_item_t __kmp_hws_tile = {0, 0};
kmp_hws_item_t __kmp_hws_core = {0, 0};
kmp_hws_item_t __kmp_hws_proc = {0, 0};
int __kmp_hws_requested = 0;
int __kmp_hws_abs_flag = 0; // absolute or per-item number requested
#if OMP_40_ENABLED
kmp_int32 __kmp_default_device = 0;
#endif
kmp_tasking_mode_t __kmp_tasking_mode = tskm_task_teams;
#if OMP_45_ENABLED
kmp_int32 __kmp_max_task_priority = 0;
kmp_uint64 __kmp_taskloop_min_tasks = 0;
#endif
#if OMP_50_ENABLED
int __kmp_memkind_available = 0;
int __kmp_hbw_mem_available = 0;
const omp_allocator_t *OMP_NULL_ALLOCATOR = NULL;
const omp_allocator_t *omp_default_mem_alloc = (const omp_allocator_t *)1;
const omp_allocator_t *omp_large_cap_mem_alloc = (const omp_allocator_t *)2;
const omp_allocator_t *omp_const_mem_alloc = (const omp_allocator_t *)3;
const omp_allocator_t *omp_high_bw_mem_alloc = (const omp_allocator_t *)4;
const omp_allocator_t *omp_low_lat_mem_alloc = (const omp_allocator_t *)5;
const omp_allocator_t *omp_cgroup_mem_alloc = (const omp_allocator_t *)6;
const omp_allocator_t *omp_pteam_mem_alloc = (const omp_allocator_t *)7;
const omp_allocator_t *omp_thread_mem_alloc = (const omp_allocator_t *)8;
const omp_allocator_t *__kmp_def_allocator = omp_default_mem_alloc;
#endif
/* This check ensures that the compiler is passing the correct data type for the
flags formal parameter of the function kmpc_omp_task_alloc(). If the type is
not a 4-byte type, then give an error message about a non-positive length
array pointing here. If that happens, the kmp_tasking_flags_t structure must
be redefined to have exactly 32 bits. */
KMP_BUILD_ASSERT(sizeof(kmp_tasking_flags_t) == 4);
int __kmp_task_stealing_constraint = 1; /* Constrain task stealing by default */
#ifdef DEBUG_SUSPEND
int __kmp_suspend_count = 0;
#endif
int __kmp_settings = FALSE;
int __kmp_duplicate_library_ok = 0;
#if USE_ITT_BUILD
int __kmp_forkjoin_frames = 1;
int __kmp_forkjoin_frames_mode = 3;
#endif
PACKED_REDUCTION_METHOD_T __kmp_force_reduction_method =
reduction_method_not_defined;
int __kmp_determ_red = FALSE;
#ifdef KMP_DEBUG
int kmp_a_debug = 0;
int kmp_b_debug = 0;
int kmp_c_debug = 0;
int kmp_d_debug = 0;
int kmp_e_debug = 0;
int kmp_f_debug = 0;
int kmp_diag = 0;
#endif
/* For debug information logging using rotating buffer */
int __kmp_debug_buf =
FALSE; /* TRUE means use buffer, FALSE means print to stderr */
int __kmp_debug_buf_lines =
KMP_DEBUG_BUF_LINES_INIT; /* Lines of debug stored in buffer */
int __kmp_debug_buf_chars =
KMP_DEBUG_BUF_CHARS_INIT; /* Characters allowed per line in buffer */
int __kmp_debug_buf_atomic =
FALSE; /* TRUE means use atomic update of buffer entry pointer */
char *__kmp_debug_buffer = NULL; /* Debug buffer itself */
std::atomic<int> __kmp_debug_count =
ATOMIC_VAR_INIT(0); /* number of lines printed in buffer so far */
int __kmp_debug_buf_warn_chars =
0; /* Keep track of char increase recommended in warnings */
/* end rotating debug buffer */
#ifdef KMP_DEBUG
int __kmp_par_range; /* +1 => only go par for constructs in range */
/* -1 => only go par for constructs outside range */
char __kmp_par_range_routine[KMP_PAR_RANGE_ROUTINE_LEN] = {'\0'};
char __kmp_par_range_filename[KMP_PAR_RANGE_FILENAME_LEN] = {'\0'};
int __kmp_par_range_lb = 0;
int __kmp_par_range_ub = INT_MAX;
#endif /* KMP_DEBUG */
/* For printing out dynamic storage map for threads and teams */
int __kmp_storage_map =
FALSE; /* True means print storage map for threads and teams */
int __kmp_storage_map_verbose =
FALSE; /* True means storage map includes placement info */
int __kmp_storage_map_verbose_specified = FALSE;
/* Initialize the library data structures when we fork a child process, defaults
* to TRUE */
int __kmp_need_register_atfork =
TRUE; /* At initialization, call pthread_atfork to install fork handler */
int __kmp_need_register_atfork_specified = TRUE;
int __kmp_env_stksize = FALSE; /* KMP_STACKSIZE specified? */
int __kmp_env_blocktime = FALSE; /* KMP_BLOCKTIME specified? */
int __kmp_env_checks = FALSE; /* KMP_CHECKS specified? */
int __kmp_env_consistency_check = FALSE; /* KMP_CONSISTENCY_CHECK specified? */
kmp_uint32 __kmp_yield_init = KMP_INIT_WAIT;
kmp_uint32 __kmp_yield_next = KMP_NEXT_WAIT;
#if KMP_USE_MONITOR
kmp_uint32 __kmp_yielding_on = 1;
#endif
#if KMP_OS_CNK
kmp_uint32 __kmp_yield_cycle = 0;
#else
kmp_uint32 __kmp_yield_cycle = 1; /* Yield-cycle is on by default */
#endif
kmp_int32 __kmp_yield_on_count =
10; /* By default, yielding is on for 10 monitor periods. */
kmp_int32 __kmp_yield_off_count =
1; /* By default, yielding is off for 1 monitor period. */
/* ------------------------------------------------------ */
/* STATE mostly synchronized with global lock */
/* data rarely written by masters, read often by workers */
/* TODO: None of this global padding stuff works consistently because the order
of declaration is not necessarily correlated to storage order. To fix this,
all the important globals must be put in a big structure instead. */
KMP_ALIGN_CACHE
kmp_info_t **__kmp_threads = NULL;
kmp_root_t **__kmp_root = NULL;
/* data read/written to often by masters */
KMP_ALIGN_CACHE
volatile int __kmp_nth = 0;
volatile int __kmp_all_nth = 0;
int __kmp_thread_pool_nth = 0;
volatile kmp_info_t *__kmp_thread_pool = NULL;
volatile kmp_team_t *__kmp_team_pool = NULL;
KMP_ALIGN_CACHE
std::atomic<int> __kmp_thread_pool_active_nth = ATOMIC_VAR_INIT(0);
/* -------------------------------------------------
* GLOBAL/ROOT STATE */
KMP_ALIGN_CACHE
kmp_global_t __kmp_global = {{0}};
/* ----------------------------------------------- */
/* GLOBAL SYNCHRONIZATION LOCKS */
/* TODO verify the need for these locks and if they need to be global */
#if KMP_USE_INTERNODE_ALIGNMENT
/* Multinode systems have larger cache line granularity which can cause
* false sharing if the alignment is not large enough for these locks */
KMP_ALIGN_CACHE_INTERNODE
KMP_BOOTSTRAP_LOCK_INIT(__kmp_initz_lock); /* Control initializations */
KMP_ALIGN_CACHE_INTERNODE
KMP_BOOTSTRAP_LOCK_INIT(__kmp_forkjoin_lock); /* control fork/join access */
KMP_ALIGN_CACHE_INTERNODE
KMP_BOOTSTRAP_LOCK_INIT(__kmp_exit_lock); /* exit() is not always thread-safe */
#if KMP_USE_MONITOR
/* control monitor thread creation */
KMP_ALIGN_CACHE_INTERNODE
KMP_BOOTSTRAP_LOCK_INIT(__kmp_monitor_lock);
#endif
/* used for the hack to allow threadprivate cache and __kmp_threads expansion
to co-exist */
KMP_ALIGN_CACHE_INTERNODE
KMP_BOOTSTRAP_LOCK_INIT(__kmp_tp_cached_lock);
KMP_ALIGN_CACHE_INTERNODE
KMP_LOCK_INIT(__kmp_global_lock); /* Control OS/global access */
KMP_ALIGN_CACHE_INTERNODE
kmp_queuing_lock_t __kmp_dispatch_lock; /* Control dispatch access */
KMP_ALIGN_CACHE_INTERNODE
KMP_LOCK_INIT(__kmp_debug_lock); /* Control I/O access for KMP_DEBUG */
#else
KMP_ALIGN_CACHE
KMP_BOOTSTRAP_LOCK_INIT(__kmp_initz_lock); /* Control initializations */
KMP_BOOTSTRAP_LOCK_INIT(__kmp_forkjoin_lock); /* control fork/join access */
KMP_BOOTSTRAP_LOCK_INIT(__kmp_exit_lock); /* exit() is not always thread-safe */
#if KMP_USE_MONITOR
/* control monitor thread creation */
KMP_BOOTSTRAP_LOCK_INIT(__kmp_monitor_lock);
#endif
/* used for the hack to allow threadprivate cache and __kmp_threads expansion
to co-exist */
KMP_BOOTSTRAP_LOCK_INIT(__kmp_tp_cached_lock);
KMP_ALIGN(128)
KMP_LOCK_INIT(__kmp_global_lock); /* Control OS/global access */
KMP_ALIGN(128)
kmp_queuing_lock_t __kmp_dispatch_lock; /* Control dispatch access */
KMP_ALIGN(128)
KMP_LOCK_INIT(__kmp_debug_lock); /* Control I/O access for KMP_DEBUG */
#endif
/* ----------------------------------------------- */
#if KMP_HANDLE_SIGNALS
/* Signal handling is disabled by default, because it confuses users: in case of
sigsegv (or other trouble) in user code, the signal handler catches the signal,
which then "appears" in the monitor thread (when the monitor executes the
raise() function). Users see the signal in the monitor thread and blame the
OpenMP RTL.
Grant said signal handling was required on some older OSes (Irix?) supported by
KAI, because bad applications hung but did not abort. Currently it is not a
problem for Linux* OS, OS X* and Windows* OS.
Grant: Found new hangs for EL4, EL5, and a Fedora Core machine. So I'm
putting the default back for now to see if that fixes hangs on those
machines.
2010-04-13 Lev: It was a bug in the Fortran RTL. The Fortran RTL prints a kind
of stack backtrace when a program is aborting, but the code is not signal-safe.
When multiple signals are raised at the same time (which occurs in dynamic
negative tests because all the worker threads detect the same error), the
Fortran RTL may hang. The bug was finally fixed in the Fortran RTL library
provided by Steve R., and will be available soon. */
int __kmp_handle_signals = FALSE;
#endif
#ifdef DEBUG_SUSPEND
int get_suspend_count_(void) {
int count = __kmp_suspend_count;
__kmp_suspend_count = 0;
return count;
}
void set_suspend_count_(int *value) { __kmp_suspend_count = *value; }
#endif
// Symbols for MS mutual detection.
int _You_must_link_with_exactly_one_OpenMP_library = 1;
int _You_must_link_with_Intel_OpenMP_library = 1;
#if KMP_OS_WINDOWS && (KMP_VERSION_MAJOR > 4)
int _You_must_link_with_Microsoft_OpenMP_library = 1;
#endif
#if OMP_50_ENABLED
kmp_target_offload_kind_t __kmp_target_offload = tgt_default;
#endif
// end of file //

2000	runtime/src/kmp_gsupport.cpp	Normal file

File diff suppressed because it is too large

872	runtime/src/kmp_i18n.cpp	Normal file

@@ -0,0 +1,872 @@
/*
* kmp_i18n.cpp
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp_i18n.h"
#include "kmp.h"
#include "kmp_debug.h"
#include "kmp_io.h" // __kmp_printf.
#include "kmp_lock.h"
#include "kmp_os.h"
#include <errno.h>
#include <locale.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include "kmp_environment.h"
#include "kmp_i18n_default.inc"
#include "kmp_str.h"
#undef KMP_I18N_OK
#define get_section(id) ((id) >> 16)
#define get_number(id) ((id)&0xFFFF)
kmp_msg_t __kmp_msg_null = {kmp_mt_dummy, 0, NULL, 0};
static char const *no_message_available = "(No message available)";
static void __kmp_msg(kmp_msg_severity_t severity, kmp_msg_t message,
va_list ap);
enum kmp_i18n_cat_status {
KMP_I18N_CLOSED, // Not yet opened or closed.
KMP_I18N_OPENED, // Opened successfully, ready to use.
KMP_I18N_ABSENT // Opening failed, message catalog should not be used.
}; // enum kmp_i18n_cat_status
typedef enum kmp_i18n_cat_status kmp_i18n_cat_status_t;
static volatile kmp_i18n_cat_status_t status = KMP_I18N_CLOSED;
/* Message catalog is opened at first usage, so we have to synchronize opening
to avoid race and multiple openings.
Closing does not require synchronization, because catalog is closed very late
at library shutting down, when no other threads are alive. */
static void __kmp_i18n_do_catopen();
static kmp_bootstrap_lock_t lock = KMP_BOOTSTRAP_LOCK_INITIALIZER(lock);
// `lock' variable may be placed into __kmp_i18n_catopen function because it is
// used only by that function. But we are afraid a (buggy) compiler may treat
// it wrongly, so we put it outside of the function just in case.
void __kmp_i18n_catopen() {
if (status == KMP_I18N_CLOSED) {
__kmp_acquire_bootstrap_lock(&lock);
if (status == KMP_I18N_CLOSED) {
__kmp_i18n_do_catopen();
}
__kmp_release_bootstrap_lock(&lock);
}
} // func __kmp_i18n_catopen
/* Linux* OS and OS X* part */
#if KMP_OS_UNIX
#define KMP_I18N_OK
#include <nl_types.h>
#define KMP_I18N_NULLCAT ((nl_catd)(-1))
static nl_catd cat = KMP_I18N_NULLCAT; // !!! Shall it be volatile?
static char const *name =
(KMP_VERSION_MAJOR == 4 ? "libguide.cat" : "libomp.cat");
/* Useful links:
http://www.opengroup.org/onlinepubs/000095399/basedefs/xbd_chap08.html#tag_08_02
http://www.opengroup.org/onlinepubs/000095399/functions/catopen.html
http://www.opengroup.org/onlinepubs/000095399/functions/setlocale.html
*/
void __kmp_i18n_do_catopen() {
int english = 0;
char *lang = __kmp_env_get("LANG");
// TODO: What about LC_ALL or LC_MESSAGES?
KMP_DEBUG_ASSERT(status == KMP_I18N_CLOSED);
KMP_DEBUG_ASSERT(cat == KMP_I18N_NULLCAT);
english = lang == NULL || // In all these cases English language is used.
strcmp(lang, "") == 0 || strcmp(lang, " ") == 0 ||
// Workaround for Fortran RTL bug DPD200137873 "Fortran runtime
// resets LANG env var to space if it is not set".
strcmp(lang, "C") == 0 || strcmp(lang, "POSIX") == 0;
if (!english) { // English language is not yet detected, let us continue.
// Format of LANG is: [language[_territory][.codeset][@modifier]]
// Strip all parts except language.
char *tail = NULL;
__kmp_str_split(lang, '@', &lang, &tail);
__kmp_str_split(lang, '.', &lang, &tail);
__kmp_str_split(lang, '_', &lang, &tail);
english = (strcmp(lang, "en") == 0);
}
KMP_INTERNAL_FREE(lang);
// Do not try to open the English catalog because internal messages are an
// exact copy of the messages in the English catalog.
if (english) {
status = KMP_I18N_ABSENT; // mark catalog as absent so it will not
// be re-opened.
return;
}
cat = catopen(name, 0);
// TODO: Why do we pass 0 in flags?
status = (cat == KMP_I18N_NULLCAT ? KMP_I18N_ABSENT : KMP_I18N_OPENED);
if (status == KMP_I18N_ABSENT) {
if (__kmp_generate_warnings > kmp_warnings_low) {
// AC: only issue warning in case explicitly asked to
int error = errno; // Save errno immediately.
char *nlspath = __kmp_env_get("NLSPATH");
char *lang = __kmp_env_get("LANG");
// Infinite recursion will not occur -- status is KMP_I18N_ABSENT now, so
// __kmp_i18n_catgets() will not try to open catalog, but will return
// default message.
kmp_msg_t err_code = KMP_ERR(error);
__kmp_msg(kmp_ms_warning, KMP_MSG(CantOpenMessageCatalog, name), err_code,
KMP_HNT(CheckEnvVar, "NLSPATH", nlspath),
KMP_HNT(CheckEnvVar, "LANG", lang), __kmp_msg_null);
if (__kmp_generate_warnings == kmp_warnings_off) {
__kmp_str_free(&err_code.str);
}
KMP_INFORM(WillUseDefaultMessages);
KMP_INTERNAL_FREE(nlspath);
KMP_INTERNAL_FREE(lang);
}
} else { // status == KMP_I18N_OPENED
int section = get_section(kmp_i18n_prp_Version);
int number = get_number(kmp_i18n_prp_Version);
char const *expected = __kmp_i18n_default_table.sect[section].str[number];
// Expected version of the catalog.
kmp_str_buf_t version; // Actual version of the catalog.
__kmp_str_buf_init(&version);
__kmp_str_buf_print(&version, "%s", catgets(cat, section, number, NULL));
// String returned by catgets is invalid after closing catalog, so copy it.
if (strcmp(version.str, expected) != 0) {
__kmp_i18n_catclose(); // Close bad catalog.
status = KMP_I18N_ABSENT; // And mark it as absent.
if (__kmp_generate_warnings > kmp_warnings_low) {
// AC: only issue warning in case explicitly asked to
// And now print a warning using default messages.
char const *name = "NLSPATH";
char const *nlspath = __kmp_env_get(name);
__kmp_msg(kmp_ms_warning,
KMP_MSG(WrongMessageCatalog, name, version.str, expected),
KMP_HNT(CheckEnvVar, name, nlspath), __kmp_msg_null);
KMP_INFORM(WillUseDefaultMessages);
KMP_INTERNAL_FREE(CCAST(char *, nlspath));
} // __kmp_generate_warnings
}
__kmp_str_buf_free(&version);
}
} // func __kmp_i18n_do_catopen
void __kmp_i18n_catclose() {
if (status == KMP_I18N_OPENED) {
KMP_DEBUG_ASSERT(cat != KMP_I18N_NULLCAT);
catclose(cat);
cat = KMP_I18N_NULLCAT;
}
status = KMP_I18N_CLOSED;
} // func __kmp_i18n_catclose
char const *__kmp_i18n_catgets(kmp_i18n_id_t id) {
int section = get_section(id);
int number = get_number(id);
char const *message = NULL;
if (1 <= section && section <= __kmp_i18n_default_table.size) {
if (1 <= number && number <= __kmp_i18n_default_table.sect[section].size) {
if (status == KMP_I18N_CLOSED) {
__kmp_i18n_catopen();
}
if (status == KMP_I18N_OPENED) {
message = catgets(cat, section, number,
__kmp_i18n_default_table.sect[section].str[number]);
}
if (message == NULL) {
message = __kmp_i18n_default_table.sect[section].str[number];
}
}
}
if (message == NULL) {
message = no_message_available;
}
return message;
} // func __kmp_i18n_catgets
#endif // KMP_OS_UNIX
/* Windows* OS part. */
#if KMP_OS_WINDOWS
#define KMP_I18N_OK
#include "kmp_environment.h"
#include <windows.h>
#define KMP_I18N_NULLCAT NULL
static HMODULE cat = KMP_I18N_NULLCAT; // !!! Shall it be volatile?
static char const *name =
(KMP_VERSION_MAJOR == 4 ? "libguide40ui.dll" : "libompui.dll");
static kmp_i18n_table_t table = {0, NULL};
// Messages formatted by FormatMessage() should be freed, but the catgets()
// interface assumes the user will not free messages. So we cache all the
// retrieved messages in the table; they are freed at catclose().
static UINT const default_code_page = CP_OEMCP;
static UINT code_page = default_code_page;
static char const *___catgets(kmp_i18n_id_t id);
static UINT get_code_page();
static void kmp_i18n_table_free(kmp_i18n_table_t *table);
static UINT get_code_page() {
UINT cp = default_code_page;
char const *value = __kmp_env_get("KMP_CODEPAGE");
if (value != NULL) {
if (_stricmp(value, "ANSI") == 0) {
cp = CP_ACP;
} else if (_stricmp(value, "OEM") == 0) {
cp = CP_OEMCP;
} else if (_stricmp(value, "UTF-8") == 0 || _stricmp(value, "UTF8") == 0) {
cp = CP_UTF8;
} else if (_stricmp(value, "UTF-7") == 0 || _stricmp(value, "UTF7") == 0) {
cp = CP_UTF7;
} else {
// !!! TODO: Issue a warning?
}
}
KMP_INTERNAL_FREE((void *)value);
return cp;
} // func get_code_page
static void kmp_i18n_table_free(kmp_i18n_table_t *table) {
int s;
int m;
for (s = 0; s < table->size; ++s) {
for (m = 0; m < table->sect[s].size; ++m) {
// Free message.
KMP_INTERNAL_FREE((void *)table->sect[s].str[m]);
table->sect[s].str[m] = NULL;
}
table->sect[s].size = 0;
// Free section itself.
KMP_INTERNAL_FREE((void *)table->sect[s].str);
table->sect[s].str = NULL;
}
table->size = 0;
KMP_INTERNAL_FREE((void *)table->sect);
table->sect = NULL;
} // kmp_i18n_table_free
void __kmp_i18n_do_catopen() {
LCID locale_id = GetThreadLocale();
WORD lang_id = LANGIDFROMLCID(locale_id);
WORD primary_lang_id = PRIMARYLANGID(lang_id);
kmp_str_buf_t path;
KMP_DEBUG_ASSERT(status == KMP_I18N_CLOSED);
KMP_DEBUG_ASSERT(cat == KMP_I18N_NULLCAT);
__kmp_str_buf_init(&path);
// Do not try to open the English catalog because internal messages are an
// exact copy of the messages in the English catalog.
if (primary_lang_id == LANG_ENGLISH) {
status = KMP_I18N_ABSENT; // mark catalog as absent so it will not
// be re-opened.
goto end;
}
// Construct resource DLL name.
/* Simple LoadLibrary( name ) is not suitable due to security issue (see
http://www.microsoft.com/technet/security/advisory/2269637.mspx). We have
to specify full path to the message catalog. */
{
// Get handle of our DLL first.
HMODULE handle;
BOOL brc = GetModuleHandleEx(
GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS |
GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT,
reinterpret_cast<LPCSTR>(&__kmp_i18n_do_catopen), &handle);
if (!brc) { // Error occurred.
status = KMP_I18N_ABSENT; // mark catalog as absent so it will not be
// re-opened.
goto end;
// TODO: Enable multiple messages (KMP_MSG) to be passed to __kmp_msg; and
// print a proper warning.
}
// Now get the path to our DLL.
for (;;) {
DWORD drc = GetModuleFileName(handle, path.str, path.size);
if (drc == 0) { // Error occurred.
status = KMP_I18N_ABSENT;
goto end;
}
if (drc < path.size) {
path.used = drc;
break;
}
__kmp_str_buf_reserve(&path, path.size * 2);
}
// Now construct the name of message catalog.
kmp_str_fname fname;
__kmp_str_fname_init(&fname, path.str);
__kmp_str_buf_clear(&path);
__kmp_str_buf_print(&path, "%s%lu/%s", fname.dir,
(unsigned long)(locale_id), name);
__kmp_str_fname_free(&fname);
}
// For security reasons, use LoadLibraryEx() and load message catalog as a
// data file.
cat = LoadLibraryEx(path.str, NULL, LOAD_LIBRARY_AS_DATAFILE);
status = (cat == KMP_I18N_NULLCAT ? KMP_I18N_ABSENT : KMP_I18N_OPENED);
if (status == KMP_I18N_ABSENT) {
if (__kmp_generate_warnings > kmp_warnings_low) {
// AC: only issue warning in case explicitly asked to
DWORD error = GetLastError();
// Infinite recursion will not occur -- status is KMP_I18N_ABSENT now, so
// __kmp_i18n_catgets() will not try to open catalog but will return
// default message.
/* If message catalog for another architecture found (e.g. OpenMP RTL for
IA-32 architecture opens libompui.dll for Intel(R) 64) Windows* OS
returns error 193 (ERROR_BAD_EXE_FORMAT). However, FormatMessage fails
to return a message for this error, so user will see:
OMP: Warning #2: Cannot open message catalog "1041\libompui.dll":
OMP: System error #193: (No system error message available)
OMP: Info #3: Default messages will be used.
Issue a hint in this case so the cause of trouble is more understandable. */
kmp_msg_t err_code = KMP_SYSERRCODE(error);
__kmp_msg(kmp_ms_warning, KMP_MSG(CantOpenMessageCatalog, path.str),
err_code, (error == ERROR_BAD_EXE_FORMAT
? KMP_HNT(BadExeFormat, path.str, KMP_ARCH_STR)
: __kmp_msg_null),
__kmp_msg_null);
if (__kmp_generate_warnings == kmp_warnings_off) {
__kmp_str_free(&err_code.str);
}
KMP_INFORM(WillUseDefaultMessages);
}
} else { // status == KMP_I18N_OPENED
int section = get_section(kmp_i18n_prp_Version);
int number = get_number(kmp_i18n_prp_Version);
char const *expected = __kmp_i18n_default_table.sect[section].str[number];
kmp_str_buf_t version; // Actual version of the catalog.
__kmp_str_buf_init(&version);
__kmp_str_buf_print(&version, "%s", ___catgets(kmp_i18n_prp_Version));
// String returned by catgets is invalid after closing catalog, so copy it.
if (strcmp(version.str, expected) != 0) {
// Close bad catalog.
__kmp_i18n_catclose();
status = KMP_I18N_ABSENT; // And mark it as absent.
if (__kmp_generate_warnings > kmp_warnings_low) {
// And now print a warning using default messages.
__kmp_msg(kmp_ms_warning,
KMP_MSG(WrongMessageCatalog, path.str, version.str, expected),
__kmp_msg_null);
KMP_INFORM(WillUseDefaultMessages);
} // __kmp_generate_warnings
}
__kmp_str_buf_free(&version);
}
code_page = get_code_page();
end:
__kmp_str_buf_free(&path);
return;
} // func __kmp_i18n_do_catopen
void __kmp_i18n_catclose() {
if (status == KMP_I18N_OPENED) {
KMP_DEBUG_ASSERT(cat != KMP_I18N_NULLCAT);
kmp_i18n_table_free(&table);
FreeLibrary(cat);
cat = KMP_I18N_NULLCAT;
}
code_page = default_code_page;
status = KMP_I18N_CLOSED;
} // func __kmp_i18n_catclose
/* We use FormatMessage() to get strings from catalog, get system error
messages, etc. FormatMessage() tends to return Windows* OS-style
end-of-lines, "\r\n". When string is printed, printf() also replaces all the
occurrences of "\n" with "\r\n" (again!), so sequences like "\r\r\r\n"
appear in output. It is not too good.
Additional mess comes from message catalog: Our catalog source en_US.mc file
(generated by message-converter.pl) contains only "\n" characters, but
en_US_msg_1033.bin file (produced by mc.exe) may contain "\r\n" or just "\n".
This mess goes from en_US_msg_1033.bin file to message catalog,
libompui.dll. For example, message
Error
(there is "\n" at the end) is compiled by mc.exe to "Error\r\n", while
OMP: Error %1!d!: %2!s!\n
(there is "\n" at the end as well) is compiled to "OMP: Error %1!d!:
%2!s!\r\n\n".
Thus, stripping all "\r" normalizes string and returns it to canonical form,
so printf() will produce correct end-of-line sequences.
___strip_crs() serves for this purpose: it removes all the occurrences of
"\r" in-place and returns new length of string. */
static int ___strip_crs(char *str) {
int in = 0; // Input character index.
int out = 0; // Output character index.
for (;;) {
if (str[in] != '\r') {
str[out] = str[in];
++out;
}
if (str[in] == 0) {
break;
}
++in;
}
return out - 1;
} // func ___strip_crs
static char const *___catgets(kmp_i18n_id_t id) {
char *result = NULL;
PVOID addr = NULL;
wchar_t *wmsg = NULL;
DWORD wlen = 0;
char *msg = NULL;
int len = 0;
int rc;
KMP_DEBUG_ASSERT(cat != KMP_I18N_NULLCAT);
wlen = // wlen does *not* include terminating null.
FormatMessageW(FORMAT_MESSAGE_ALLOCATE_BUFFER |
FORMAT_MESSAGE_FROM_HMODULE |
FORMAT_MESSAGE_IGNORE_INSERTS,
cat, id,
0, // LangId
(LPWSTR)&addr,
0, // Size in elements, not in bytes.
NULL);
if (wlen <= 0) {
goto end;
}
wmsg = (wchar_t *)addr; // Warning: wmsg may not be nul-terminated!
// Calculate length of multibyte message.
// Since wlen does not include terminating null, len does not include it also.
len = WideCharToMultiByte(code_page,
0, // Flags.
wmsg, wlen, // Wide buffer and size.
NULL, 0, // Buffer and size.
NULL, NULL // Default char and used default char.
);
if (len <= 0) {
goto end;
}
// Allocate memory.
msg = (char *)KMP_INTERNAL_MALLOC(len + 1);
// Convert wide message to multibyte one.
rc = WideCharToMultiByte(code_page,
0, // Flags.
wmsg, wlen, // Wide buffer and size.
msg, len, // Buffer and size.
NULL, NULL // Default char and used default char.
);
if (rc <= 0 || rc > len) {
goto end;
}
KMP_DEBUG_ASSERT(rc == len);
len = rc;
msg[len] = 0; // Put terminating null to the end.
// Stripping all "\r" before stripping last end-of-line simplifies the task.
len = ___strip_crs(msg);
// Every message in catalog is terminated with "\n". Strip it.
if (len >= 1 && msg[len - 1] == '\n') {
--len;
msg[len] = 0;
}
// Everything looks ok.
result = msg;
msg = NULL;
end:
if (msg != NULL) {
KMP_INTERNAL_FREE(msg);
}
if (wmsg != NULL) {
LocalFree(wmsg);
}
return result;
} // ___catgets
char const *__kmp_i18n_catgets(kmp_i18n_id_t id) {
int section = get_section(id);
int number = get_number(id);
char const *message = NULL;
if (1 <= section && section <= __kmp_i18n_default_table.size) {
if (1 <= number && number <= __kmp_i18n_default_table.sect[section].size) {
if (status == KMP_I18N_CLOSED) {
__kmp_i18n_catopen();
}
if (cat != KMP_I18N_NULLCAT) {
if (table.size == 0) {
table.sect = (kmp_i18n_section_t *)KMP_INTERNAL_CALLOC(
(__kmp_i18n_default_table.size + 2), sizeof(kmp_i18n_section_t));
table.size = __kmp_i18n_default_table.size;
}
if (table.sect[section].size == 0) {
table.sect[section].str = (const char **)KMP_INTERNAL_CALLOC(
__kmp_i18n_default_table.sect[section].size + 2,
sizeof(char const *));
table.sect[section].size =
__kmp_i18n_default_table.sect[section].size;
}
if (table.sect[section].str[number] == NULL) {
table.sect[section].str[number] = ___catgets(id);
}
message = table.sect[section].str[number];
}
if (message == NULL) {
// Catalog is not opened or message is not found, return default
// message.
message = __kmp_i18n_default_table.sect[section].str[number];
}
}
}
if (message == NULL) {
message = no_message_available;
}
return message;
} // func __kmp_i18n_catgets
#endif // KMP_OS_WINDOWS
// -----------------------------------------------------------------------------
#ifndef KMP_I18N_OK
#error I18n support is not implemented for this OS.
#endif // KMP_I18N_OK
// -----------------------------------------------------------------------------
void __kmp_i18n_dump_catalog(kmp_str_buf_t *buffer) {
struct kmp_i18n_id_range_t {
kmp_i18n_id_t first;
kmp_i18n_id_t last;
}; // struct kmp_i18n_id_range_t
static struct kmp_i18n_id_range_t ranges[] = {
{kmp_i18n_prp_first, kmp_i18n_prp_last},
{kmp_i18n_str_first, kmp_i18n_str_last},
{kmp_i18n_fmt_first, kmp_i18n_fmt_last},
{kmp_i18n_msg_first, kmp_i18n_msg_last},
{kmp_i18n_hnt_first, kmp_i18n_hnt_last}}; // ranges
int num_of_ranges = sizeof(ranges) / sizeof(struct kmp_i18n_id_range_t);
int range;
kmp_i18n_id_t id;
for (range = 0; range < num_of_ranges; ++range) {
__kmp_str_buf_print(buffer, "*** Set #%d ***\n", range + 1);
for (id = (kmp_i18n_id_t)(ranges[range].first + 1); id < ranges[range].last;
id = (kmp_i18n_id_t)(id + 1)) {
__kmp_str_buf_print(buffer, "%d: <<%s>>\n", id, __kmp_i18n_catgets(id));
}
}
__kmp_printf("%s", buffer->str);
} // __kmp_i18n_dump_catalog
// -----------------------------------------------------------------------------
kmp_msg_t __kmp_msg_format(unsigned id_arg, ...) {
kmp_msg_t msg;
va_list args;
kmp_str_buf_t buffer;
__kmp_str_buf_init(&buffer);
va_start(args, id_arg);
// We use unsigned for the ID argument and explicitly cast it here to the
// right enumerator because variadic functions are not compatible with
// default promotions.
kmp_i18n_id_t id = (kmp_i18n_id_t)id_arg;
#if KMP_OS_UNIX
// On Linux* OS and OS X*, the printf() family of functions processes
// parameter numbers, for example: "%2$s %1$s".
__kmp_str_buf_vprint(&buffer, __kmp_i18n_catgets(id), args);
#elif KMP_OS_WINDOWS
// On Windows, the printf() family of functions does not recognize GNU-style
// parameter numbers, so we have to use FormatMessage() instead. It recognizes
// parameter numbers, e.g.: "%2!s! %1!s!".
{
LPTSTR str = NULL;
int len;
FormatMessage(FORMAT_MESSAGE_FROM_STRING | FORMAT_MESSAGE_ALLOCATE_BUFFER,
__kmp_i18n_catgets(id), 0, 0, (LPTSTR)(&str), 0, &args);
len = ___strip_crs(str);
__kmp_str_buf_cat(&buffer, str, len);
LocalFree(str);
}
#else
#error
#endif
va_end(args);
__kmp_str_buf_detach(&buffer);
msg.type = (kmp_msg_type_t)(id >> 16);
msg.num = id & 0xFFFF;
msg.str = buffer.str;
msg.len = buffer.used;
return msg;
} // __kmp_msg_format
// -----------------------------------------------------------------------------
static char *sys_error(int err) {
char *message = NULL;
#if KMP_OS_WINDOWS
LPVOID buffer = NULL;
int len;
DWORD rc;
rc = FormatMessage(
FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM, NULL, err,
MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), // Default language.
(LPTSTR)&buffer, 0, NULL);
if (rc > 0) {
// Message formatted. Copy it (so we can free it later with normal free()).
message = __kmp_str_format("%s", (char *)buffer);
len = ___strip_crs(message); // Delete carriage returns if any.
// Strip trailing newlines.
while (len > 0 && message[len - 1] == '\n') {
--len;
}
message[len] = 0;
} else {
// FormatMessage() failed to format the system error message. GetLastError()
// would give us an error code, which we would convert to a message... but
// that is a dangerous recursion which cannot clarify the original error, so
// we do not even start it.
}
if (buffer != NULL) {
LocalFree(buffer);
}
#else // Non-Windows* OS: Linux* OS or OS X*
/* There are 2 incompatible versions of strerror_r:
char * strerror_r( int, char *, size_t ); // GNU version
int strerror_r( int, char *, size_t ); // XSI version
*/
#if (defined(__GLIBC__) && defined(_GNU_SOURCE)) || \
(defined(__BIONIC__) && defined(_GNU_SOURCE) && \
__ANDROID_API__ >= __ANDROID_API_M__)
// GNU version of strerror_r.
char buffer[2048];
char *const err_msg = strerror_r(err, buffer, sizeof(buffer));
// Do not eliminate this assignment to a temporary variable; otherwise the
// compiler would not issue a warning if strerror_r() returned `int' instead
// of the expected `char *'.
message = __kmp_str_format("%s", err_msg);
#else // OS X*, FreeBSD* etc.
// XSI version of strerror_r.
int size = 2048;
char *buffer = (char *)KMP_INTERNAL_MALLOC(size);
int rc;
if (buffer == NULL) {
KMP_FATAL(MemoryAllocFailed);
}
rc = strerror_r(err, buffer, size);
if (rc == -1) {
rc = errno; // XSI version sets errno.
}
while (rc == ERANGE) { // ERANGE means the buffer is too small.
KMP_INTERNAL_FREE(buffer);
size *= 2;
buffer = (char *)KMP_INTERNAL_MALLOC(size);
if (buffer == NULL) {
KMP_FATAL(MemoryAllocFailed);
}
rc = strerror_r(err, buffer, size);
if (rc == -1) {
rc = errno; // XSI version sets errno.
}
}
if (rc == 0) {
message = buffer;
} else { // Buffer is unused. Free it.
KMP_INTERNAL_FREE(buffer);
}
#endif
#endif /* KMP_OS_WINDOWS */
if (message == NULL) {
// TODO: I18n this message.
message = __kmp_str_format("%s", "(No system error message available)");
}
return message;
} // sys_error
// -----------------------------------------------------------------------------
kmp_msg_t __kmp_msg_error_code(int code) {
kmp_msg_t msg;
msg.type = kmp_mt_syserr;
msg.num = code;
msg.str = sys_error(code);
msg.len = KMP_STRLEN(msg.str);
return msg;
} // __kmp_msg_error_code
// -----------------------------------------------------------------------------
kmp_msg_t __kmp_msg_error_mesg(char const *mesg) {
kmp_msg_t msg;
msg.type = kmp_mt_syserr;
msg.num = 0;
msg.str = __kmp_str_format("%s", mesg);
msg.len = KMP_STRLEN(msg.str);
return msg;
} // __kmp_msg_error_mesg
// -----------------------------------------------------------------------------
void __kmp_msg(kmp_msg_severity_t severity, kmp_msg_t message, va_list args) {
kmp_i18n_id_t format; // format identifier
kmp_msg_t fmsg; // formatted message
kmp_str_buf_t buffer;
if (severity != kmp_ms_fatal && __kmp_generate_warnings == kmp_warnings_off)
return; // No reason to build a string we are not going to print.
__kmp_str_buf_init(&buffer);
// Format the primary message.
switch (severity) {
case kmp_ms_inform: {
format = kmp_i18n_fmt_Info;
} break;
case kmp_ms_warning: {
format = kmp_i18n_fmt_Warning;
} break;
case kmp_ms_fatal: {
format = kmp_i18n_fmt_Fatal;
} break;
default: { KMP_DEBUG_ASSERT(0); }
}
fmsg = __kmp_msg_format(format, message.num, message.str);
__kmp_str_free(&message.str);
__kmp_str_buf_cat(&buffer, fmsg.str, fmsg.len);
__kmp_str_free(&fmsg.str);
// Format other messages.
for (;;) {
message = va_arg(args, kmp_msg_t);
if (message.type == kmp_mt_dummy && message.str == NULL) {
break;
}
switch (message.type) {
case kmp_mt_hint: {
format = kmp_i18n_fmt_Hint;
// we cannot skip %1$ and only use %2$ to print the message without the
// number
fmsg = __kmp_msg_format(format, message.str);
} break;
case kmp_mt_syserr: {
format = kmp_i18n_fmt_SysErr;
fmsg = __kmp_msg_format(format, message.num, message.str);
} break;
default: { KMP_DEBUG_ASSERT(0); }
}
__kmp_str_free(&message.str);
__kmp_str_buf_cat(&buffer, fmsg.str, fmsg.len);
__kmp_str_free(&fmsg.str);
}
// Print formatted messages.
// This lock prevents multiple fatal errors on the same problem.
// __kmp_acquire_bootstrap_lock( & lock ); // GEH - This lock causing tests
// to hang on OS X*.
__kmp_printf("%s", buffer.str);
__kmp_str_buf_free(&buffer);
// __kmp_release_bootstrap_lock( & lock ); // GEH - this lock causing tests
// to hang on OS X*.
} // __kmp_msg
void __kmp_msg(kmp_msg_severity_t severity, kmp_msg_t message, ...) {
va_list args;
va_start(args, message);
__kmp_msg(severity, message, args);
va_end(args);
}
void __kmp_fatal(kmp_msg_t message, ...) {
va_list args;
va_start(args, message);
__kmp_msg(kmp_ms_fatal, message, args);
va_end(args);
#if KMP_OS_WINDOWS
// Delay to give message a chance to appear before reaping
__kmp_thread_sleep(500);
#endif
__kmp_abort_process();
} // __kmp_fatal
// end of file //
179
runtime/src/kmp_i18n.h Normal file
@@ -0,0 +1,179 @@
/*
* kmp_i18n.h
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_I18N_H
#define KMP_I18N_H
#include "kmp_str.h"
#ifdef __cplusplus
extern "C" {
#endif // __cplusplus
/* kmp_i18n_id.inc defines kmp_i18n_id_t type. It is an enumeration with
identifiers of all the messages in the catalog. There is one special
identifier: kmp_i18n_null, which denotes the absence of a message. */
#include "kmp_i18n_id.inc" // Generated file. Do not edit it manually.
/* Low-level functions handling the message catalog: __kmp_i18n_catopen()
opens the message catalog and __kmp_i18n_catclose() closes it. Explicit
opening is not required: if the catalog is not yet open, __kmp_i18n_catgets()
will open it implicitly. However, the catalog should be closed explicitly;
otherwise resources (memory, handles) may leak.
__kmp_i18n_catgets() returns a read-only string which must not be freed.
The KMP_I18N_STR macro simplifies access to strings in the message catalog
a bit. The following two lines are equivalent:
__kmp_i18n_catgets( kmp_i18n_str_Warning )
KMP_I18N_STR( Warning )
*/
void __kmp_i18n_catopen();
void __kmp_i18n_catclose();
char const *__kmp_i18n_catgets(kmp_i18n_id_t id);
#define KMP_I18N_STR(id) __kmp_i18n_catgets(kmp_i18n_str_##id)
/* High-level interface for printing strings targeted to the user.
All the strings are divided into 3 types:
* messages,
* hints,
* system errors.
There are 3 kinds of message severities:
* informational messages,
* warnings (non-fatal errors),
* fatal errors.
For example:
OMP: Warning #2: Cannot open message catalog "libguide.cat": (1)
OMP: System error #2: No such file or directory (2)
OMP: Hint: Please check NLSPATH environment variable. (3)
OMP: Info #3: Default messages will be used. (4)
where
(1) is a message of warning severity,
(2) is a system error that caused the previous warning,
(3) is a hint for the user on how to fix the problem,
(4) is a message of informational severity.
Usage in complex cases (message is accompanied with hints and system errors):
int error = errno; // We need to save errno immediately, because it
// may be changed.
__kmp_msg(
kmp_ms_warning, // Severity
KMP_MSG( CantOpenMessageCatalog, name ), // Primary message
KMP_ERR( error ), // System error
KMP_HNT( CheckNLSPATH ), // Hint
__kmp_msg_null // Variadic argument list finisher
);
Usage in simple cases (just a message, no system errors or hints):
KMP_INFORM( WillUseDefaultMessages );
KMP_WARNING( CantOpenMessageCatalog, name );
KMP_FATAL( StackOverlap );
KMP_SYSFAIL( "pthread_create", status );
KMP_CHECK_SYSFAIL( "pthread_create", status );
KMP_CHECK_SYSFAIL_ERRNO( "gettimeofday", status );
*/
enum kmp_msg_type {
kmp_mt_dummy = 0, // Special type for internal purposes.
kmp_mt_mesg =
4, // Primary OpenMP message, could be information, warning, or fatal.
kmp_mt_hint = 5, // Hint to the user.
kmp_mt_syserr = -1 // System error message.
}; // enum kmp_msg_type
typedef enum kmp_msg_type kmp_msg_type_t;
struct kmp_msg {
kmp_msg_type_t type;
int num;
char *str;
int len;
}; // struct kmp_msg
typedef struct kmp_msg kmp_msg_t;
// Special message to denote the end of variadic list of arguments.
extern kmp_msg_t __kmp_msg_null;
// Helper functions that create messages either from the message catalog or
// from the system. Note: these functions allocate memory. Pass the created
// messages to the __kmp_msg() function; it will print them and destroy them.
kmp_msg_t __kmp_msg_format(unsigned id_arg, ...);
kmp_msg_t __kmp_msg_error_code(int code);
kmp_msg_t __kmp_msg_error_mesg(char const *mesg);
// Helper macros to make calls shorter.
#define KMP_MSG(...) __kmp_msg_format(kmp_i18n_msg_##__VA_ARGS__)
#define KMP_HNT(...) __kmp_msg_format(kmp_i18n_hnt_##__VA_ARGS__)
#define KMP_SYSERRCODE(code) __kmp_msg_error_code(code)
#define KMP_SYSERRMESG(mesg) __kmp_msg_error_mesg(mesg)
#define KMP_ERR KMP_SYSERRCODE
// Message severity.
enum kmp_msg_severity {
kmp_ms_inform, // Just information for the user.
kmp_ms_warning, // Non-fatal error, execution continues.
kmp_ms_fatal // Fatal error, program aborts.
}; // enum kmp_msg_severity
typedef enum kmp_msg_severity kmp_msg_severity_t;
// Primary function for printing messages for the user. The first message is
// mandatory. Any number of system errors and hints may be specified. Argument
// list must be finished with __kmp_msg_null.
void __kmp_msg(kmp_msg_severity_t severity, kmp_msg_t message, ...);
KMP_NORETURN void __kmp_fatal(kmp_msg_t message, ...);
// Helper macros to make calls shorter in simple cases.
#define KMP_INFORM(...) \
__kmp_msg(kmp_ms_inform, KMP_MSG(__VA_ARGS__), __kmp_msg_null)
#define KMP_WARNING(...) \
__kmp_msg(kmp_ms_warning, KMP_MSG(__VA_ARGS__), __kmp_msg_null)
#define KMP_FATAL(...) __kmp_fatal(KMP_MSG(__VA_ARGS__), __kmp_msg_null)
#define KMP_SYSFAIL(func, error) \
__kmp_fatal(KMP_MSG(FunctionError, func), KMP_SYSERRCODE(error), \
__kmp_msg_null)
// Check error, if not zero, generate fatal error message.
#define KMP_CHECK_SYSFAIL(func, error) \
{ \
if (error) { \
KMP_SYSFAIL(func, error); \
} \
}
// Check status, if not zero, generate fatal error message using errno.
#define KMP_CHECK_SYSFAIL_ERRNO(func, status) \
{ \
if (status != 0) { \
int error = errno; \
KMP_SYSFAIL(func, error); \
} \
}
#ifdef KMP_DEBUG
void __kmp_i18n_dump_catalog(kmp_str_buf_t *buffer);
#endif // KMP_DEBUG
#ifdef __cplusplus
}; // extern "C"
#endif // __cplusplus
#endif // KMP_I18N_H
// end of file //
34
runtime/src/kmp_import.cpp Normal file
@@ -0,0 +1,34 @@
/*
* kmp_import.cpp
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
/* Object generated from this source file is linked to Windows* OS DLL import
library (libompmd.lib) only! It is not a part of regular static or dynamic
OpenMP RTL. Any code that just needs to go in the libompmd.lib (but not in
libompmt.lib and libompmd.dll) should be placed in this file. */
#ifdef __cplusplus
extern "C" {
#endif
/* These symbols are required for mutual exclusion with the Microsoft OpenMP
RTL (and for compatibility with the MS compiler). */
int _You_must_link_with_exactly_one_OpenMP_library = 1;
int _You_must_link_with_Intel_OpenMP_library = 1;
int _You_must_link_with_Microsoft_OpenMP_library = 1;
#ifdef __cplusplus
}
#endif
// end of file //
230
runtime/src/kmp_io.cpp Normal file
@@ -0,0 +1,230 @@
/*
* kmp_io.cpp -- RTL IO
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include <stdarg.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#ifndef __ABSOFT_WIN
#include <sys/types.h>
#endif
#include "kmp.h" // KMP_GTID_DNE, __kmp_debug_buf, etc
#include "kmp_io.h"
#include "kmp_lock.h"
#include "kmp_os.h"
#include "kmp_str.h"
#if KMP_OS_WINDOWS
#if KMP_MSVC_COMPAT
#pragma warning(push)
#pragma warning(disable : 271 310)
#endif
#include <windows.h>
#if KMP_MSVC_COMPAT
#pragma warning(pop)
#endif
#endif
/* ------------------------------------------------------------------------ */
kmp_bootstrap_lock_t __kmp_stdio_lock = KMP_BOOTSTRAP_LOCK_INITIALIZER(
__kmp_stdio_lock); /* Control stdio functions */
kmp_bootstrap_lock_t __kmp_console_lock = KMP_BOOTSTRAP_LOCK_INITIALIZER(
__kmp_console_lock); /* Control console initialization */
#if KMP_OS_WINDOWS
static HANDLE __kmp_stdout = NULL;
static HANDLE __kmp_stderr = NULL;
static int __kmp_console_exists = FALSE;
static kmp_str_buf_t __kmp_console_buf;
static int is_console(void) {
char buffer[128];
DWORD rc = 0;
DWORD err = 0;
// Try to get console title.
SetLastError(0);
// GetConsoleTitle does not reset last error in case of success or short
// buffer, so we need to clear it explicitly.
rc = GetConsoleTitle(buffer, sizeof(buffer));
if (rc == 0) {
// rc == 0 means getting console title failed. Let us find out why.
err = GetLastError();
// err == 0 means the buffer was too short (we suppose the console exists).
// In Windows applications we usually have err == 6 (invalid handle).
}
return rc > 0 || err == 0;
}
void __kmp_close_console(void) {
/* wait until user presses return before closing window */
/* TODO only close if a window was opened */
if (__kmp_console_exists) {
__kmp_stdout = NULL;
__kmp_stderr = NULL;
__kmp_str_buf_free(&__kmp_console_buf);
__kmp_console_exists = FALSE;
}
}
/* For windows, call this before stdout, stderr, or stdin are used.
It opens a console window and starts processing */
static void __kmp_redirect_output(void) {
__kmp_acquire_bootstrap_lock(&__kmp_console_lock);
if (!__kmp_console_exists) {
HANDLE ho;
HANDLE he;
__kmp_str_buf_init(&__kmp_console_buf);
AllocConsole();
// We do not check the result of AllocConsole because
// 1. the call is harmless
// 2. it is not clear how to communicate failure
// 3. we will detect failure later when we get handle(s)
ho = GetStdHandle(STD_OUTPUT_HANDLE);
if (ho == INVALID_HANDLE_VALUE || ho == NULL) {
DWORD err = GetLastError();
// TODO: output error somehow (maybe message box)
__kmp_stdout = NULL;
} else {
__kmp_stdout = ho; // temporary code, need new global for ho
}
he = GetStdHandle(STD_ERROR_HANDLE);
if (he == INVALID_HANDLE_VALUE || he == NULL) {
DWORD err = GetLastError();
// TODO: output error somehow (maybe message box)
__kmp_stderr = NULL;
} else {
__kmp_stderr = he; // temporary code, need new global
}
__kmp_console_exists = TRUE;
}
__kmp_release_bootstrap_lock(&__kmp_console_lock);
}
#else
#define __kmp_stderr (stderr)
#define __kmp_stdout (stdout)
#endif /* KMP_OS_WINDOWS */
void __kmp_vprintf(enum kmp_io out_stream, char const *format, va_list ap) {
#if KMP_OS_WINDOWS
if (!__kmp_console_exists) {
__kmp_redirect_output();
}
if (!__kmp_stderr && out_stream == kmp_err) {
return;
}
if (!__kmp_stdout && out_stream == kmp_out) {
return;
}
#endif /* KMP_OS_WINDOWS */
auto stream = ((out_stream == kmp_out) ? __kmp_stdout : __kmp_stderr);
if (__kmp_debug_buf && __kmp_debug_buffer != NULL) {
int dc = __kmp_debug_count++ % __kmp_debug_buf_lines;
char *db = &__kmp_debug_buffer[dc * __kmp_debug_buf_chars];
int chars = 0;
#ifdef KMP_DEBUG_PIDS
chars = KMP_SNPRINTF(db, __kmp_debug_buf_chars, "pid=%d: ",
(kmp_int32)getpid());
#endif
chars += KMP_VSNPRINTF(db, __kmp_debug_buf_chars, format, ap);
if (chars + 1 > __kmp_debug_buf_chars) {
if (chars + 1 > __kmp_debug_buf_warn_chars) {
#if KMP_OS_WINDOWS
DWORD count;
__kmp_str_buf_print(&__kmp_console_buf, "OMP warning: Debugging buffer "
"overflow; increase "
"KMP_DEBUG_BUF_CHARS to %d\n",
chars + 1);
WriteFile(stream, __kmp_console_buf.str, __kmp_console_buf.used, &count,
NULL);
__kmp_str_buf_clear(&__kmp_console_buf);
#else
fprintf(stream, "OMP warning: Debugging buffer overflow; "
"increase KMP_DEBUG_BUF_CHARS to %d\n",
chars + 1);
fflush(stream);
#endif
__kmp_debug_buf_warn_chars = chars + 1;
}
/* terminate string if overflow occurred */
db[__kmp_debug_buf_chars - 2] = '\n';
db[__kmp_debug_buf_chars - 1] = '\0';
}
} else {
#if KMP_OS_WINDOWS
DWORD count;
#ifdef KMP_DEBUG_PIDS
__kmp_str_buf_print(&__kmp_console_buf, "pid=%d: ", (kmp_int32)getpid());
#endif
__kmp_str_buf_vprint(&__kmp_console_buf, format, ap);
WriteFile(stream, __kmp_console_buf.str, __kmp_console_buf.used, &count,
NULL);
__kmp_str_buf_clear(&__kmp_console_buf);
#else
#ifdef KMP_DEBUG_PIDS
fprintf(stream, "pid=%d: ", (kmp_int32)getpid());
#endif
vfprintf(stream, format, ap);
fflush(stream);
#endif
}
}
void __kmp_printf(char const *format, ...) {
va_list ap;
va_start(ap, format);
__kmp_acquire_bootstrap_lock(&__kmp_stdio_lock);
__kmp_vprintf(kmp_err, format, ap);
__kmp_release_bootstrap_lock(&__kmp_stdio_lock);
va_end(ap);
}
void __kmp_printf_no_lock(char const *format, ...) {
va_list ap;
va_start(ap, format);
__kmp_vprintf(kmp_err, format, ap);
va_end(ap);
}
void __kmp_fprintf(enum kmp_io stream, char const *format, ...) {
va_list ap;
va_start(ap, format);
__kmp_acquire_bootstrap_lock(&__kmp_stdio_lock);
__kmp_vprintf(stream, format, ap);
__kmp_release_bootstrap_lock(&__kmp_stdio_lock);
va_end(ap);
}
39
runtime/src/kmp_io.h Normal file
@@ -0,0 +1,39 @@
/*
* kmp_io.h -- RTL IO header file.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_IO_H
#define KMP_IO_H
#ifdef __cplusplus
extern "C" {
#endif
/* ------------------------------------------------------------------------ */
enum kmp_io { kmp_out = 0, kmp_err };
extern kmp_bootstrap_lock_t __kmp_stdio_lock; /* Control stdio functions */
extern kmp_bootstrap_lock_t
__kmp_console_lock; /* Control console initialization */
extern void __kmp_vprintf(enum kmp_io stream, char const *format, va_list ap);
extern void __kmp_printf(char const *format, ...);
extern void __kmp_printf_no_lock(char const *format, ...);
extern void __kmp_fprintf(enum kmp_io stream, char const *format, ...);
extern void __kmp_close_console(void);
#ifdef __cplusplus
}
#endif
#endif /* KMP_IO_H */
161
runtime/src/kmp_itt.cpp Normal file
@@ -0,0 +1,161 @@
#include "kmp_config.h"
#if USE_ITT_BUILD
/*
* kmp_itt.cpp -- ITT Notify interface.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp_itt.h"
#if KMP_DEBUG
#include "kmp_itt.inl"
#endif
#if USE_ITT_NOTIFY
#include "ittnotify_config.h"
__itt_global __kmp_ittapi_clean_global;
extern __itt_global __kmp_itt__ittapi_global;
kmp_int32 __kmp_barrier_domain_count;
kmp_int32 __kmp_region_domain_count;
__itt_domain *__kmp_itt_barrier_domains[KMP_MAX_FRAME_DOMAINS];
__itt_domain *__kmp_itt_region_domains[KMP_MAX_FRAME_DOMAINS];
__itt_domain *__kmp_itt_imbalance_domains[KMP_MAX_FRAME_DOMAINS];
kmp_int32 __kmp_itt_region_team_size[KMP_MAX_FRAME_DOMAINS];
__itt_domain *metadata_domain = NULL;
__itt_string_handle *string_handle_imbl = NULL;
__itt_string_handle *string_handle_loop = NULL;
__itt_string_handle *string_handle_sngl = NULL;
#include "kmp_i18n.h"
#include "kmp_str.h"
#include "kmp_version.h"
KMP_BUILD_ASSERT(sizeof(kmp_itt_mark_t) == sizeof(__itt_mark_type));
/* Previously used warnings:
KMP_WARNING( IttAllNotifDisabled );
KMP_WARNING( IttObjNotifDisabled );
KMP_WARNING( IttMarkNotifDisabled );
KMP_WARNING( IttUnloadLibFailed, libittnotify );
*/
kmp_int32 __kmp_itt_prepare_delay = 0;
kmp_bootstrap_lock_t __kmp_itt_debug_lock =
KMP_BOOTSTRAP_LOCK_INITIALIZER(__kmp_itt_debug_lock);
#endif // USE_ITT_NOTIFY
void __kmp_itt_reset() {
#if USE_ITT_NOTIFY
__kmp_itt__ittapi_global = __kmp_ittapi_clean_global;
#endif
}
void __kmp_itt_initialize() {
// ITTNotify library is loaded and initialized at first call to any ittnotify
// function, so we do not need to explicitly load it any more. Just report OMP
// RTL version to ITTNotify.
#if USE_ITT_NOTIFY
// Backup a clean global state
__kmp_ittapi_clean_global = __kmp_itt__ittapi_global;
// Report OpenMP RTL version.
kmp_str_buf_t buf;
__itt_mark_type version;
__kmp_str_buf_init(&buf);
__kmp_str_buf_print(&buf, "OMP RTL Version %d.%d.%d", __kmp_version_major,
__kmp_version_minor, __kmp_version_build);
if (__itt_api_version_ptr != NULL) {
__kmp_str_buf_print(&buf, ":%s", __itt_api_version());
}
version = __itt_mark_create(buf.str);
__itt_mark(version, NULL);
__kmp_str_buf_free(&buf);
#endif
} // __kmp_itt_initialize
void __kmp_itt_destroy() {
#if USE_ITT_NOTIFY
__kmp_itt_fini_ittlib();
#endif
} // __kmp_itt_destroy
extern "C" void __itt_error_handler(__itt_error_code err, va_list args) {
switch (err) {
case __itt_error_no_module: {
char const *library = va_arg(args, char const *);
#if KMP_OS_WINDOWS
int sys_err = va_arg(args, int);
kmp_msg_t err_code = KMP_SYSERRCODE(sys_err);
__kmp_msg(kmp_ms_warning, KMP_MSG(IttLoadLibFailed, library), err_code,
__kmp_msg_null);
if (__kmp_generate_warnings == kmp_warnings_off) {
__kmp_str_free(&err_code.str);
}
#else
char const *sys_err = va_arg(args, char const *);
kmp_msg_t err_code = KMP_SYSERRMESG(sys_err);
__kmp_msg(kmp_ms_warning, KMP_MSG(IttLoadLibFailed, library), err_code,
__kmp_msg_null);
if (__kmp_generate_warnings == kmp_warnings_off) {
__kmp_str_free(&err_code.str);
}
#endif
} break;
case __itt_error_no_symbol: {
char const *library = va_arg(args, char const *);
char const *symbol = va_arg(args, char const *);
KMP_WARNING(IttLookupFailed, symbol, library);
} break;
case __itt_error_unknown_group: {
char const *var = va_arg(args, char const *);
char const *group = va_arg(args, char const *);
KMP_WARNING(IttUnknownGroup, var, group);
} break;
case __itt_error_env_too_long: {
char const *var = va_arg(args, char const *);
size_t act_len = va_arg(args, size_t);
size_t max_len = va_arg(args, size_t);
KMP_WARNING(IttEnvVarTooLong, var, (unsigned long)act_len,
(unsigned long)max_len);
} break;
case __itt_error_cant_read_env: {
char const *var = va_arg(args, char const *);
int sys_err = va_arg(args, int);
kmp_msg_t err_code = KMP_ERR(sys_err);
__kmp_msg(kmp_ms_warning, KMP_MSG(CantGetEnvVar, var), err_code,
__kmp_msg_null);
if (__kmp_generate_warnings == kmp_warnings_off) {
__kmp_str_free(&err_code.str);
}
} break;
case __itt_error_system: {
char const *func = va_arg(args, char const *);
int sys_err = va_arg(args, int);
kmp_msg_t err_code = KMP_SYSERRCODE(sys_err);
__kmp_msg(kmp_ms_warning, KMP_MSG(IttFunctionError, func), err_code,
__kmp_msg_null);
if (__kmp_generate_warnings == kmp_warnings_off) {
__kmp_str_free(&err_code.str);
}
} break;
default: { KMP_WARNING(IttUnknownError, err); }
}
} // __itt_error_handler
#endif /* USE_ITT_BUILD */
333
runtime/src/kmp_itt.h Normal file
@@ -0,0 +1,333 @@
#if USE_ITT_BUILD
/*
* kmp_itt.h -- ITT Notify interface.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_ITT_H
#define KMP_ITT_H
#include "kmp_lock.h"
#define INTEL_ITTNOTIFY_API_PRIVATE
#include "ittnotify.h"
#include "legacy/ittnotify.h"
#if KMP_DEBUG
#define __kmp_inline // Turn off inlining in debug mode.
#else
#define __kmp_inline static inline
#endif
#if USE_ITT_NOTIFY
extern kmp_int32 __kmp_itt_prepare_delay;
#ifdef __cplusplus
extern "C" void __kmp_itt_fini_ittlib(void);
#else
extern void __kmp_itt_fini_ittlib(void);
#endif
#endif
// Simplify the handling of an argument that is only required when USE_ITT_BUILD
// is enabled.
#define USE_ITT_BUILD_ARG(x) , x
void __kmp_itt_initialize();
void __kmp_itt_destroy();
void __kmp_itt_reset();
// -----------------------------------------------------------------------------
// New stuff for reporting high-level constructs.
// Note the naming convention:
// __kmp_itt_xxxing() function should be called before action, while
// __kmp_itt_xxxed() function should be called after action.
// --- Parallel region reporting ---
__kmp_inline void
__kmp_itt_region_forking(int gtid, int team_size,
int barriers); // Master only, before forking threads.
__kmp_inline void
__kmp_itt_region_joined(int gtid); // Master only, after joining threads.
// (*) Note: A thread may execute tasks after this point, though.
// --- Frame reporting ---
// region=0: no regions, region=1: parallel, region=2: serialized parallel
__kmp_inline void __kmp_itt_frame_submit(int gtid, __itt_timestamp begin,
__itt_timestamp end, int imbalance,
ident_t *loc, int team_size,
int region = 0);
// --- Metadata reporting ---
// begin/end - begin/end timestamps of a barrier frame, imbalance - aggregated
// wait time value, reduction - if this is a reduction barrier
__kmp_inline void __kmp_itt_metadata_imbalance(int gtid, kmp_uint64 begin,
kmp_uint64 end,
kmp_uint64 imbalance,
kmp_uint64 reduction);
// sched_type: 0 - static, 1 - dynamic, 2 - guided, 3 - custom (all others);
// iterations - loop trip count, chunk - chunk size
__kmp_inline void __kmp_itt_metadata_loop(ident_t *loc, kmp_uint64 sched_type,
kmp_uint64 iterations,
kmp_uint64 chunk);
__kmp_inline void __kmp_itt_metadata_single(ident_t *loc);
// --- Barrier reporting ---
__kmp_inline void *__kmp_itt_barrier_object(int gtid, int bt, int set_name = 0,
int delta = 0);
__kmp_inline void __kmp_itt_barrier_starting(int gtid, void *object);
__kmp_inline void __kmp_itt_barrier_middle(int gtid, void *object);
__kmp_inline void __kmp_itt_barrier_finished(int gtid, void *object);
// --- Taskwait reporting ---
__kmp_inline void *__kmp_itt_taskwait_object(int gtid);
__kmp_inline void __kmp_itt_taskwait_starting(int gtid, void *object);
__kmp_inline void __kmp_itt_taskwait_finished(int gtid, void *object);
// --- Task reporting ---
__kmp_inline void __kmp_itt_task_starting(void *object);
__kmp_inline void __kmp_itt_task_finished(void *object);
// --- Lock reporting ---
#if KMP_USE_DYNAMIC_LOCK
__kmp_inline void __kmp_itt_lock_creating(kmp_user_lock_p lock,
const ident_t *);
#else
__kmp_inline void __kmp_itt_lock_creating(kmp_user_lock_p lock);
#endif
__kmp_inline void __kmp_itt_lock_acquiring(kmp_user_lock_p lock);
__kmp_inline void __kmp_itt_lock_acquired(kmp_user_lock_p lock);
__kmp_inline void __kmp_itt_lock_releasing(kmp_user_lock_p lock);
__kmp_inline void __kmp_itt_lock_cancelled(kmp_user_lock_p lock);
__kmp_inline void __kmp_itt_lock_destroyed(kmp_user_lock_p lock);
// --- Critical reporting ---
#if KMP_USE_DYNAMIC_LOCK
__kmp_inline void __kmp_itt_critical_creating(kmp_user_lock_p lock,
const ident_t *);
#else
__kmp_inline void __kmp_itt_critical_creating(kmp_user_lock_p lock);
#endif
__kmp_inline void __kmp_itt_critical_acquiring(kmp_user_lock_p lock);
__kmp_inline void __kmp_itt_critical_acquired(kmp_user_lock_p lock);
__kmp_inline void __kmp_itt_critical_releasing(kmp_user_lock_p lock);
__kmp_inline void __kmp_itt_critical_destroyed(kmp_user_lock_p lock);
// --- Single reporting ---
__kmp_inline void __kmp_itt_single_start(int gtid);
__kmp_inline void __kmp_itt_single_end(int gtid);
// --- Ordered reporting ---
__kmp_inline void __kmp_itt_ordered_init(int gtid);
__kmp_inline void __kmp_itt_ordered_prep(int gtid);
__kmp_inline void __kmp_itt_ordered_start(int gtid);
__kmp_inline void __kmp_itt_ordered_end(int gtid);
// --- Threads reporting ---
__kmp_inline void __kmp_itt_thread_ignore();
__kmp_inline void __kmp_itt_thread_name(int gtid);
// --- System objects ---
__kmp_inline void __kmp_itt_system_object_created(void *object,
char const *name);
// --- Stack stitching ---
__kmp_inline __itt_caller __kmp_itt_stack_caller_create(void);
__kmp_inline void __kmp_itt_stack_caller_destroy(__itt_caller);
__kmp_inline void __kmp_itt_stack_callee_enter(__itt_caller);
__kmp_inline void __kmp_itt_stack_callee_leave(__itt_caller);
// -----------------------------------------------------------------------------
// Old stuff for reporting low-level internal synchronization.
#if USE_ITT_NOTIFY
/* Support for SSC marks, which are used by SDE
http://software.intel.com/en-us/articles/intel-software-development-emulator
to mark points in instruction traces that represent spin-loops and are
therefore uninteresting when collecting traces for architecture simulation.
*/
#ifndef INCLUDE_SSC_MARKS
#define INCLUDE_SSC_MARKS (KMP_OS_LINUX && KMP_ARCH_X86_64)
#endif
/* Linux 64 only for now */
#if (INCLUDE_SSC_MARKS && KMP_OS_LINUX && KMP_ARCH_X86_64)
// Portable (at least for gcc and icc) code to insert the necessary instructions
// to set %ebx and execute the unlikely no-op.
#if defined(__INTEL_COMPILER)
#define INSERT_SSC_MARK(tag) __SSC_MARK(tag)
#else
#define INSERT_SSC_MARK(tag) \
__asm__ __volatile__("movl %0, %%ebx; .byte 0x64, 0x67, 0x90 " ::"i"(tag) \
: "%ebx")
#endif
#else
#define INSERT_SSC_MARK(tag) ((void)0)
#endif
/* Markers for the start and end of regions that represent polling and are
therefore uninteresting to architectural simulations 0x4376 and 0x4377 are
arbitrary numbers that should be unique in the space of SSC tags, but there
is no central issuing authority; rather, randomness is expected to work. */
#define SSC_MARK_SPIN_START() INSERT_SSC_MARK(0x4376)
#define SSC_MARK_SPIN_END() INSERT_SSC_MARK(0x4377)
// Markers for architecture simulation.
// FORKING : Before the master thread forks.
// JOINING : At the start of the join.
// INVOKING : Before the threads invoke microtasks.
// DISPATCH_INIT: At the start of a dynamically scheduled loop.
// DISPATCH_NEXT: After claiming next iteration of dynamically scheduled loop.
#define SSC_MARK_FORKING() INSERT_SSC_MARK(0xd693)
#define SSC_MARK_JOINING() INSERT_SSC_MARK(0xd694)
#define SSC_MARK_INVOKING() INSERT_SSC_MARK(0xd695)
#define SSC_MARK_DISPATCH_INIT() INSERT_SSC_MARK(0xd696)
#define SSC_MARK_DISPATCH_NEXT() INSERT_SSC_MARK(0xd697)
// The object is an address that identifies a specific set of prepare,
// acquire, release, and cancel operations.
/* Sync prepare indicates a thread is going to start waiting for another thread
to send a release event. This operation should be done just before the
thread begins checking for the existence of the release event. */
/* Sync cancel indicates a thread is cancelling a wait on another thread and
continuing execution without waiting for the other thread to release it */
/* Sync acquired indicates a thread has received a release event from another
thread and has stopped waiting. This operation must occur only after the
release event is received. */
/* Sync release indicates a thread is going to send a release event to another
thread so it will stop waiting and continue execution. This operation must
happen just before the release event. */
#define KMP_FSYNC_PREPARE(obj) __itt_fsync_prepare((void *)(obj))
#define KMP_FSYNC_CANCEL(obj) __itt_fsync_cancel((void *)(obj))
#define KMP_FSYNC_ACQUIRED(obj) __itt_fsync_acquired((void *)(obj))
#define KMP_FSYNC_RELEASING(obj) __itt_fsync_releasing((void *)(obj))
/* In case of waiting in a spin loop, ITT wants KMP_FSYNC_PREPARE() to be called
with a delay (and not called at all if waiting time is small). So, in spin
loops, do not use KMP_FSYNC_PREPARE(), but use KMP_FSYNC_SPIN_INIT() (before
spin loop), KMP_FSYNC_SPIN_PREPARE() (within the spin loop), and
KMP_FSYNC_SPIN_ACQUIRED(). See KMP_WAIT_YIELD() for an example. */
#undef KMP_FSYNC_SPIN_INIT
#define KMP_FSYNC_SPIN_INIT(obj, spin) \
int sync_iters = 0; \
if (__itt_fsync_prepare_ptr) { \
if (obj == NULL) { \
obj = spin; \
} /* if */ \
} /* if */ \
SSC_MARK_SPIN_START()
#undef KMP_FSYNC_SPIN_PREPARE
#define KMP_FSYNC_SPIN_PREPARE(obj) \
do { \
if (__itt_fsync_prepare_ptr && sync_iters < __kmp_itt_prepare_delay) { \
++sync_iters; \
if (sync_iters >= __kmp_itt_prepare_delay) { \
KMP_FSYNC_PREPARE((void *)obj); \
} /* if */ \
} /* if */ \
} while (0)
#undef KMP_FSYNC_SPIN_ACQUIRED
#define KMP_FSYNC_SPIN_ACQUIRED(obj) \
do { \
SSC_MARK_SPIN_END(); \
if (sync_iters >= __kmp_itt_prepare_delay) { \
KMP_FSYNC_ACQUIRED((void *)obj); \
} /* if */ \
} while (0)
/* ITT will not report objects created within KMP_ITT_IGNORE(), e.g.:
KMP_ITT_IGNORE(
ptr = malloc( size );
);
*/
#define KMP_ITT_IGNORE(statement) \
do { \
__itt_state_t __itt_state_; \
if (__itt_state_get_ptr) { \
__itt_state_ = __itt_state_get(); \
__itt_obj_mode_set(__itt_obj_prop_ignore, __itt_obj_state_set); \
} /* if */ \
{ statement } \
if (__itt_state_get_ptr) { \
__itt_state_set(__itt_state_); \
} /* if */ \
} while (0)
const int KMP_MAX_FRAME_DOMAINS =
512; // Maximum number of frame domains to use (maps to
// different OpenMP regions in the user source code).
extern kmp_int32 __kmp_barrier_domain_count;
extern kmp_int32 __kmp_region_domain_count;
extern __itt_domain *__kmp_itt_barrier_domains[KMP_MAX_FRAME_DOMAINS];
extern __itt_domain *__kmp_itt_region_domains[KMP_MAX_FRAME_DOMAINS];
extern __itt_domain *__kmp_itt_imbalance_domains[KMP_MAX_FRAME_DOMAINS];
extern kmp_int32 __kmp_itt_region_team_size[KMP_MAX_FRAME_DOMAINS];
extern __itt_domain *metadata_domain;
extern __itt_string_handle *string_handle_imbl;
extern __itt_string_handle *string_handle_loop;
extern __itt_string_handle *string_handle_sngl;
#else
// Null definitions of the synchronization tracing functions.
#define KMP_FSYNC_PREPARE(obj) ((void)0)
#define KMP_FSYNC_CANCEL(obj) ((void)0)
#define KMP_FSYNC_ACQUIRED(obj) ((void)0)
#define KMP_FSYNC_RELEASING(obj) ((void)0)
#define KMP_FSYNC_SPIN_INIT(obj, spin) ((void)0)
#define KMP_FSYNC_SPIN_PREPARE(obj) ((void)0)
#define KMP_FSYNC_SPIN_ACQUIRED(obj) ((void)0)
#define KMP_ITT_IGNORE(stmt) \
do { \
stmt \
} while (0)
#endif // USE_ITT_NOTIFY
#if !KMP_DEBUG
// In release mode include definitions of inline functions.
#include "kmp_itt.inl"
#endif
#endif // KMP_ITT_H
#else /* USE_ITT_BUILD */
// Null definitions of the synchronization tracing functions.
// If USE_ITT_BUILD is not enabled, USE_ITT_NOTIFY cannot be either.
// By defining these we avoid unpleasant ifdef tests in many places.
#define KMP_FSYNC_PREPARE(obj) ((void)0)
#define KMP_FSYNC_CANCEL(obj) ((void)0)
#define KMP_FSYNC_ACQUIRED(obj) ((void)0)
#define KMP_FSYNC_RELEASING(obj) ((void)0)
#define KMP_FSYNC_SPIN_INIT(obj, spin) ((void)0)
#define KMP_FSYNC_SPIN_PREPARE(obj) ((void)0)
#define KMP_FSYNC_SPIN_ACQUIRED(obj) ((void)0)
#define KMP_ITT_IGNORE(stmt) \
do { \
stmt \
} while (0)
#define USE_ITT_BUILD_ARG(x)
#endif /* USE_ITT_BUILD */

1043
runtime/src/kmp_itt.inl Normal file

File diff suppressed because it is too large

3965
runtime/src/kmp_lock.cpp Normal file

File diff suppressed because it is too large

1297
runtime/src/kmp_lock.h Normal file

File diff suppressed because it is too large

242
runtime/src/kmp_omp.h Normal file

@ -0,0 +1,242 @@
#if USE_DEBUGGER
/*
* kmp_omp.h -- OpenMP definition for kmp_omp_struct_info_t.
* This is for information about runtime library structures.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
/* THIS FILE SHOULD NOT BE MODIFIED IN IDB INTERFACE LIBRARY CODE
It should instead be modified in the OpenMP runtime and copied to the
interface library code. This way we can minimize the problems that
having two copies of the same file is sure to cause.
Files live in libomp and libomp_db/src/include */
/* CHANGE THIS WHEN STRUCTURES BELOW CHANGE
Before we release this to a customer, please don't change this value. After
it is released and stable, any new updates to the structures or data
structure traversal algorithms need to change this value. */
#define KMP_OMP_VERSION 9
typedef struct {
kmp_int32 offset;
kmp_int32 size;
} offset_and_size_t;
typedef struct {
kmp_uint64 addr;
kmp_int32 size;
kmp_int32 padding;
} addr_and_size_t;
typedef struct {
kmp_uint64 flags; // Flags for future extensions.
kmp_uint64
file; // Pointer to name of source file where the parallel region is.
kmp_uint64 func; // Pointer to name of routine where the parallel region is.
kmp_int32 begin; // Beginning of source line range.
kmp_int32 end; // End of source line range.
kmp_int32 num_threads; // Specified number of threads.
} kmp_omp_nthr_item_t;
typedef struct {
kmp_int32 num; // Number of items in the array.
kmp_uint64 array; // Address of array of kmp_omp_nthr_item_t.
} kmp_omp_nthr_info_t;
/* This structure is known to the idb interface library */
typedef struct {
/* Change this only if you make a fundamental data structure change here */
kmp_int32 lib_version;
/* Sanity check. Should only be checked if versions are identical.
 * This is also used for backward compatibility to get the runtime
 * structure size if the runtime is older than the interface */
kmp_int32 sizeof_this_structure;
/* OpenMP RTL version info. */
addr_and_size_t major;
addr_and_size_t minor;
addr_and_size_t build;
addr_and_size_t openmp_version;
addr_and_size_t banner;
/* Various globals. */
addr_and_size_t threads; // Pointer to __kmp_threads.
addr_and_size_t roots; // Pointer to __kmp_root.
addr_and_size_t capacity; // Pointer to __kmp_threads_capacity.
#if KMP_USE_MONITOR
addr_and_size_t monitor; // Pointer to __kmp_monitor.
#endif
#if !KMP_USE_DYNAMIC_LOCK
addr_and_size_t lock_table; // Pointer to __kmp_lock_table.
#endif
addr_and_size_t func_microtask;
addr_and_size_t func_fork;
addr_and_size_t func_fork_teams;
addr_and_size_t team_counter;
addr_and_size_t task_counter;
addr_and_size_t nthr_info;
kmp_int32 address_width;
kmp_int32 indexed_locks;
kmp_int32 last_barrier; // The end in enum barrier_type
kmp_int32 deque_size; // TASK_DEQUE_SIZE
/* thread structure information. */
kmp_int32 th_sizeof_struct;
offset_and_size_t th_info; // descriptor for thread
offset_and_size_t th_team; // team for this thread
offset_and_size_t th_root; // root for this thread
offset_and_size_t th_serial_team; // serial team under this thread
offset_and_size_t th_ident; // location for this thread (if available)
offset_and_size_t th_spin_here; // is thread waiting for lock (if available)
offset_and_size_t
th_next_waiting; // next thread waiting for lock (if available)
offset_and_size_t th_task_team; // task team struct
offset_and_size_t th_current_task; // innermost task being executed
offset_and_size_t
th_task_state; // alternating 0/1 for task team identification
offset_and_size_t th_bar;
offset_and_size_t th_b_worker_arrived; // the worker increases it by 1 when it
// arrives at the barrier
#if OMP_40_ENABLED
/* teams information */
offset_and_size_t th_teams_microtask; // entry address for teams construct
offset_and_size_t th_teams_level; // initial level of teams construct
offset_and_size_t th_teams_nteams; // number of teams in a league
offset_and_size_t
th_teams_nth; // number of threads in each team of the league
#endif
/* kmp_desc structure (for info field above) */
kmp_int32 ds_sizeof_struct;
offset_and_size_t ds_tid; // team thread id
offset_and_size_t ds_gtid; // global thread id
offset_and_size_t ds_thread; // native thread id
/* team structure information */
kmp_int32 t_sizeof_struct;
offset_and_size_t t_master_tid; // tid of master in parent team
offset_and_size_t t_ident; // location of parallel region
offset_and_size_t t_parent; // parent team
offset_and_size_t t_nproc; // # team threads
offset_and_size_t t_threads; // array of threads
offset_and_size_t t_serialized; // # levels of serialized teams
offset_and_size_t t_id; // unique team id
offset_and_size_t t_pkfn;
offset_and_size_t t_task_team; // task team structure
offset_and_size_t t_implicit_task; // taskdata for the thread's implicit task
#if OMP_40_ENABLED
offset_and_size_t t_cancel_request;
#endif
offset_and_size_t t_bar;
offset_and_size_t
t_b_master_arrived; // increased by 1 when the master arrives at a barrier
offset_and_size_t
t_b_team_arrived; // increased by one when all the threads have arrived
/* root structure information */
kmp_int32 r_sizeof_struct;
offset_and_size_t r_root_team; // team at root
offset_and_size_t r_hot_team; // hot team for this root
offset_and_size_t r_uber_thread; // root thread
offset_and_size_t r_root_id; // unique root id (if available)
/* ident structure information */
kmp_int32 id_sizeof_struct;
offset_and_size_t
id_psource; /* address of string ";file;func;line1;line2;;". */
offset_and_size_t id_flags;
/* lock structure information */
kmp_int32 lk_sizeof_struct;
offset_and_size_t lk_initialized;
offset_and_size_t lk_location;
offset_and_size_t lk_tail_id;
offset_and_size_t lk_head_id;
offset_and_size_t lk_next_ticket;
offset_and_size_t lk_now_serving;
offset_and_size_t lk_owner_id;
offset_and_size_t lk_depth_locked;
offset_and_size_t lk_lock_flags;
#if !KMP_USE_DYNAMIC_LOCK
/* lock_table_t */
kmp_int32 lt_size_of_struct; /* Size and layout of kmp_lock_table_t. */
offset_and_size_t lt_used;
offset_and_size_t lt_allocated;
offset_and_size_t lt_table;
#endif
/* task_team_t */
kmp_int32 tt_sizeof_struct;
offset_and_size_t tt_threads_data;
offset_and_size_t tt_found_tasks;
offset_and_size_t tt_nproc;
offset_and_size_t tt_unfinished_threads;
offset_and_size_t tt_active;
/* kmp_taskdata_t */
kmp_int32 td_sizeof_struct;
offset_and_size_t td_task_id; // task id
offset_and_size_t td_flags; // task flags
offset_and_size_t td_team; // team for this task
offset_and_size_t td_parent; // parent task
offset_and_size_t td_level; // task nesting level
offset_and_size_t td_ident; // task identifier
offset_and_size_t td_allocated_child_tasks; // child tasks (+ current task)
// not yet deallocated
offset_and_size_t td_incomplete_child_tasks; // child tasks not yet complete
/* Taskwait */
offset_and_size_t td_taskwait_ident;
offset_and_size_t td_taskwait_counter;
offset_and_size_t
td_taskwait_thread; // gtid + 1 of the thread that encountered taskwait
#if OMP_40_ENABLED
/* Taskgroup */
offset_and_size_t td_taskgroup; // pointer to the current taskgroup
offset_and_size_t
td_task_count; // number of allocated and not yet complete tasks
offset_and_size_t td_cancel; // request for cancellation of this taskgroup
/* Task dependency */
offset_and_size_t
td_depnode; // pointer to graph node if the task has dependencies
offset_and_size_t dn_node;
offset_and_size_t dn_next;
offset_and_size_t dn_successors;
offset_and_size_t dn_task;
offset_and_size_t dn_npredecessors;
offset_and_size_t dn_nrefs;
#endif
offset_and_size_t dn_routine;
/* kmp_thread_data_t */
kmp_int32 hd_sizeof_struct;
offset_and_size_t hd_deque;
offset_and_size_t hd_deque_size;
offset_and_size_t hd_deque_head;
offset_and_size_t hd_deque_tail;
offset_and_size_t hd_deque_ntasks;
offset_and_size_t hd_deque_last_stolen;
// The last field of stable version.
kmp_uint64 last_field;
} kmp_omp_struct_info_t;
#endif /* USE_DEBUGGER */
/* end of file */

965
runtime/src/kmp_os.h Normal file

@ -0,0 +1,965 @@
/*
* kmp_os.h -- KPTS runtime header file.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_OS_H
#define KMP_OS_H
#include "kmp_config.h"
#include <stdlib.h>
#include <atomic>
#define KMP_FTN_PLAIN 1
#define KMP_FTN_APPEND 2
#define KMP_FTN_UPPER 3
/*
#define KMP_FTN_PREPEND 4
#define KMP_FTN_UAPPEND 5
*/
#define KMP_PTR_SKIP (sizeof(void *))
/* -------------------------- Compiler variations ------------------------ */
#define KMP_OFF 0
#define KMP_ON 1
#define KMP_MEM_CONS_VOLATILE 0
#define KMP_MEM_CONS_FENCE 1
#ifndef KMP_MEM_CONS_MODEL
#define KMP_MEM_CONS_MODEL KMP_MEM_CONS_VOLATILE
#endif
/* ------------------------- Compiler recognition ---------------------- */
#define KMP_COMPILER_ICC 0
#define KMP_COMPILER_GCC 0
#define KMP_COMPILER_CLANG 0
#define KMP_COMPILER_MSVC 0
#if defined(__INTEL_COMPILER)
#undef KMP_COMPILER_ICC
#define KMP_COMPILER_ICC 1
#elif defined(__clang__)
#undef KMP_COMPILER_CLANG
#define KMP_COMPILER_CLANG 1
#elif defined(__GNUC__)
#undef KMP_COMPILER_GCC
#define KMP_COMPILER_GCC 1
#elif defined(_MSC_VER)
#undef KMP_COMPILER_MSVC
#define KMP_COMPILER_MSVC 1
#else
#error Unknown compiler
#endif
#if (KMP_OS_LINUX || KMP_OS_WINDOWS) && !KMP_OS_CNK
#define KMP_AFFINITY_SUPPORTED 1
#if KMP_OS_WINDOWS && KMP_ARCH_X86_64
#define KMP_GROUP_AFFINITY 1
#else
#define KMP_GROUP_AFFINITY 0
#endif
#else
#define KMP_AFFINITY_SUPPORTED 0
#define KMP_GROUP_AFFINITY 0
#endif
/* Check for quad-precision extension. */
#define KMP_HAVE_QUAD 0
#if KMP_ARCH_X86 || KMP_ARCH_X86_64
#if KMP_COMPILER_ICC
/* _Quad is already defined for icc */
#undef KMP_HAVE_QUAD
#define KMP_HAVE_QUAD 1
#elif KMP_COMPILER_CLANG
/* Clang doesn't support a software-implemented
128-bit extended precision type yet */
typedef long double _Quad;
#elif KMP_COMPILER_GCC
/* GCC on NetBSD lacks __multc3/__divtc3 builtins needed for quad */
#if !KMP_OS_NETBSD
typedef __float128 _Quad;
#undef KMP_HAVE_QUAD
#define KMP_HAVE_QUAD 1
#endif
#elif KMP_COMPILER_MSVC
typedef long double _Quad;
#endif
#else
#if __LDBL_MAX_EXP__ >= 16384 && KMP_COMPILER_GCC
typedef long double _Quad;
#undef KMP_HAVE_QUAD
#define KMP_HAVE_QUAD 1
#endif
#endif /* KMP_ARCH_X86 || KMP_ARCH_X86_64 */
#define KMP_USE_X87CONTROL 0
#if KMP_OS_WINDOWS
#define KMP_END_OF_LINE "\r\n"
typedef char kmp_int8;
typedef unsigned char kmp_uint8;
typedef short kmp_int16;
typedef unsigned short kmp_uint16;
typedef int kmp_int32;
typedef unsigned int kmp_uint32;
#define KMP_INT32_SPEC "d"
#define KMP_UINT32_SPEC "u"
#ifndef KMP_STRUCT64
typedef __int64 kmp_int64;
typedef unsigned __int64 kmp_uint64;
#define KMP_INT64_SPEC "I64d"
#define KMP_UINT64_SPEC "I64u"
#else
struct kmp_struct64 {
kmp_int32 a, b;
};
typedef struct kmp_struct64 kmp_int64;
typedef struct kmp_struct64 kmp_uint64;
/* Not sure what to use for KMP_[U]INT64_SPEC here */
#endif
#if KMP_ARCH_X86 && KMP_MSVC_COMPAT
#undef KMP_USE_X87CONTROL
#define KMP_USE_X87CONTROL 1
#endif
#if KMP_ARCH_X86_64
#define KMP_INTPTR 1
typedef __int64 kmp_intptr_t;
typedef unsigned __int64 kmp_uintptr_t;
#define KMP_INTPTR_SPEC "I64d"
#define KMP_UINTPTR_SPEC "I64u"
#endif
#endif /* KMP_OS_WINDOWS */
#if KMP_OS_UNIX
#define KMP_END_OF_LINE "\n"
typedef char kmp_int8;
typedef unsigned char kmp_uint8;
typedef short kmp_int16;
typedef unsigned short kmp_uint16;
typedef int kmp_int32;
typedef unsigned int kmp_uint32;
typedef long long kmp_int64;
typedef unsigned long long kmp_uint64;
#define KMP_INT32_SPEC "d"
#define KMP_UINT32_SPEC "u"
#define KMP_INT64_SPEC "lld"
#define KMP_UINT64_SPEC "llu"
#endif /* KMP_OS_UNIX */
#if KMP_ARCH_X86 || KMP_ARCH_ARM || KMP_ARCH_MIPS
#define KMP_SIZE_T_SPEC KMP_UINT32_SPEC
#elif KMP_ARCH_X86_64 || KMP_ARCH_PPC64 || KMP_ARCH_AARCH64 || KMP_ARCH_MIPS64
#define KMP_SIZE_T_SPEC KMP_UINT64_SPEC
#else
#error "Can't determine size_t printf format specifier."
#endif
#if KMP_ARCH_X86
#define KMP_SIZE_T_MAX (0xFFFFFFFF)
#else
#define KMP_SIZE_T_MAX (0xFFFFFFFFFFFFFFFF)
#endif
typedef size_t kmp_size_t;
typedef float kmp_real32;
typedef double kmp_real64;
#ifndef KMP_INTPTR
#define KMP_INTPTR 1
typedef long kmp_intptr_t;
typedef unsigned long kmp_uintptr_t;
#define KMP_INTPTR_SPEC "ld"
#define KMP_UINTPTR_SPEC "lu"
#endif
#ifdef BUILD_I8
typedef kmp_int64 kmp_int;
typedef kmp_uint64 kmp_uint;
#else
typedef kmp_int32 kmp_int;
typedef kmp_uint32 kmp_uint;
#endif /* BUILD_I8 */
#define KMP_INT_MAX ((kmp_int32)0x7FFFFFFF)
#define KMP_INT_MIN ((kmp_int32)0x80000000)
#ifdef __cplusplus
// macros to cast out qualifiers and to re-interpret types
#define CCAST(type, var) const_cast<type>(var)
#define RCAST(type, var) reinterpret_cast<type>(var)
//-------------------------------------------------------------------------
// template for debug print specifiers (d, u, lld, llu), and to obtain
// signed/unsigned flavors of a type
template <typename T> struct traits_t {};
// int
template <> struct traits_t<signed int> {
typedef signed int signed_t;
typedef unsigned int unsigned_t;
typedef double floating_t;
static char const *spec;
static const signed_t max_value = 0x7fffffff;
static const signed_t min_value = 0x80000000;
static const int type_size = sizeof(signed_t);
};
// unsigned int
template <> struct traits_t<unsigned int> {
typedef signed int signed_t;
typedef unsigned int unsigned_t;
typedef double floating_t;
static char const *spec;
static const unsigned_t max_value = 0xffffffff;
static const unsigned_t min_value = 0x00000000;
static const int type_size = sizeof(unsigned_t);
};
// long
template <> struct traits_t<signed long> {
typedef signed long signed_t;
typedef unsigned long unsigned_t;
typedef long double floating_t;
static char const *spec;
static const int type_size = sizeof(signed_t);
};
// long long
template <> struct traits_t<signed long long> {
typedef signed long long signed_t;
typedef unsigned long long unsigned_t;
typedef long double floating_t;
static char const *spec;
static const signed_t max_value = 0x7fffffffffffffffLL;
static const signed_t min_value = 0x8000000000000000LL;
static const int type_size = sizeof(signed_t);
};
// unsigned long long
template <> struct traits_t<unsigned long long> {
typedef signed long long signed_t;
typedef unsigned long long unsigned_t;
typedef long double floating_t;
static char const *spec;
static const unsigned_t max_value = 0xffffffffffffffffLL;
static const unsigned_t min_value = 0x0000000000000000LL;
static const int type_size = sizeof(unsigned_t);
};
//-------------------------------------------------------------------------
#else
#define CCAST(type, var) (type)(var)
#define RCAST(type, var) (type)(var)
#endif // __cplusplus
#define KMP_EXPORT extern /* export declaration in guide libraries */
#if __GNUC__ >= 4 && !defined(__MINGW32__)
#define __forceinline __inline
#endif
#if KMP_OS_WINDOWS
#include <windows.h>
static inline int KMP_GET_PAGE_SIZE(void) {
SYSTEM_INFO si;
GetSystemInfo(&si);
return si.dwPageSize;
}
#else
#define KMP_GET_PAGE_SIZE() getpagesize()
#endif
#define PAGE_ALIGNED(_addr) \
(!((size_t)_addr & (size_t)(KMP_GET_PAGE_SIZE() - 1)))
#define ALIGN_TO_PAGE(x) \
(void *)(((size_t)(x)) & ~((size_t)(KMP_GET_PAGE_SIZE() - 1)))
/* ---------- Support for cache alignment, padding, etc. ----------------*/
#ifdef __cplusplus
extern "C" {
#endif // __cplusplus
#define INTERNODE_CACHE_LINE 4096 /* for multi-node systems */
/* Define the default size of the cache line */
#ifndef CACHE_LINE
#define CACHE_LINE 128 /* cache line size in bytes */
#else
#if (CACHE_LINE < 64) && !defined(KMP_OS_DARWIN)
// 2006-02-13: This produces too many warnings on OS X*. Disable for now
#warning CACHE_LINE is too small.
#endif
#endif /* CACHE_LINE */
#define KMP_CACHE_PREFETCH(ADDR) /* nothing */
// Define attribute that indicates a function does not return
#if __cplusplus >= 201103L
#define KMP_NORETURN [[noreturn]]
#elif KMP_OS_WINDOWS
#define KMP_NORETURN __declspec(noreturn)
#else
#define KMP_NORETURN __attribute__((noreturn))
#endif
#if KMP_OS_WINDOWS && KMP_MSVC_COMPAT
#define KMP_ALIGN(bytes) __declspec(align(bytes))
#define KMP_THREAD_LOCAL __declspec(thread)
#define KMP_ALIAS /* Nothing */
#else
#define KMP_ALIGN(bytes) __attribute__((aligned(bytes)))
#define KMP_THREAD_LOCAL __thread
#define KMP_ALIAS(alias_of) __attribute__((alias(alias_of)))
#endif
#if KMP_HAVE_WEAK_ATTRIBUTE
#define KMP_WEAK_ATTRIBUTE __attribute__((weak))
#else
#define KMP_WEAK_ATTRIBUTE /* Nothing */
#endif
// Define KMP_VERSION_SYMBOL and KMP_EXPAND_NAME
#ifndef KMP_STR
#define KMP_STR(x) _KMP_STR(x)
#define _KMP_STR(x) #x
#endif
#ifdef KMP_USE_VERSION_SYMBOLS
// If using versioned symbols, KMP_EXPAND_NAME prepends
// __kmp_api_ to the real API name
#define KMP_EXPAND_NAME(api_name) _KMP_EXPAND_NAME(api_name)
#define _KMP_EXPAND_NAME(api_name) __kmp_api_##api_name
#define KMP_VERSION_SYMBOL(api_name, ver_num, ver_str) \
_KMP_VERSION_SYMBOL(api_name, ver_num, ver_str, "VERSION")
#define _KMP_VERSION_SYMBOL(api_name, ver_num, ver_str, default_ver) \
__typeof__(__kmp_api_##api_name) __kmp_api_##api_name##_##ver_num##_alias \
__attribute__((alias(KMP_STR(__kmp_api_##api_name)))); \
__asm__( \
".symver " KMP_STR(__kmp_api_##api_name##_##ver_num##_alias) "," KMP_STR( \
api_name) "@" ver_str "\n\t"); \
__asm__(".symver " KMP_STR(__kmp_api_##api_name) "," KMP_STR( \
api_name) "@@" default_ver "\n\t")
#else // KMP_USE_VERSION_SYMBOLS
#define KMP_EXPAND_NAME(api_name) api_name
#define KMP_VERSION_SYMBOL(api_name, ver_num, ver_str) /* Nothing */
#endif // KMP_USE_VERSION_SYMBOLS
/* Temporary note: if performance testing of this passes, we can remove
all references to KMP_DO_ALIGN and replace with KMP_ALIGN. */
#define KMP_DO_ALIGN(bytes) KMP_ALIGN(bytes)
#define KMP_ALIGN_CACHE KMP_ALIGN(CACHE_LINE)
#define KMP_ALIGN_CACHE_INTERNODE KMP_ALIGN(INTERNODE_CACHE_LINE)
/* General purpose fence types for memory operations */
enum kmp_mem_fence_type {
kmp_no_fence, /* No memory fence */
kmp_acquire_fence, /* Acquire (read) memory fence */
kmp_release_fence, /* Release (write) memory fence */
kmp_full_fence /* Full (read+write) memory fence */
};
// Synchronization primitives
#if KMP_ASM_INTRINS && KMP_OS_WINDOWS
#if KMP_MSVC_COMPAT && !KMP_COMPILER_CLANG
#pragma intrinsic(InterlockedExchangeAdd)
#pragma intrinsic(InterlockedCompareExchange)
#pragma intrinsic(InterlockedExchange)
#pragma intrinsic(InterlockedExchange64)
#endif
// Using InterlockedIncrement / InterlockedDecrement causes a library loading
// ordering problem, so we use InterlockedExchangeAdd instead.
#define KMP_TEST_THEN_INC32(p) InterlockedExchangeAdd((volatile long *)(p), 1)
#define KMP_TEST_THEN_INC_ACQ32(p) \
InterlockedExchangeAdd((volatile long *)(p), 1)
#define KMP_TEST_THEN_ADD4_32(p) InterlockedExchangeAdd((volatile long *)(p), 4)
#define KMP_TEST_THEN_ADD4_ACQ32(p) \
InterlockedExchangeAdd((volatile long *)(p), 4)
#define KMP_TEST_THEN_DEC32(p) InterlockedExchangeAdd((volatile long *)(p), -1)
#define KMP_TEST_THEN_DEC_ACQ32(p) \
InterlockedExchangeAdd((volatile long *)(p), -1)
#define KMP_TEST_THEN_ADD32(p, v) \
InterlockedExchangeAdd((volatile long *)(p), (v))
#define KMP_COMPARE_AND_STORE_RET32(p, cv, sv) \
InterlockedCompareExchange((volatile long *)(p), (long)(sv), (long)(cv))
#define KMP_XCHG_FIXED32(p, v) \
InterlockedExchange((volatile long *)(p), (long)(v))
#define KMP_XCHG_FIXED64(p, v) \
InterlockedExchange64((volatile kmp_int64 *)(p), (kmp_int64)(v))
inline kmp_real32 KMP_XCHG_REAL32(volatile kmp_real32 *p, kmp_real32 v) {
kmp_int32 tmp = InterlockedExchange((volatile long *)p, *(long *)&v);
return *(kmp_real32 *)&tmp;
}
// Routines that we still need to implement in assembly.
extern kmp_int8 __kmp_test_then_add8(volatile kmp_int8 *p, kmp_int8 v);
extern kmp_int8 __kmp_test_then_or8(volatile kmp_int8 *p, kmp_int8 v);
extern kmp_int8 __kmp_test_then_and8(volatile kmp_int8 *p, kmp_int8 v);
extern kmp_int32 __kmp_test_then_add32(volatile kmp_int32 *p, kmp_int32 v);
extern kmp_uint32 __kmp_test_then_or32(volatile kmp_uint32 *p, kmp_uint32 v);
extern kmp_uint32 __kmp_test_then_and32(volatile kmp_uint32 *p, kmp_uint32 v);
extern kmp_int64 __kmp_test_then_add64(volatile kmp_int64 *p, kmp_int64 v);
extern kmp_uint64 __kmp_test_then_or64(volatile kmp_uint64 *p, kmp_uint64 v);
extern kmp_uint64 __kmp_test_then_and64(volatile kmp_uint64 *p, kmp_uint64 v);
extern kmp_int8 __kmp_compare_and_store8(volatile kmp_int8 *p, kmp_int8 cv,
kmp_int8 sv);
extern kmp_int16 __kmp_compare_and_store16(volatile kmp_int16 *p, kmp_int16 cv,
kmp_int16 sv);
extern kmp_int32 __kmp_compare_and_store32(volatile kmp_int32 *p, kmp_int32 cv,
kmp_int32 sv);
extern kmp_int32 __kmp_compare_and_store64(volatile kmp_int64 *p, kmp_int64 cv,
kmp_int64 sv);
extern kmp_int8 __kmp_compare_and_store_ret8(volatile kmp_int8 *p, kmp_int8 cv,
kmp_int8 sv);
extern kmp_int16 __kmp_compare_and_store_ret16(volatile kmp_int16 *p,
kmp_int16 cv, kmp_int16 sv);
extern kmp_int32 __kmp_compare_and_store_ret32(volatile kmp_int32 *p,
kmp_int32 cv, kmp_int32 sv);
extern kmp_int64 __kmp_compare_and_store_ret64(volatile kmp_int64 *p,
kmp_int64 cv, kmp_int64 sv);
extern kmp_int8 __kmp_xchg_fixed8(volatile kmp_int8 *p, kmp_int8 v);
extern kmp_int16 __kmp_xchg_fixed16(volatile kmp_int16 *p, kmp_int16 v);
extern kmp_int32 __kmp_xchg_fixed32(volatile kmp_int32 *p, kmp_int32 v);
extern kmp_int64 __kmp_xchg_fixed64(volatile kmp_int64 *p, kmp_int64 v);
extern kmp_real32 __kmp_xchg_real32(volatile kmp_real32 *p, kmp_real32 v);
extern kmp_real64 __kmp_xchg_real64(volatile kmp_real64 *p, kmp_real64 v);
//#define KMP_TEST_THEN_INC32(p) __kmp_test_then_add32((p), 1)
//#define KMP_TEST_THEN_INC_ACQ32(p) __kmp_test_then_add32((p), 1)
#define KMP_TEST_THEN_INC64(p) __kmp_test_then_add64((p), 1LL)
#define KMP_TEST_THEN_INC_ACQ64(p) __kmp_test_then_add64((p), 1LL)
//#define KMP_TEST_THEN_ADD4_32(p) __kmp_test_then_add32((p), 4)
//#define KMP_TEST_THEN_ADD4_ACQ32(p) __kmp_test_then_add32((p), 4)
#define KMP_TEST_THEN_ADD4_64(p) __kmp_test_then_add64((p), 4LL)
#define KMP_TEST_THEN_ADD4_ACQ64(p) __kmp_test_then_add64((p), 4LL)
//#define KMP_TEST_THEN_DEC32(p) __kmp_test_then_add32((p), -1)
//#define KMP_TEST_THEN_DEC_ACQ32(p) __kmp_test_then_add32((p), -1)
#define KMP_TEST_THEN_DEC64(p) __kmp_test_then_add64((p), -1LL)
#define KMP_TEST_THEN_DEC_ACQ64(p) __kmp_test_then_add64((p), -1LL)
//#define KMP_TEST_THEN_ADD32(p, v) __kmp_test_then_add32((p), (v))
#define KMP_TEST_THEN_ADD8(p, v) __kmp_test_then_add8((p), (v))
#define KMP_TEST_THEN_ADD64(p, v) __kmp_test_then_add64((p), (v))
#define KMP_TEST_THEN_OR8(p, v) __kmp_test_then_or8((p), (v))
#define KMP_TEST_THEN_AND8(p, v) __kmp_test_then_and8((p), (v))
#define KMP_TEST_THEN_OR32(p, v) __kmp_test_then_or32((p), (v))
#define KMP_TEST_THEN_AND32(p, v) __kmp_test_then_and32((p), (v))
#define KMP_TEST_THEN_OR64(p, v) __kmp_test_then_or64((p), (v))
#define KMP_TEST_THEN_AND64(p, v) __kmp_test_then_and64((p), (v))
#define KMP_COMPARE_AND_STORE_ACQ8(p, cv, sv) \
__kmp_compare_and_store8((p), (cv), (sv))
#define KMP_COMPARE_AND_STORE_REL8(p, cv, sv) \
__kmp_compare_and_store8((p), (cv), (sv))
#define KMP_COMPARE_AND_STORE_ACQ16(p, cv, sv) \
__kmp_compare_and_store16((p), (cv), (sv))
#define KMP_COMPARE_AND_STORE_REL16(p, cv, sv) \
__kmp_compare_and_store16((p), (cv), (sv))
#define KMP_COMPARE_AND_STORE_ACQ32(p, cv, sv) \
__kmp_compare_and_store32((volatile kmp_int32 *)(p), (kmp_int32)(cv), \
(kmp_int32)(sv))
#define KMP_COMPARE_AND_STORE_REL32(p, cv, sv) \
__kmp_compare_and_store32((volatile kmp_int32 *)(p), (kmp_int32)(cv), \
(kmp_int32)(sv))
#define KMP_COMPARE_AND_STORE_ACQ64(p, cv, sv) \
__kmp_compare_and_store64((volatile kmp_int64 *)(p), (kmp_int64)(cv), \
(kmp_int64)(sv))
#define KMP_COMPARE_AND_STORE_REL64(p, cv, sv) \
__kmp_compare_and_store64((volatile kmp_int64 *)(p), (kmp_int64)(cv), \
(kmp_int64)(sv))
#if KMP_ARCH_X86
#define KMP_COMPARE_AND_STORE_PTR(p, cv, sv) \
__kmp_compare_and_store32((volatile kmp_int32 *)(p), (kmp_int32)(cv), \
(kmp_int32)(sv))
#else /* 64 bit pointers */
#define KMP_COMPARE_AND_STORE_PTR(p, cv, sv) \
__kmp_compare_and_store64((volatile kmp_int64 *)(p), (kmp_int64)(cv), \
(kmp_int64)(sv))
#endif /* KMP_ARCH_X86 */
#define KMP_COMPARE_AND_STORE_RET8(p, cv, sv) \
__kmp_compare_and_store_ret8((p), (cv), (sv))
#define KMP_COMPARE_AND_STORE_RET16(p, cv, sv) \
__kmp_compare_and_store_ret16((p), (cv), (sv))
#define KMP_COMPARE_AND_STORE_RET64(p, cv, sv) \
__kmp_compare_and_store_ret64((volatile kmp_int64 *)(p), (kmp_int64)(cv), \
(kmp_int64)(sv))
#define KMP_XCHG_FIXED8(p, v) \
__kmp_xchg_fixed8((volatile kmp_int8 *)(p), (kmp_int8)(v));
#define KMP_XCHG_FIXED16(p, v) __kmp_xchg_fixed16((p), (v));
//#define KMP_XCHG_FIXED32(p, v) __kmp_xchg_fixed32((p), (v));
//#define KMP_XCHG_FIXED64(p, v) __kmp_xchg_fixed64((p), (v));
//#define KMP_XCHG_REAL32(p, v) __kmp_xchg_real32((p), (v));
#define KMP_XCHG_REAL64(p, v) __kmp_xchg_real64((p), (v));
#elif (KMP_ASM_INTRINS && KMP_OS_UNIX) || !(KMP_ARCH_X86 || KMP_ARCH_X86_64)
/* cast p to correct type so that proper intrinsic will be used */
#define KMP_TEST_THEN_INC32(p) \
__sync_fetch_and_add((volatile kmp_int32 *)(p), 1)
#define KMP_TEST_THEN_INC_ACQ32(p) \
__sync_fetch_and_add((volatile kmp_int32 *)(p), 1)
#define KMP_TEST_THEN_INC64(p) \
__sync_fetch_and_add((volatile kmp_int64 *)(p), 1LL)
#define KMP_TEST_THEN_INC_ACQ64(p) \
__sync_fetch_and_add((volatile kmp_int64 *)(p), 1LL)
#define KMP_TEST_THEN_ADD4_32(p) \
__sync_fetch_and_add((volatile kmp_int32 *)(p), 4)
#define KMP_TEST_THEN_ADD4_ACQ32(p) \
__sync_fetch_and_add((volatile kmp_int32 *)(p), 4)
#define KMP_TEST_THEN_ADD4_64(p) \
__sync_fetch_and_add((volatile kmp_int64 *)(p), 4LL)
#define KMP_TEST_THEN_ADD4_ACQ64(p) \
__sync_fetch_and_add((volatile kmp_int64 *)(p), 4LL)
#define KMP_TEST_THEN_DEC32(p) \
__sync_fetch_and_sub((volatile kmp_int32 *)(p), 1)
#define KMP_TEST_THEN_DEC_ACQ32(p) \
__sync_fetch_and_sub((volatile kmp_int32 *)(p), 1)
#define KMP_TEST_THEN_DEC64(p) \
__sync_fetch_and_sub((volatile kmp_int64 *)(p), 1LL)
#define KMP_TEST_THEN_DEC_ACQ64(p) \
__sync_fetch_and_sub((volatile kmp_int64 *)(p), 1LL)
#define KMP_TEST_THEN_ADD8(p, v) \
__sync_fetch_and_add((volatile kmp_int8 *)(p), (kmp_int8)(v))
#define KMP_TEST_THEN_ADD32(p, v) \
__sync_fetch_and_add((volatile kmp_int32 *)(p), (kmp_int32)(v))
#define KMP_TEST_THEN_ADD64(p, v) \
__sync_fetch_and_add((volatile kmp_int64 *)(p), (kmp_int64)(v))
#define KMP_TEST_THEN_OR8(p, v) \
__sync_fetch_and_or((volatile kmp_int8 *)(p), (kmp_int8)(v))
#define KMP_TEST_THEN_AND8(p, v) \
__sync_fetch_and_and((volatile kmp_int8 *)(p), (kmp_int8)(v))
#define KMP_TEST_THEN_OR32(p, v) \
__sync_fetch_and_or((volatile kmp_uint32 *)(p), (kmp_uint32)(v))
#define KMP_TEST_THEN_AND32(p, v) \
__sync_fetch_and_and((volatile kmp_uint32 *)(p), (kmp_uint32)(v))
#define KMP_TEST_THEN_OR64(p, v) \
__sync_fetch_and_or((volatile kmp_uint64 *)(p), (kmp_uint64)(v))
#define KMP_TEST_THEN_AND64(p, v) \
__sync_fetch_and_and((volatile kmp_uint64 *)(p), (kmp_uint64)(v))
#define KMP_COMPARE_AND_STORE_ACQ8(p, cv, sv) \
__sync_bool_compare_and_swap((volatile kmp_uint8 *)(p), (kmp_uint8)(cv), \
(kmp_uint8)(sv))
#define KMP_COMPARE_AND_STORE_REL8(p, cv, sv) \
__sync_bool_compare_and_swap((volatile kmp_uint8 *)(p), (kmp_uint8)(cv), \
(kmp_uint8)(sv))
#define KMP_COMPARE_AND_STORE_ACQ16(p, cv, sv) \
__sync_bool_compare_and_swap((volatile kmp_uint16 *)(p), (kmp_uint16)(cv), \
(kmp_uint16)(sv))
#define KMP_COMPARE_AND_STORE_REL16(p, cv, sv) \
__sync_bool_compare_and_swap((volatile kmp_uint16 *)(p), (kmp_uint16)(cv), \
(kmp_uint16)(sv))
#define KMP_COMPARE_AND_STORE_ACQ32(p, cv, sv) \
__sync_bool_compare_and_swap((volatile kmp_uint32 *)(p), (kmp_uint32)(cv), \
(kmp_uint32)(sv))
#define KMP_COMPARE_AND_STORE_REL32(p, cv, sv) \
__sync_bool_compare_and_swap((volatile kmp_uint32 *)(p), (kmp_uint32)(cv), \
(kmp_uint32)(sv))
#define KMP_COMPARE_AND_STORE_ACQ64(p, cv, sv) \
__sync_bool_compare_and_swap((volatile kmp_uint64 *)(p), (kmp_uint64)(cv), \
(kmp_uint64)(sv))
#define KMP_COMPARE_AND_STORE_REL64(p, cv, sv) \
__sync_bool_compare_and_swap((volatile kmp_uint64 *)(p), (kmp_uint64)(cv), \
(kmp_uint64)(sv))
#define KMP_COMPARE_AND_STORE_PTR(p, cv, sv) \
__sync_bool_compare_and_swap((void *volatile *)(p), (void *)(cv), \
(void *)(sv))
#define KMP_COMPARE_AND_STORE_RET8(p, cv, sv) \
__sync_val_compare_and_swap((volatile kmp_uint8 *)(p), (kmp_uint8)(cv), \
(kmp_uint8)(sv))
#define KMP_COMPARE_AND_STORE_RET16(p, cv, sv) \
__sync_val_compare_and_swap((volatile kmp_uint16 *)(p), (kmp_uint16)(cv), \
(kmp_uint16)(sv))
#define KMP_COMPARE_AND_STORE_RET32(p, cv, sv) \
__sync_val_compare_and_swap((volatile kmp_uint32 *)(p), (kmp_uint32)(cv), \
(kmp_uint32)(sv))
#define KMP_COMPARE_AND_STORE_RET64(p, cv, sv) \
__sync_val_compare_and_swap((volatile kmp_uint64 *)(p), (kmp_uint64)(cv), \
(kmp_uint64)(sv))
#define KMP_XCHG_FIXED8(p, v) \
__sync_lock_test_and_set((volatile kmp_uint8 *)(p), (kmp_uint8)(v))
#define KMP_XCHG_FIXED16(p, v) \
__sync_lock_test_and_set((volatile kmp_uint16 *)(p), (kmp_uint16)(v))
#define KMP_XCHG_FIXED32(p, v) \
__sync_lock_test_and_set((volatile kmp_uint32 *)(p), (kmp_uint32)(v))
#define KMP_XCHG_FIXED64(p, v) \
__sync_lock_test_and_set((volatile kmp_uint64 *)(p), (kmp_uint64)(v))
inline kmp_real32 KMP_XCHG_REAL32(volatile kmp_real32 *p, kmp_real32 v) {
kmp_int32 tmp =
__sync_lock_test_and_set((volatile kmp_uint32 *)(p), *(kmp_uint32 *)&v);
return *(kmp_real32 *)&tmp;
}
inline kmp_real64 KMP_XCHG_REAL64(volatile kmp_real64 *p, kmp_real64 v) {
kmp_int64 tmp =
__sync_lock_test_and_set((volatile kmp_uint64 *)(p), *(kmp_uint64 *)&v);
return *(kmp_real64 *)&tmp;
}
#else
extern kmp_int8 __kmp_test_then_add8(volatile kmp_int8 *p, kmp_int8 v);
extern kmp_int8 __kmp_test_then_or8(volatile kmp_int8 *p, kmp_int8 v);
extern kmp_int8 __kmp_test_then_and8(volatile kmp_int8 *p, kmp_int8 v);
extern kmp_int32 __kmp_test_then_add32(volatile kmp_int32 *p, kmp_int32 v);
extern kmp_uint32 __kmp_test_then_or32(volatile kmp_uint32 *p, kmp_uint32 v);
extern kmp_uint32 __kmp_test_then_and32(volatile kmp_uint32 *p, kmp_uint32 v);
extern kmp_int64 __kmp_test_then_add64(volatile kmp_int64 *p, kmp_int64 v);
extern kmp_uint64 __kmp_test_then_or64(volatile kmp_uint64 *p, kmp_uint64 v);
extern kmp_uint64 __kmp_test_then_and64(volatile kmp_uint64 *p, kmp_uint64 v);
extern kmp_int8 __kmp_compare_and_store8(volatile kmp_int8 *p, kmp_int8 cv,
kmp_int8 sv);
extern kmp_int16 __kmp_compare_and_store16(volatile kmp_int16 *p, kmp_int16 cv,
kmp_int16 sv);
extern kmp_int32 __kmp_compare_and_store32(volatile kmp_int32 *p, kmp_int32 cv,
kmp_int32 sv);
extern kmp_int32 __kmp_compare_and_store64(volatile kmp_int64 *p, kmp_int64 cv,
kmp_int64 sv);
extern kmp_int8 __kmp_compare_and_store_ret8(volatile kmp_int8 *p, kmp_int8 cv,
kmp_int8 sv);
extern kmp_int16 __kmp_compare_and_store_ret16(volatile kmp_int16 *p,
kmp_int16 cv, kmp_int16 sv);
extern kmp_int32 __kmp_compare_and_store_ret32(volatile kmp_int32 *p,
kmp_int32 cv, kmp_int32 sv);
extern kmp_int64 __kmp_compare_and_store_ret64(volatile kmp_int64 *p,
kmp_int64 cv, kmp_int64 sv);
extern kmp_int8 __kmp_xchg_fixed8(volatile kmp_int8 *p, kmp_int8 v);
extern kmp_int16 __kmp_xchg_fixed16(volatile kmp_int16 *p, kmp_int16 v);
extern kmp_int32 __kmp_xchg_fixed32(volatile kmp_int32 *p, kmp_int32 v);
extern kmp_int64 __kmp_xchg_fixed64(volatile kmp_int64 *p, kmp_int64 v);
extern kmp_real32 __kmp_xchg_real32(volatile kmp_real32 *p, kmp_real32 v);
extern kmp_real64 __kmp_xchg_real64(volatile kmp_real64 *p, kmp_real64 v);
#define KMP_TEST_THEN_INC32(p) \
__kmp_test_then_add32((volatile kmp_int32 *)(p), 1)
#define KMP_TEST_THEN_INC_ACQ32(p) \
__kmp_test_then_add32((volatile kmp_int32 *)(p), 1)
#define KMP_TEST_THEN_INC64(p) \
__kmp_test_then_add64((volatile kmp_int64 *)(p), 1LL)
#define KMP_TEST_THEN_INC_ACQ64(p) \
__kmp_test_then_add64((volatile kmp_int64 *)(p), 1LL)
#define KMP_TEST_THEN_ADD4_32(p) \
__kmp_test_then_add32((volatile kmp_int32 *)(p), 4)
#define KMP_TEST_THEN_ADD4_ACQ32(p) \
__kmp_test_then_add32((volatile kmp_int32 *)(p), 4)
#define KMP_TEST_THEN_ADD4_64(p) \
__kmp_test_then_add64((volatile kmp_int64 *)(p), 4LL)
#define KMP_TEST_THEN_ADD4_ACQ64(p) \
__kmp_test_then_add64((volatile kmp_int64 *)(p), 4LL)
#define KMP_TEST_THEN_DEC32(p) \
__kmp_test_then_add32((volatile kmp_int32 *)(p), -1)
#define KMP_TEST_THEN_DEC_ACQ32(p) \
__kmp_test_then_add32((volatile kmp_int32 *)(p), -1)
#define KMP_TEST_THEN_DEC64(p) \
__kmp_test_then_add64((volatile kmp_int64 *)(p), -1LL)
#define KMP_TEST_THEN_DEC_ACQ64(p) \
__kmp_test_then_add64((volatile kmp_int64 *)(p), -1LL)
#define KMP_TEST_THEN_ADD8(p, v) \
__kmp_test_then_add8((volatile kmp_int8 *)(p), (kmp_int8)(v))
#define KMP_TEST_THEN_ADD32(p, v) \
__kmp_test_then_add32((volatile kmp_int32 *)(p), (kmp_int32)(v))
#define KMP_TEST_THEN_ADD64(p, v) \
__kmp_test_then_add64((volatile kmp_int64 *)(p), (kmp_int64)(v))
#define KMP_TEST_THEN_OR8(p, v) \
__kmp_test_then_or8((volatile kmp_int8 *)(p), (kmp_int8)(v))
#define KMP_TEST_THEN_AND8(p, v) \
__kmp_test_then_and8((volatile kmp_int8 *)(p), (kmp_int8)(v))
#define KMP_TEST_THEN_OR32(p, v) \
__kmp_test_then_or32((volatile kmp_uint32 *)(p), (kmp_uint32)(v))
#define KMP_TEST_THEN_AND32(p, v) \
__kmp_test_then_and32((volatile kmp_uint32 *)(p), (kmp_uint32)(v))
#define KMP_TEST_THEN_OR64(p, v) \
__kmp_test_then_or64((volatile kmp_uint64 *)(p), (kmp_uint64)(v))
#define KMP_TEST_THEN_AND64(p, v) \
__kmp_test_then_and64((volatile kmp_uint64 *)(p), (kmp_uint64)(v))
#define KMP_COMPARE_AND_STORE_ACQ8(p, cv, sv) \
__kmp_compare_and_store8((volatile kmp_int8 *)(p), (kmp_int8)(cv), \
(kmp_int8)(sv))
#define KMP_COMPARE_AND_STORE_REL8(p, cv, sv) \
__kmp_compare_and_store8((volatile kmp_int8 *)(p), (kmp_int8)(cv), \
(kmp_int8)(sv))
#define KMP_COMPARE_AND_STORE_ACQ16(p, cv, sv) \
__kmp_compare_and_store16((volatile kmp_int16 *)(p), (kmp_int16)(cv), \
(kmp_int16)(sv))
#define KMP_COMPARE_AND_STORE_REL16(p, cv, sv) \
__kmp_compare_and_store16((volatile kmp_int16 *)(p), (kmp_int16)(cv), \
(kmp_int16)(sv))
#define KMP_COMPARE_AND_STORE_ACQ32(p, cv, sv) \
__kmp_compare_and_store32((volatile kmp_int32 *)(p), (kmp_int32)(cv), \
(kmp_int32)(sv))
#define KMP_COMPARE_AND_STORE_REL32(p, cv, sv) \
__kmp_compare_and_store32((volatile kmp_int32 *)(p), (kmp_int32)(cv), \
(kmp_int32)(sv))
#define KMP_COMPARE_AND_STORE_ACQ64(p, cv, sv) \
__kmp_compare_and_store64((volatile kmp_int64 *)(p), (kmp_int64)(cv), \
(kmp_int64)(sv))
#define KMP_COMPARE_AND_STORE_REL64(p, cv, sv) \
__kmp_compare_and_store64((volatile kmp_int64 *)(p), (kmp_int64)(cv), \
(kmp_int64)(sv))
#if KMP_ARCH_X86
#define KMP_COMPARE_AND_STORE_PTR(p, cv, sv) \
__kmp_compare_and_store32((volatile kmp_int32 *)(p), (kmp_int32)(cv), \
(kmp_int32)(sv))
#else /* 64 bit pointers */
#define KMP_COMPARE_AND_STORE_PTR(p, cv, sv) \
__kmp_compare_and_store64((volatile kmp_int64 *)(p), (kmp_int64)(cv), \
(kmp_int64)(sv))
#endif /* KMP_ARCH_X86 */
#define KMP_COMPARE_AND_STORE_RET8(p, cv, sv) \
__kmp_compare_and_store_ret8((p), (cv), (sv))
#define KMP_COMPARE_AND_STORE_RET16(p, cv, sv) \
__kmp_compare_and_store_ret16((p), (cv), (sv))
#define KMP_COMPARE_AND_STORE_RET32(p, cv, sv) \
__kmp_compare_and_store_ret32((volatile kmp_int32 *)(p), (kmp_int32)(cv), \
(kmp_int32)(sv))
#define KMP_COMPARE_AND_STORE_RET64(p, cv, sv) \
__kmp_compare_and_store_ret64((volatile kmp_int64 *)(p), (kmp_int64)(cv), \
(kmp_int64)(sv))
#define KMP_XCHG_FIXED8(p, v) \
__kmp_xchg_fixed8((volatile kmp_int8 *)(p), (kmp_int8)(v));
#define KMP_XCHG_FIXED16(p, v) __kmp_xchg_fixed16((p), (v));
#define KMP_XCHG_FIXED32(p, v) __kmp_xchg_fixed32((p), (v));
#define KMP_XCHG_FIXED64(p, v) __kmp_xchg_fixed64((p), (v));
#define KMP_XCHG_REAL32(p, v) __kmp_xchg_real32((p), (v));
#define KMP_XCHG_REAL64(p, v) __kmp_xchg_real64((p), (v));
#endif /* KMP_ASM_INTRINS */
/* ------------- relaxed consistency memory model stuff ------------------ */
#if KMP_OS_WINDOWS
#ifdef __ABSOFT_WIN
#define KMP_MB() asm("nop")
#define KMP_IMB() asm("nop")
#else
#define KMP_MB() /* _asm{ nop } */
#define KMP_IMB() /* _asm{ nop } */
#endif
#endif /* KMP_OS_WINDOWS */
#if KMP_ARCH_PPC64 || KMP_ARCH_ARM || KMP_ARCH_AARCH64 || KMP_ARCH_MIPS || \
KMP_ARCH_MIPS64
#define KMP_MB() __sync_synchronize()
#endif
#ifndef KMP_MB
#define KMP_MB() /* nothing to do */
#endif
#ifndef KMP_IMB
#define KMP_IMB() /* nothing to do */
#endif
#ifndef KMP_ST_REL32
#define KMP_ST_REL32(A, D) (*(A) = (D))
#endif
#ifndef KMP_ST_REL64
#define KMP_ST_REL64(A, D) (*(A) = (D))
#endif
#ifndef KMP_LD_ACQ32
#define KMP_LD_ACQ32(A) (*(A))
#endif
#ifndef KMP_LD_ACQ64
#define KMP_LD_ACQ64(A) (*(A))
#endif
/* ------------------------------------------------------------------------ */
// FIXME - maybe this should be
//
// #define TCR_4(a) (*(volatile kmp_int32 *)(&a))
// #define TCW_4(a,b) (a) = (*(volatile kmp_int32 *)&(b))
//
// #define TCR_8(a) (*(volatile kmp_int64 *)(a))
// #define TCW_8(a,b) (a) = (*(volatile kmp_int64 *)(&b))
//
// I'm fairly certain this is the correct thing to do, but I'm afraid
// of performance regressions.
#define TCR_1(a) (a)
#define TCW_1(a, b) (a) = (b)
#define TCR_4(a) (a)
#define TCW_4(a, b) (a) = (b)
#define TCI_4(a) (++(a))
#define TCD_4(a) (--(a))
#define TCR_8(a) (a)
#define TCW_8(a, b) (a) = (b)
#define TCI_8(a) (++(a))
#define TCD_8(a) (--(a))
#define TCR_SYNC_4(a) (a)
#define TCW_SYNC_4(a, b) (a) = (b)
#define TCX_SYNC_4(a, b, c) \
KMP_COMPARE_AND_STORE_REL32((volatile kmp_int32 *)(volatile void *)&(a), \
(kmp_int32)(b), (kmp_int32)(c))
#define TCR_SYNC_8(a) (a)
#define TCW_SYNC_8(a, b) (a) = (b)
#define TCX_SYNC_8(a, b, c) \
KMP_COMPARE_AND_STORE_REL64((volatile kmp_int64 *)(volatile void *)&(a), \
(kmp_int64)(b), (kmp_int64)(c))
#if KMP_ARCH_X86 || KMP_ARCH_MIPS
// What about ARM?
#define TCR_PTR(a) ((void *)TCR_4(a))
#define TCW_PTR(a, b) TCW_4((a), (b))
#define TCR_SYNC_PTR(a) ((void *)TCR_SYNC_4(a))
#define TCW_SYNC_PTR(a, b) TCW_SYNC_4((a), (b))
#define TCX_SYNC_PTR(a, b, c) ((void *)TCX_SYNC_4((a), (b), (c)))
#else /* 64 bit pointers */
#define TCR_PTR(a) ((void *)TCR_8(a))
#define TCW_PTR(a, b) TCW_8((a), (b))
#define TCR_SYNC_PTR(a) ((void *)TCR_SYNC_8(a))
#define TCW_SYNC_PTR(a, b) TCW_SYNC_8((a), (b))
#define TCX_SYNC_PTR(a, b, c) ((void *)TCX_SYNC_8((a), (b), (c)))
#endif /* KMP_ARCH_X86 */
/* If these FTN_{TRUE,FALSE} values change, may need to change several places
where they are used to check that language is Fortran, not C. */
#ifndef FTN_TRUE
#define FTN_TRUE TRUE
#endif
#ifndef FTN_FALSE
#define FTN_FALSE FALSE
#endif
typedef void (*microtask_t)(int *gtid, int *npr, ...);
#ifdef USE_VOLATILE_CAST
#define VOLATILE_CAST(x) (volatile x)
#else
#define VOLATILE_CAST(x) (x)
#endif
#define KMP_WAIT_YIELD __kmp_wait_yield_4
#define KMP_WAIT_YIELD_PTR __kmp_wait_yield_4_ptr
#define KMP_EQ __kmp_eq_4
#define KMP_NEQ __kmp_neq_4
#define KMP_LT __kmp_lt_4
#define KMP_GE __kmp_ge_4
#define KMP_LE __kmp_le_4
/* Workaround for Intel(R) 64 code gen bug when taking address of static array
* (Intel(R) 64 Tracker #138) */
#if (KMP_ARCH_X86_64 || KMP_ARCH_PPC64) && KMP_OS_LINUX
#define STATIC_EFI2_WORKAROUND
#else
#define STATIC_EFI2_WORKAROUND static
#endif
// Support of BGET usage
#ifndef KMP_USE_BGET
#define KMP_USE_BGET 1
#endif
// Switches for OSS builds
#ifndef USE_CMPXCHG_FIX
#define USE_CMPXCHG_FIX 1
#endif
// Enable dynamic user lock
#if OMP_45_ENABLED
#define KMP_USE_DYNAMIC_LOCK 1
#endif
// Enable Intel(R) Transactional Synchronization Extensions (Intel(R) TSX) if
// dynamic user lock is turned on
#if KMP_USE_DYNAMIC_LOCK
// Visual studio can't handle the asm sections in this code
#define KMP_USE_TSX (KMP_ARCH_X86 || KMP_ARCH_X86_64) && !KMP_COMPILER_MSVC
#ifdef KMP_USE_ADAPTIVE_LOCKS
#undef KMP_USE_ADAPTIVE_LOCKS
#endif
#define KMP_USE_ADAPTIVE_LOCKS KMP_USE_TSX
#endif
// Enable tick time conversion of ticks to seconds
#if KMP_STATS_ENABLED
#define KMP_HAVE_TICK_TIME \
(KMP_OS_LINUX && (KMP_MIC || KMP_ARCH_X86 || KMP_ARCH_X86_64))
#endif
// Warning levels
enum kmp_warnings_level {
kmp_warnings_off = 0, /* No warnings */
kmp_warnings_low, /* Minimal warnings (default) */
kmp_warnings_explicit = 6, /* Explicitly set to ON - more warnings */
kmp_warnings_verbose /* reserved */
};
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
// Macros for C++11 atomic functions
#define KMP_ATOMIC_LD(p, order) (p)->load(std::memory_order_##order)
#define KMP_ATOMIC_OP(op, p, v, order) (p)->op(v, std::memory_order_##order)
// For non-default load/store
#define KMP_ATOMIC_LD_ACQ(p) KMP_ATOMIC_LD(p, acquire)
#define KMP_ATOMIC_LD_RLX(p) KMP_ATOMIC_LD(p, relaxed)
#define KMP_ATOMIC_ST_REL(p, v) KMP_ATOMIC_OP(store, p, v, release)
#define KMP_ATOMIC_ST_RLX(p, v) KMP_ATOMIC_OP(store, p, v, relaxed)
// For non-default fetch_<op>
#define KMP_ATOMIC_ADD(p, v) KMP_ATOMIC_OP(fetch_add, p, v, acq_rel)
#define KMP_ATOMIC_SUB(p, v) KMP_ATOMIC_OP(fetch_sub, p, v, acq_rel)
#define KMP_ATOMIC_AND(p, v) KMP_ATOMIC_OP(fetch_and, p, v, acq_rel)
#define KMP_ATOMIC_OR(p, v) KMP_ATOMIC_OP(fetch_or, p, v, acq_rel)
#define KMP_ATOMIC_INC(p) KMP_ATOMIC_OP(fetch_add, p, 1, acq_rel)
#define KMP_ATOMIC_DEC(p) KMP_ATOMIC_OP(fetch_sub, p, 1, acq_rel)
#define KMP_ATOMIC_ADD_RLX(p, v) KMP_ATOMIC_OP(fetch_add, p, v, relaxed)
#define KMP_ATOMIC_INC_RLX(p) KMP_ATOMIC_OP(fetch_add, p, 1, relaxed)
// Callers of the following functions cannot see the side effect on "expected".
template <typename T>
bool __kmp_atomic_compare_store(std::atomic<T> *p, T expected, T desired) {
return p->compare_exchange_strong(
expected, desired, std::memory_order_acq_rel, std::memory_order_relaxed);
}
template <typename T>
bool __kmp_atomic_compare_store_acq(std::atomic<T> *p, T expected, T desired) {
return p->compare_exchange_strong(
expected, desired, std::memory_order_acquire, std::memory_order_relaxed);
}
template <typename T>
bool __kmp_atomic_compare_store_rel(std::atomic<T> *p, T expected, T desired) {
return p->compare_exchange_strong(
expected, desired, std::memory_order_release, std::memory_order_relaxed);
}
#endif /* KMP_OS_H */
// Safe C API
#include "kmp_safe_c_api.h"

runtime/src/kmp_platform.h (new file, 207 lines)
/*
* kmp_platform.h -- header for determining operating system and architecture
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_PLATFORM_H
#define KMP_PLATFORM_H
/* ---------------------- Operating system recognition ------------------- */
#define KMP_OS_LINUX 0
#define KMP_OS_DRAGONFLY 0
#define KMP_OS_FREEBSD 0
#define KMP_OS_NETBSD 0
#define KMP_OS_OPENBSD 0
#define KMP_OS_DARWIN 0
#define KMP_OS_WINDOWS 0
#define KMP_OS_CNK 0
#define KMP_OS_HURD 0
#define KMP_OS_UNIX 0 /* disjunction of KMP_OS_LINUX, KMP_OS_DARWIN etc. */
#ifdef _WIN32
#undef KMP_OS_WINDOWS
#define KMP_OS_WINDOWS 1
#endif
#if (defined __APPLE__ && defined __MACH__)
#undef KMP_OS_DARWIN
#define KMP_OS_DARWIN 1
#endif
// in some ppc64 linux installations, only the second condition is met
#if (defined __linux)
#undef KMP_OS_LINUX
#define KMP_OS_LINUX 1
#elif (defined __linux__)
#undef KMP_OS_LINUX
#define KMP_OS_LINUX 1
#else
#endif
#if (defined __DragonFly__)
#undef KMP_OS_DRAGONFLY
#define KMP_OS_DRAGONFLY 1
#endif
#if (defined __FreeBSD__)
#undef KMP_OS_FREEBSD
#define KMP_OS_FREEBSD 1
#endif
#if (defined __NetBSD__)
#undef KMP_OS_NETBSD
#define KMP_OS_NETBSD 1
#endif
#if (defined __OpenBSD__)
#undef KMP_OS_OPENBSD
#define KMP_OS_OPENBSD 1
#endif
#if (defined __bgq__)
#undef KMP_OS_CNK
#define KMP_OS_CNK 1
#endif
#if (defined __GNU__)
#undef KMP_OS_HURD
#define KMP_OS_HURD 1
#endif
#if (1 != \
KMP_OS_LINUX + KMP_OS_DRAGONFLY + KMP_OS_FREEBSD + KMP_OS_NETBSD + \
KMP_OS_OPENBSD + KMP_OS_DARWIN + KMP_OS_WINDOWS + KMP_OS_HURD)
#error Unknown OS
#endif
#if KMP_OS_LINUX || KMP_OS_DRAGONFLY || KMP_OS_FREEBSD || KMP_OS_NETBSD || \
KMP_OS_OPENBSD || KMP_OS_DARWIN || KMP_OS_HURD
#undef KMP_OS_UNIX
#define KMP_OS_UNIX 1
#endif
/* ---------------------- Architecture recognition ------------------- */
#define KMP_ARCH_X86 0
#define KMP_ARCH_X86_64 0
#define KMP_ARCH_AARCH64 0
#define KMP_ARCH_PPC64_BE 0
#define KMP_ARCH_PPC64_LE 0
#define KMP_ARCH_PPC64 (KMP_ARCH_PPC64_LE || KMP_ARCH_PPC64_BE)
#define KMP_ARCH_MIPS 0
#define KMP_ARCH_MIPS64 0
#if KMP_OS_WINDOWS
#if defined(_M_AMD64) || defined(__x86_64)
#undef KMP_ARCH_X86_64
#define KMP_ARCH_X86_64 1
#else
#undef KMP_ARCH_X86
#define KMP_ARCH_X86 1
#endif
#endif
#if KMP_OS_UNIX
#if defined __x86_64
#undef KMP_ARCH_X86_64
#define KMP_ARCH_X86_64 1
#elif defined __i386
#undef KMP_ARCH_X86
#define KMP_ARCH_X86 1
#elif defined __powerpc64__
#if defined __LITTLE_ENDIAN__
#undef KMP_ARCH_PPC64_LE
#define KMP_ARCH_PPC64_LE 1
#else
#undef KMP_ARCH_PPC64_BE
#define KMP_ARCH_PPC64_BE 1
#endif
#elif defined __aarch64__
#undef KMP_ARCH_AARCH64
#define KMP_ARCH_AARCH64 1
#elif defined __mips__
#if defined __mips64
#undef KMP_ARCH_MIPS64
#define KMP_ARCH_MIPS64 1
#else
#undef KMP_ARCH_MIPS
#define KMP_ARCH_MIPS 1
#endif
#endif
#endif
#if defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7R__) || \
defined(__ARM_ARCH_7A__)
#define KMP_ARCH_ARMV7 1
#endif
#if defined(KMP_ARCH_ARMV7) || defined(__ARM_ARCH_6__) || \
defined(__ARM_ARCH_6J__) || defined(__ARM_ARCH_6K__) || \
defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6T2__) || \
defined(__ARM_ARCH_6ZK__)
#define KMP_ARCH_ARMV6 1
#endif
#if defined(KMP_ARCH_ARMV6) || defined(__ARM_ARCH_5T__) || \
defined(__ARM_ARCH_5E__) || defined(__ARM_ARCH_5TE__) || \
defined(__ARM_ARCH_5TEJ__)
#define KMP_ARCH_ARMV5 1
#endif
#if defined(KMP_ARCH_ARMV5) || defined(__ARM_ARCH_4__) || \
defined(__ARM_ARCH_4T__)
#define KMP_ARCH_ARMV4 1
#endif
#if defined(KMP_ARCH_ARMV4) || defined(__ARM_ARCH_3__) || \
defined(__ARM_ARCH_3M__)
#define KMP_ARCH_ARMV3 1
#endif
#if defined(KMP_ARCH_ARMV3) || defined(__ARM_ARCH_2__)
#define KMP_ARCH_ARMV2 1
#endif
#if defined(KMP_ARCH_ARMV2)
#define KMP_ARCH_ARM 1
#endif
#if defined(__MIC__) || defined(__MIC2__)
#define KMP_MIC 1
#if __MIC2__ || __KNC__
#define KMP_MIC1 0
#define KMP_MIC2 1
#else
#define KMP_MIC1 1
#define KMP_MIC2 0
#endif
#else
#define KMP_MIC 0
#define KMP_MIC1 0
#define KMP_MIC2 0
#endif
/* Specify 32 bit architectures here */
#define KMP_32_BIT_ARCH (KMP_ARCH_X86 || KMP_ARCH_ARM || KMP_ARCH_MIPS)
// Platforms which support Intel(R) Many Integrated Core Architecture
#define KMP_MIC_SUPPORTED \
((KMP_ARCH_X86 || KMP_ARCH_X86_64) && (KMP_OS_LINUX || KMP_OS_WINDOWS))
// TODO: Fixme - This is clever, but really fugly
#if (1 != \
KMP_ARCH_X86 + KMP_ARCH_X86_64 + KMP_ARCH_ARM + KMP_ARCH_PPC64 + \
KMP_ARCH_AARCH64 + KMP_ARCH_MIPS + KMP_ARCH_MIPS64)
#error Unknown or unsupported architecture
#endif
#endif // KMP_PLATFORM_H

runtime/src/kmp_runtime.cpp (new file, 8192 lines; diff suppressed because it is too large)

runtime/src/kmp_safe_c_api.h (new file, 75 lines)
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_SAFE_C_API_H
#define KMP_SAFE_C_API_H
#include "kmp_platform.h"
#include <string.h>
// Replacement for banned C API
// Not every unsafe call listed here is handled now, but keeping everything
// in one place should be handy for future maintenance.
#if KMP_OS_WINDOWS && KMP_MSVC_COMPAT
#define RSIZE_MAX_STR (4UL << 10) // 4KB
// _malloca was suggested, but it is not a drop-in replacement for _alloca
#define KMP_ALLOCA _alloca
#define KMP_MEMCPY_S memcpy_s
#define KMP_SNPRINTF sprintf_s
#define KMP_SSCANF sscanf_s
#define KMP_STRCPY_S strcpy_s
#define KMP_STRNCPY_S strncpy_s
// Use this only when buffer size is unknown
#define KMP_MEMCPY(dst, src, cnt) memcpy_s(dst, cnt, src, cnt)
#define KMP_STRLEN(str) strnlen_s(str, RSIZE_MAX_STR)
// Use this only when buffer size is unknown
#define KMP_STRNCPY(dst, src, cnt) strncpy_s(dst, cnt, src, cnt)
// _TRUNCATE ensures buffer size > max string to print.
#define KMP_VSNPRINTF(dst, cnt, fmt, arg) \
vsnprintf_s(dst, cnt, _TRUNCATE, fmt, arg)
#else // KMP_OS_WINDOWS
// For now, these macros use the existing API.
#define KMP_ALLOCA alloca
#define KMP_MEMCPY_S(dst, bsz, src, cnt) memcpy(dst, src, cnt)
#define KMP_SNPRINTF snprintf
#define KMP_SSCANF sscanf
#define KMP_STRCPY_S(dst, bsz, src) strcpy(dst, src)
#define KMP_STRNCPY_S(dst, bsz, src, cnt) strncpy(dst, src, cnt)
#define KMP_VSNPRINTF vsnprintf
#define KMP_STRNCPY strncpy
#define KMP_STRLEN strlen
#define KMP_MEMCPY memcpy
#endif // KMP_OS_WINDOWS
// Offer truncated version of strncpy
static inline void __kmp_strncpy_truncate(char *buffer, size_t buf_size,
char const *src, size_t src_size) {
if (src_size >= buf_size) {
src_size = buf_size - 1;
KMP_STRNCPY_S(buffer, buf_size, src, src_size);
buffer[buf_size - 1] = '\0';
} else {
KMP_STRNCPY_S(buffer, buf_size, src, src_size);
}
}
#endif // KMP_SAFE_C_API_H

runtime/src/kmp_sched.cpp (new file, 1001 lines; diff suppressed because it is too large)

runtime/src/kmp_settings.cpp (new file, 5832 lines; diff suppressed because it is too large)

runtime/src/kmp_settings.h (new file, 69 lines)
/*
* kmp_settings.h -- Initialize environment variables
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_SETTINGS_H
#define KMP_SETTINGS_H
void __kmp_reset_global_vars(void);
void __kmp_env_initialize(char const *);
void __kmp_env_print();
#if OMP_40_ENABLED
void __kmp_env_print_2();
#endif // OMP_40_ENABLED
int __kmp_initial_threads_capacity(int req_nproc);
void __kmp_init_dflt_team_nth();
int __kmp_convert_to_milliseconds(char const *);
int __kmp_default_tp_capacity(int, int, int);
#if KMP_MIC
#define KMP_STR_BUF_PRINT_NAME \
__kmp_str_buf_print(buffer, " %s %s", KMP_I18N_STR(Device), name)
#define KMP_STR_BUF_PRINT_NAME_EX(x) \
__kmp_str_buf_print(buffer, " %s %s='", KMP_I18N_STR(Device), x)
#define KMP_STR_BUF_PRINT_BOOL_EX(n, v, t, f) \
__kmp_str_buf_print(buffer, " %s %s='%s'\n", KMP_I18N_STR(Device), n, \
(v) ? t : f)
#define KMP_STR_BUF_PRINT_BOOL \
KMP_STR_BUF_PRINT_BOOL_EX(name, value, "TRUE", "FALSE")
#define KMP_STR_BUF_PRINT_INT \
__kmp_str_buf_print(buffer, " %s %s='%d'\n", KMP_I18N_STR(Device), name, \
value)
#define KMP_STR_BUF_PRINT_UINT64 \
__kmp_str_buf_print(buffer, " %s %s='%" KMP_UINT64_SPEC "'\n", \
KMP_I18N_STR(Device), name, value);
#define KMP_STR_BUF_PRINT_STR \
__kmp_str_buf_print(buffer, " %s %s='%s'\n", KMP_I18N_STR(Device), name, \
value)
#else
#define KMP_STR_BUF_PRINT_NAME \
__kmp_str_buf_print(buffer, " %s %s", KMP_I18N_STR(Host), name)
#define KMP_STR_BUF_PRINT_NAME_EX(x) \
__kmp_str_buf_print(buffer, " %s %s='", KMP_I18N_STR(Host), x)
#define KMP_STR_BUF_PRINT_BOOL_EX(n, v, t, f) \
__kmp_str_buf_print(buffer, " %s %s='%s'\n", KMP_I18N_STR(Host), n, \
(v) ? t : f)
#define KMP_STR_BUF_PRINT_BOOL \
KMP_STR_BUF_PRINT_BOOL_EX(name, value, "TRUE", "FALSE")
#define KMP_STR_BUF_PRINT_INT \
__kmp_str_buf_print(buffer, " %s %s='%d'\n", KMP_I18N_STR(Host), name, value)
#define KMP_STR_BUF_PRINT_UINT64 \
__kmp_str_buf_print(buffer, " %s %s='%" KMP_UINT64_SPEC "'\n", \
KMP_I18N_STR(Host), name, value);
#define KMP_STR_BUF_PRINT_STR \
__kmp_str_buf_print(buffer, " %s %s='%s'\n", KMP_I18N_STR(Host), name, value)
#endif
#endif // KMP_SETTINGS_H
// end of file //

runtime/src/kmp_stats.cpp (new file, 922 lines)
/** @file kmp_stats.cpp
* Statistics gathering and processing.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp.h"
#include "kmp_lock.h"
#include "kmp_stats.h"
#include "kmp_str.h"
#include <algorithm>
#include <ctime>
#include <iomanip>
#include <sstream>
#include <stdlib.h> // for atexit
#include <cmath>
#define STRINGIZE2(x) #x
#define STRINGIZE(x) STRINGIZE2(x)
#define expandName(name, flags, ignore) {STRINGIZE(name), flags},
statInfo timeStat::timerInfo[] = {
KMP_FOREACH_TIMER(expandName, 0){"TIMER_LAST", 0}};
const statInfo counter::counterInfo[] = {
KMP_FOREACH_COUNTER(expandName, 0){"COUNTER_LAST", 0}};
#undef expandName
#define expandName(ignore1, ignore2, ignore3) {0.0, 0.0, 0.0},
kmp_stats_output_module::rgb_color kmp_stats_output_module::timerColorInfo[] = {
KMP_FOREACH_TIMER(expandName, 0){0.0, 0.0, 0.0}};
#undef expandName
const kmp_stats_output_module::rgb_color
kmp_stats_output_module::globalColorArray[] = {
{1.0, 0.0, 0.0}, // red
{1.0, 0.6, 0.0}, // orange
{1.0, 1.0, 0.0}, // yellow
{0.0, 1.0, 0.0}, // green
{0.0, 0.0, 1.0}, // blue
{0.6, 0.2, 0.8}, // purple
{1.0, 0.0, 1.0}, // magenta
{0.0, 0.4, 0.2}, // dark green
{1.0, 1.0, 0.6}, // light yellow
{0.6, 0.4, 0.6}, // dirty purple
{0.0, 1.0, 1.0}, // cyan
{1.0, 0.4, 0.8}, // pink
{0.5, 0.5, 0.5}, // grey
{0.8, 0.7, 0.5}, // brown
{0.6, 0.6, 1.0}, // light blue
{1.0, 0.7, 0.5}, // peach
{0.8, 0.5, 1.0}, // lavender
{0.6, 0.0, 0.0}, // dark red
{0.7, 0.6, 0.0}, // gold
{0.0, 0.0, 0.0} // black
};
// Ensure that the atexit handler only runs once.
static uint32_t statsPrinted = 0;
// output interface
static kmp_stats_output_module *__kmp_stats_global_output = NULL;
double logHistogram::binMax[] = {
1.e1l, 1.e2l, 1.e3l, 1.e4l, 1.e5l, 1.e6l, 1.e7l, 1.e8l,
1.e9l, 1.e10l, 1.e11l, 1.e12l, 1.e13l, 1.e14l, 1.e15l, 1.e16l,
1.e17l, 1.e18l, 1.e19l, 1.e20l, 1.e21l, 1.e22l, 1.e23l, 1.e24l,
1.e25l, 1.e26l, 1.e27l, 1.e28l, 1.e29l, 1.e30l};
/* ************* statistic member functions ************* */
void statistic::addSample(double sample) {
sample -= offset;
KMP_DEBUG_ASSERT(std::isfinite(sample));
double delta = sample - meanVal;
sampleCount = sampleCount + 1;
meanVal = meanVal + delta / sampleCount;
m2 = m2 + delta * (sample - meanVal);
minVal = std::min(minVal, sample);
maxVal = std::max(maxVal, sample);
if (collectingHist)
hist.addSample(sample);
}
statistic &statistic::operator+=(const statistic &other) {
if (other.sampleCount == 0)
return *this;
if (sampleCount == 0) {
*this = other;
return *this;
}
uint64_t newSampleCount = sampleCount + other.sampleCount;
double dnsc = double(newSampleCount);
double dsc = double(sampleCount);
double dscBydnsc = dsc / dnsc;
double dosc = double(other.sampleCount);
double delta = other.meanVal - meanVal;
  // Try to order these calculations to avoid overflows. If this were Fortran,
  // then the compiler would not be able to re-order over brackets. In C++ it
  // may be legal to do that (we certainly hope it doesn't, and The C++
  // Programming Language, 2nd edition, suggests it shouldn't, since it says
  // that exploitation of associativity can only be made if the operation
  // really is associative (which floating addition isn't...)).
meanVal = meanVal * dscBydnsc + other.meanVal * (1 - dscBydnsc);
m2 = m2 + other.m2 + dscBydnsc * dosc * delta * delta;
minVal = std::min(minVal, other.minVal);
maxVal = std::max(maxVal, other.maxVal);
sampleCount = newSampleCount;
if (collectingHist)
hist += other.hist;
return *this;
}
void statistic::scale(double factor) {
minVal = minVal * factor;
maxVal = maxVal * factor;
meanVal = meanVal * factor;
m2 = m2 * factor * factor;
return;
}
std::string statistic::format(char unit, bool total) const {
std::string result = formatSI(sampleCount, 9, ' ');
if (sampleCount == 0) {
result = result + std::string(", ") + formatSI(0.0, 9, unit);
result = result + std::string(", ") + formatSI(0.0, 9, unit);
result = result + std::string(", ") + formatSI(0.0, 9, unit);
if (total)
result = result + std::string(", ") + formatSI(0.0, 9, unit);
result = result + std::string(", ") + formatSI(0.0, 9, unit);
} else {
result = result + std::string(", ") + formatSI(minVal, 9, unit);
result = result + std::string(", ") + formatSI(meanVal, 9, unit);
result = result + std::string(", ") + formatSI(maxVal, 9, unit);
if (total)
result =
result + std::string(", ") + formatSI(meanVal * sampleCount, 9, unit);
result = result + std::string(", ") + formatSI(getSD(), 9, unit);
}
return result;
}
/* ************* histogram member functions ************* */
// Lowest bin that has anything in it
int logHistogram::minBin() const {
for (int i = 0; i < numBins; i++) {
if (bins[i].count != 0)
return i - logOffset;
}
return -logOffset;
}
// Highest bin that has anything in it
int logHistogram::maxBin() const {
for (int i = numBins - 1; i >= 0; i--) {
if (bins[i].count != 0)
return i - logOffset;
}
return -logOffset;
}
// Which bin does this sample belong in?
uint32_t logHistogram::findBin(double sample) {
double v = std::fabs(sample);
  // Simply loop upwards, looking for the bin to put it in.
  // According to a micro-architect this is likely to be faster than a binary
  // search, since it will only have one branch mis-predict.
for (int b = 0; b < numBins; b++)
if (binMax[b] > v)
return b;
fprintf(stderr,
"Trying to add a sample that is too large into a histogram\n");
KMP_ASSERT(0);
return -1;
}
void logHistogram::addSample(double sample) {
if (sample == 0.0) {
zeroCount += 1;
#ifdef KMP_DEBUG
_total++;
check();
#endif
return;
}
KMP_DEBUG_ASSERT(std::isfinite(sample));
uint32_t bin = findBin(sample);
KMP_DEBUG_ASSERT(0 <= bin && bin < numBins);
bins[bin].count += 1;
bins[bin].total += sample;
#ifdef KMP_DEBUG
_total++;
check();
#endif
}
// This may not be the format we want, but it'll do for now
std::string logHistogram::format(char unit) const {
std::stringstream result;
result << "Bin, Count, Total\n";
if (zeroCount) {
    result << "0, " << formatSI(zeroCount, 9, ' ') << ", "
           << formatSI(0.0, 9, unit);
if (count(minBin()) == 0)
return result.str();
result << "\n";
}
for (int i = minBin(); i <= maxBin(); i++) {
result << "10**" << i << "<=v<10**" << (i + 1) << ", "
<< formatSI(count(i), 9, ' ') << ", " << formatSI(total(i), 9, unit);
if (i != maxBin())
result << "\n";
}
return result.str();
}
/* ************* explicitTimer member functions ************* */
void explicitTimer::start(tsc_tick_count tick) {
startTime = tick;
totalPauseTime = 0;
if (timeStat::logEvent(timerEnumValue)) {
__kmp_stats_thread_ptr->incrementNestValue();
}
return;
}
void explicitTimer::stop(tsc_tick_count tick,
kmp_stats_list *stats_ptr /* = nullptr */) {
if (startTime.getValue() == 0)
return;
stat->addSample(((tick - startTime) - totalPauseTime).ticks());
if (timeStat::logEvent(timerEnumValue)) {
if (!stats_ptr)
stats_ptr = __kmp_stats_thread_ptr;
stats_ptr->push_event(
startTime.getValue() - __kmp_stats_start_time.getValue(),
tick.getValue() - __kmp_stats_start_time.getValue(),
__kmp_stats_thread_ptr->getNestValue(), timerEnumValue);
stats_ptr->decrementNestValue();
}
/* We accept the risk that we drop a sample because it really did start at
t==0. */
startTime = 0;
return;
}
/* ************* partitionedTimers member functions ************* */
partitionedTimers::partitionedTimers() { timer_stack.reserve(8); }
// initialize the partitioned timers with an initial timer
void partitionedTimers::init(explicitTimer timer) {
KMP_DEBUG_ASSERT(this->timer_stack.size() == 0);
timer_stack.push_back(timer);
timer_stack.back().start(tsc_tick_count::now());
}
// pause/save the current timer, and start the new timer
void partitionedTimers::push(explicitTimer timer) {
// get the current timer
// pause current timer
// push new timer
// start the new timer
explicitTimer *current_timer, *new_timer;
size_t stack_size;
KMP_DEBUG_ASSERT(this->timer_stack.size() > 0);
timer_stack.push_back(timer);
stack_size = timer_stack.size();
current_timer = &(timer_stack[stack_size - 2]);
new_timer = &(timer_stack[stack_size - 1]);
tsc_tick_count tick = tsc_tick_count::now();
current_timer->pause(tick);
new_timer->start(tick);
}
// stop/discard the current timer, and start the previously saved timer
void partitionedTimers::pop() {
// get the current timer
// stop current timer (record event/sample)
// pop current timer
// get the new current timer and resume
explicitTimer *old_timer, *new_timer;
size_t stack_size = timer_stack.size();
KMP_DEBUG_ASSERT(stack_size > 1);
old_timer = &(timer_stack[stack_size - 1]);
new_timer = &(timer_stack[stack_size - 2]);
tsc_tick_count tick = tsc_tick_count::now();
old_timer->stop(tick);
new_timer->resume(tick);
timer_stack.pop_back();
}
void partitionedTimers::exchange(explicitTimer timer) {
// get the current timer
// stop current timer (record event/sample)
// push new timer
// start the new timer
explicitTimer *current_timer, *new_timer;
size_t stack_size;
KMP_DEBUG_ASSERT(this->timer_stack.size() > 0);
tsc_tick_count tick = tsc_tick_count::now();
stack_size = timer_stack.size();
current_timer = &(timer_stack[stack_size - 1]);
current_timer->stop(tick);
timer_stack.pop_back();
timer_stack.push_back(timer);
new_timer = &(timer_stack[stack_size - 1]);
new_timer->start(tick);
}
// Wind up all the currently running timers.
// This pops off all the timers from the stack and clears the stack.
// After this is called, init() must be run again to initialize the
// stack of timers.
void partitionedTimers::windup() {
while (timer_stack.size() > 1) {
this->pop();
}
// Pop the timer from the init() call
if (timer_stack.size() > 0) {
timer_stack.back().stop(tsc_tick_count::now());
timer_stack.pop_back();
}
}
/* ************* kmp_stats_event_vector member functions ************* */
void kmp_stats_event_vector::deallocate() {
__kmp_free(events);
internal_size = 0;
allocated_size = 0;
events = NULL;
}
// This function is for qsort() which requires the compare function to return
// either a negative number if event1 < event2, a positive number if event1 >
// event2 or zero if event1 == event2. This sorts by start time (lowest to
// highest).
int compare_two_events(const void *event1, const void *event2) {
const kmp_stats_event *ev1 = RCAST(const kmp_stats_event *, event1);
const kmp_stats_event *ev2 = RCAST(const kmp_stats_event *, event2);
if (ev1->getStart() < ev2->getStart())
return -1;
else if (ev1->getStart() > ev2->getStart())
return 1;
else
return 0;
}
void kmp_stats_event_vector::sort() {
qsort(events, internal_size, sizeof(kmp_stats_event), compare_two_events);
}
/* ************* kmp_stats_list member functions ************* */
// returns a pointer to newly created stats node
kmp_stats_list *kmp_stats_list::push_back(int gtid) {
kmp_stats_list *newnode =
(kmp_stats_list *)__kmp_allocate(sizeof(kmp_stats_list));
  // Placement new: it only needs the raw space and a pointer, and constructs
  // the object in place (which is why __kmp_allocate is used here instead of
  // C++ new).
new (newnode) kmp_stats_list();
newnode->setGtid(gtid);
newnode->prev = this->prev;
newnode->next = this;
newnode->prev->next = newnode;
newnode->next->prev = newnode;
return newnode;
}
void kmp_stats_list::deallocate() {
kmp_stats_list *ptr = this->next;
kmp_stats_list *delptr = this->next;
while (ptr != this) {
delptr = ptr;
ptr = ptr->next;
    // Placement new means we have to call the destructor explicitly.
delptr->_event_vector.deallocate();
delptr->~kmp_stats_list();
__kmp_free(delptr);
}
}
kmp_stats_list::iterator kmp_stats_list::begin() {
kmp_stats_list::iterator it;
it.ptr = this->next;
return it;
}
kmp_stats_list::iterator kmp_stats_list::end() {
kmp_stats_list::iterator it;
it.ptr = this;
return it;
}
int kmp_stats_list::size() {
int retval;
kmp_stats_list::iterator it;
for (retval = 0, it = begin(); it != end(); it++, retval++) {
}
return retval;
}
/* ************* kmp_stats_list::iterator member functions ************* */
kmp_stats_list::iterator::iterator() : ptr(NULL) {}
kmp_stats_list::iterator::~iterator() {}
kmp_stats_list::iterator kmp_stats_list::iterator::operator++() {
this->ptr = this->ptr->next;
return *this;
}
kmp_stats_list::iterator kmp_stats_list::iterator::operator++(int dummy) {
this->ptr = this->ptr->next;
return *this;
}
kmp_stats_list::iterator kmp_stats_list::iterator::operator--() {
this->ptr = this->ptr->prev;
return *this;
}
kmp_stats_list::iterator kmp_stats_list::iterator::operator--(int dummy) {
this->ptr = this->ptr->prev;
return *this;
}
bool kmp_stats_list::iterator::operator!=(const kmp_stats_list::iterator &rhs) {
return this->ptr != rhs.ptr;
}
bool kmp_stats_list::iterator::operator==(const kmp_stats_list::iterator &rhs) {
return this->ptr == rhs.ptr;
}
kmp_stats_list *kmp_stats_list::iterator::operator*() const {
return this->ptr;
}
/* ************* kmp_stats_output_module functions ************** */
const char *kmp_stats_output_module::eventsFileName = NULL;
const char *kmp_stats_output_module::plotFileName = NULL;
int kmp_stats_output_module::printPerThreadFlag = 0;
int kmp_stats_output_module::printPerThreadEventsFlag = 0;
static char const *lastName(char *name) {
int l = strlen(name);
for (int i = l - 1; i >= 0; --i) {
if (name[i] == '.')
name[i] = '_';
if (name[i] == '/')
return name + i + 1;
}
return name;
}
/* Read the name of the executable from /proc/self/cmdline */
static char const *getImageName(char *buffer, size_t buflen) {
FILE *f = fopen("/proc/self/cmdline", "r");
buffer[0] = char(0);
if (!f)
return buffer;
// The file contains char(0) delimited words from the commandline.
// This just returns the last filename component of the first word on the
// line.
size_t n = fread(buffer, 1, buflen, f);
if (n == 0) {
fclose(f);
KMP_CHECK_SYSFAIL("fread", 1)
}
fclose(f);
buffer[buflen - 1] = char(0);
return lastName(buffer);
}
static void getTime(char *buffer, size_t buflen, bool underscores = false) {
time_t timer;
time(&timer);
struct tm *tm_info = localtime(&timer);
if (underscores)
strftime(buffer, buflen, "%Y-%m-%d_%H%M%S", tm_info);
else
strftime(buffer, buflen, "%Y-%m-%d %H%M%S", tm_info);
}
/* Generate a stats file name, expanding prototypes */
static std::string generateFilename(char const *prototype,
char const *imageName) {
std::string res;
for (int i = 0; prototype[i] != char(0); i++) {
char ch = prototype[i];
if (ch == '%') {
i++;
if (prototype[i] == char(0))
break;
switch (prototype[i]) {
case 't': // Insert time and date
{
char date[26];
getTime(date, sizeof(date), true);
res += date;
} break;
case 'e': // Insert executable name
res += imageName;
break;
case 'p': // Insert pid
{
std::stringstream ss;
ss << getpid();
res += ss.str();
} break;
default:
res += prototype[i];
break;
}
} else
res += ch;
}
return res;
}
// init() is called very early in execution, from the constructor of
// __kmp_stats_global_output
void kmp_stats_output_module::init() {
fprintf(stderr, "*** Stats enabled OpenMP* runtime ***\n");
char *statsFileName = getenv("KMP_STATS_FILE");
eventsFileName = getenv("KMP_STATS_EVENTS_FILE");
plotFileName = getenv("KMP_STATS_PLOT_FILE");
char *threadStats = getenv("KMP_STATS_THREADS");
char *threadEvents = getenv("KMP_STATS_EVENTS");
// set the stats output filenames based on environment variables and defaults
if (statsFileName) {
char imageName[1024];
// Process any escapes (e.g., %p, %e, %t) in the name
outputFileName = generateFilename(
statsFileName, getImageName(&imageName[0], sizeof(imageName)));
}
eventsFileName = eventsFileName ? eventsFileName : "events.dat";
plotFileName = plotFileName ? plotFileName : "events.plt";
  // set the flags based on environment variables matching:
  // true, on, 1, .true., .t., yes
printPerThreadFlag = __kmp_str_match_true(threadStats);
printPerThreadEventsFlag = __kmp_str_match_true(threadEvents);
if (printPerThreadEventsFlag) {
// assigns a color to each timer for printing
setupEventColors();
} else {
// will clear flag so that no event will be logged
timeStat::clearEventFlags();
}
}
void kmp_stats_output_module::setupEventColors() {
int i;
int globalColorIndex = 0;
int numGlobalColors = sizeof(globalColorArray) / sizeof(rgb_color);
for (i = 0; i < TIMER_LAST; i++) {
if (timeStat::logEvent((timer_e)i)) {
timerColorInfo[i] = globalColorArray[globalColorIndex];
globalColorIndex = (globalColorIndex + 1) % numGlobalColors;
}
}
}
void kmp_stats_output_module::printTimerStats(FILE *statsOut,
statistic const *theStats,
statistic const *totalStats) {
fprintf(statsOut,
"Timer, SampleCount, Min, "
"Mean, Max, Total, SD\n");
for (timer_e s = timer_e(0); s < TIMER_LAST; s = timer_e(s + 1)) {
statistic const *stat = &theStats[s];
char tag = timeStat::noUnits(s) ? ' ' : 'T';
fprintf(statsOut, "%-35s, %s\n", timeStat::name(s),
stat->format(tag, true).c_str());
}
// Also print the Total_ versions of times.
for (timer_e s = timer_e(0); s < TIMER_LAST; s = timer_e(s + 1)) {
char tag = timeStat::noUnits(s) ? ' ' : 'T';
if (totalStats && !timeStat::noTotal(s))
fprintf(statsOut, "Total_%-29s, %s\n", timeStat::name(s),
totalStats[s].format(tag, true).c_str());
}
  // Print histogram of statistics
if (theStats[0].haveHist()) {
fprintf(statsOut, "\nTimer distributions\n");
for (int s = 0; s < TIMER_LAST; s++) {
statistic const *stat = &theStats[s];
if (stat->getCount() != 0) {
char tag = timeStat::noUnits(timer_e(s)) ? ' ' : 'T';
fprintf(statsOut, "%s\n", timeStat::name(timer_e(s)));
fprintf(statsOut, "%s\n", stat->getHist()->format(tag).c_str());
}
}
}
}
void kmp_stats_output_module::printCounterStats(FILE *statsOut,
statistic const *theStats) {
fprintf(statsOut, "Counter, ThreadCount, Min, Mean, "
" Max, Total, SD\n");
for (int s = 0; s < COUNTER_LAST; s++) {
statistic const *stat = &theStats[s];
fprintf(statsOut, "%-25s, %s\n", counter::name(counter_e(s)),
stat->format(' ', true).c_str());
}
// Print histogram of counters
if (theStats[0].haveHist()) {
fprintf(statsOut, "\nCounter distributions\n");
for (int s = 0; s < COUNTER_LAST; s++) {
statistic const *stat = &theStats[s];
if (stat->getCount() != 0) {
fprintf(statsOut, "%s\n", counter::name(counter_e(s)));
fprintf(statsOut, "%s\n", stat->getHist()->format(' ').c_str());
}
}
}
}
void kmp_stats_output_module::printCounters(FILE *statsOut,
counter const *theCounters) {
// We print all the counters even if they are zero.
// That makes it easier to slice them into a spreadsheet if you need to.
fprintf(statsOut, "\nCounter, Count\n");
for (int c = 0; c < COUNTER_LAST; c++) {
counter const *stat = &theCounters[c];
fprintf(statsOut, "%-25s, %s\n", counter::name(counter_e(c)),
formatSI(stat->getValue(), 9, ' ').c_str());
}
}
void kmp_stats_output_module::printEvents(FILE *eventsOut,
kmp_stats_event_vector *theEvents,
int gtid) {
// sort by start time before printing
theEvents->sort();
for (int i = 0; i < theEvents->size(); i++) {
kmp_stats_event ev = theEvents->at(i);
rgb_color color = getEventColor(ev.getTimerName());
fprintf(eventsOut, "%d %lu %lu %1.1f rgb(%1.1f,%1.1f,%1.1f) %s\n", gtid,
ev.getStart(), ev.getStop(), 1.2 - (ev.getNestLevel() * 0.2),
color.r, color.g, color.b, timeStat::name(ev.getTimerName()));
}
return;
}
void kmp_stats_output_module::windupExplicitTimers() {
// Wind up any explicit timers. We assume that it's fair at this point to just
  // walk all the explicit timers in all threads and say "it's over".
// If the timer wasn't running, this won't record anything anyway.
kmp_stats_list::iterator it;
for (it = __kmp_stats_list->begin(); it != __kmp_stats_list->end(); it++) {
kmp_stats_list *ptr = *it;
ptr->getPartitionedTimers()->windup();
ptr->endLife();
}
}
void kmp_stats_output_module::printPloticusFile() {
int i;
int size = __kmp_stats_list->size();
FILE *plotOut = fopen(plotFileName, "w+");
fprintf(plotOut, "#proc page\n"
" pagesize: 15 10\n"
" scale: 1.0\n\n");
fprintf(plotOut, "#proc getdata\n"
" file: %s\n\n",
eventsFileName);
fprintf(plotOut, "#proc areadef\n"
" title: OpenMP Sampling Timeline\n"
" titledetails: align=center size=16\n"
" rectangle: 1 1 13 9\n"
" xautorange: datafield=2,3\n"
" yautorange: -1 %d\n\n",
size);
fprintf(plotOut, "#proc xaxis\n"
" stubs: inc\n"
" stubdetails: size=12\n"
" label: Time (ticks)\n"
" labeldetails: size=14\n\n");
fprintf(plotOut, "#proc yaxis\n"
" stubs: inc 1\n"
" stubrange: 0 %d\n"
" stubdetails: size=12\n"
" label: Thread #\n"
" labeldetails: size=14\n\n",
size - 1);
fprintf(plotOut, "#proc bars\n"
" exactcolorfield: 5\n"
" axis: x\n"
" locfield: 1\n"
" segmentfields: 2 3\n"
" barwidthfield: 4\n\n");
// create legend entries corresponding to the timer color
for (i = 0; i < TIMER_LAST; i++) {
if (timeStat::logEvent((timer_e)i)) {
rgb_color c = getEventColor((timer_e)i);
fprintf(plotOut, "#proc legendentry\n"
" sampletype: color\n"
" label: %s\n"
" details: rgb(%1.1f,%1.1f,%1.1f)\n\n",
timeStat::name((timer_e)i), c.r, c.g, c.b);
}
}
fprintf(plotOut, "#proc legend\n"
" format: down\n"
" location: max max\n\n");
fclose(plotOut);
return;
}
static void outputEnvVariable(FILE *statsOut, char const *name) {
char const *value = getenv(name);
fprintf(statsOut, "# %s = %s\n", name, value ? value : "*unspecified*");
}
/* Print some useful information about
* the date and time this experiment ran.
* the machine on which it ran.
We output all of this as stylised comments, though we may decide to parse
some of it. */
void kmp_stats_output_module::printHeaderInfo(FILE *statsOut) {
std::time_t now = std::time(0);
char buffer[40];
char hostName[80];
std::strftime(&buffer[0], sizeof(buffer), "%c", std::localtime(&now));
fprintf(statsOut, "# Time of run: %s\n", &buffer[0]);
if (gethostname(&hostName[0], sizeof(hostName)) == 0)
fprintf(statsOut, "# Hostname: %s\n", &hostName[0]);
#if KMP_ARCH_X86 || KMP_ARCH_X86_64
fprintf(statsOut, "# CPU: %s\n", &__kmp_cpuinfo.name[0]);
fprintf(statsOut, "# Family: %d, Model: %d, Stepping: %d\n",
__kmp_cpuinfo.family, __kmp_cpuinfo.model, __kmp_cpuinfo.stepping);
if (__kmp_cpuinfo.frequency == 0)
fprintf(statsOut, "# Nominal frequency: Unknown\n");
else
fprintf(statsOut, "# Nominal frequency: %sz\n",
formatSI(double(__kmp_cpuinfo.frequency), 9, 'H').c_str());
outputEnvVariable(statsOut, "KMP_HW_SUBSET");
outputEnvVariable(statsOut, "KMP_AFFINITY");
outputEnvVariable(statsOut, "KMP_BLOCKTIME");
outputEnvVariable(statsOut, "KMP_LIBRARY");
fprintf(statsOut, "# Production runtime built " __DATE__ " " __TIME__ "\n");
#endif
}
void kmp_stats_output_module::outputStats(const char *heading) {
// Stop all the explicit timers in all threads
  // Do this before declaring the local statistics because they have
// constructors so will take time to create.
windupExplicitTimers();
statistic allStats[TIMER_LAST];
statistic totalStats[TIMER_LAST]; /* Synthesized, cross threads versions of
normal timer stats */
statistic allCounters[COUNTER_LAST];
FILE *statsOut =
!outputFileName.empty() ? fopen(outputFileName.c_str(), "a+") : stderr;
if (!statsOut)
statsOut = stderr;
FILE *eventsOut;
if (eventPrintingEnabled()) {
eventsOut = fopen(eventsFileName, "w+");
}
printHeaderInfo(statsOut);
fprintf(statsOut, "%s\n", heading);
// Accumulate across threads.
kmp_stats_list::iterator it;
for (it = __kmp_stats_list->begin(); it != __kmp_stats_list->end(); it++) {
int t = (*it)->getGtid();
// Output per thread stats if requested.
if (printPerThreadFlag) {
fprintf(statsOut, "Thread %d\n", t);
printTimerStats(statsOut, (*it)->getTimers(), 0);
printCounters(statsOut, (*it)->getCounters());
fprintf(statsOut, "\n");
}
// Output per thread events if requested.
if (eventPrintingEnabled()) {
kmp_stats_event_vector events = (*it)->getEventVector();
printEvents(eventsOut, &events, t);
}
// Accumulate timers.
for (timer_e s = timer_e(0); s < TIMER_LAST; s = timer_e(s + 1)) {
// See if we should ignore this timer when aggregating
if ((timeStat::masterOnly(s) && (t != 0)) || // Timer only valid on master
// and this thread is worker
(timeStat::workerOnly(s) && (t == 0)) // Timer only valid on worker
// and this thread is the master
) {
continue;
}
statistic *threadStat = (*it)->getTimer(s);
allStats[s] += *threadStat;
// Add Total stats for timers that are valid in more than one thread
if (!timeStat::noTotal(s))
totalStats[s].addSample(threadStat->getTotal());
}
// Accumulate counters.
for (counter_e c = counter_e(0); c < COUNTER_LAST; c = counter_e(c + 1)) {
if (counter::masterOnly(c) && t != 0)
continue;
allCounters[c].addSample((*it)->getCounter(c)->getValue());
}
}
if (eventPrintingEnabled()) {
printPloticusFile();
fclose(eventsOut);
}
fprintf(statsOut, "Aggregate for all threads\n");
printTimerStats(statsOut, &allStats[0], &totalStats[0]);
fprintf(statsOut, "\n");
printCounterStats(statsOut, &allCounters[0]);
if (statsOut != stderr)
fclose(statsOut);
}
/* ************* exported C functions ************** */
// No name mangling for these functions; we want the C files to be able to
// call them.
extern "C" {
void __kmp_reset_stats() {
kmp_stats_list::iterator it;
for (it = __kmp_stats_list->begin(); it != __kmp_stats_list->end(); it++) {
timeStat *timers = (*it)->getTimers();
counter *counters = (*it)->getCounters();
for (int t = 0; t < TIMER_LAST; t++)
timers[t].reset();
for (int c = 0; c < COUNTER_LAST; c++)
counters[c].reset();
// reset the event vector so all previous events are "erased"
(*it)->resetEventVector();
}
}
// This function will reset all stats and stop all threads' explicit timers if
// they haven't been stopped already.
void __kmp_output_stats(const char *heading) {
__kmp_stats_global_output->outputStats(heading);
__kmp_reset_stats();
}
void __kmp_accumulate_stats_at_exit(void) {
// Only do this once.
if (KMP_XCHG_FIXED32(&statsPrinted, 1) != 0)
return;
__kmp_output_stats("Statistics on exit");
}
void __kmp_stats_init(void) {
__kmp_init_tas_lock(&__kmp_stats_lock);
__kmp_stats_start_time = tsc_tick_count::now();
__kmp_stats_global_output = new kmp_stats_output_module();
__kmp_stats_list = new kmp_stats_list();
}
void __kmp_stats_fini(void) {
__kmp_accumulate_stats_at_exit();
__kmp_stats_list->deallocate();
delete __kmp_stats_global_output;
delete __kmp_stats_list;
}
} // extern "C"

(runtime/src/kmp_stats.h: new file, 1002 lines; diff suppressed because it is too large)
/** @file kmp_stats_timing.cpp
* Timing functions
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include <stdlib.h>
#include <unistd.h>
#include <iomanip>
#include <iostream>
#include <sstream>
#include "kmp.h"
#include "kmp_stats_timing.h"
using namespace std;
#if KMP_HAVE_TICK_TIME
#if KMP_MIC
double tsc_tick_count::tick_time() {
// pretty bad assumption of 1GHz clock for MIC
return 1 / ((double)1000 * 1.e6);
}
#elif KMP_ARCH_X86 || KMP_ARCH_X86_64
#include <string.h>
// Extract the value from the CPUID information
double tsc_tick_count::tick_time() {
static double result = 0.0;
if (result == 0.0) {
kmp_cpuid_t cpuinfo;
char brand[256];
__kmp_x86_cpuid(0x80000000, 0, &cpuinfo);
memset(brand, 0, sizeof(brand));
int ids = cpuinfo.eax;
for (unsigned int i = 2; i < (ids ^ 0x80000000) + 2; i++)
__kmp_x86_cpuid(i | 0x80000000, 0,
(kmp_cpuid_t *)(brand + (i - 2) * sizeof(kmp_cpuid_t)));
char *start = &brand[0];
for (; *start == ' '; start++)
;
char *end = brand + KMP_STRLEN(brand) - 3;
uint64_t multiplier;
if (*end == 'M')
multiplier = 1000LL * 1000LL;
else if (*end == 'G')
multiplier = 1000LL * 1000LL * 1000LL;
else if (*end == 'T')
multiplier = 1000LL * 1000LL * 1000LL * 1000LL;
else {
cout << "Error determining multiplier '" << *end << "'\n";
exit(-1);
}
*end = 0;
while (*end != ' ')
end--;
end++;
double freq = strtod(end, &start);
if (freq == 0.0) {
cout << "Error calculating frequency " << end << "\n";
exit(-1);
}
result = ((double)1.0) / (freq * multiplier);
}
return result;
}
#endif
#endif
static bool useSI = true;
// Return a formatted string after normalising the value into
// engineering style and using a suitable unit prefix (e.g. ms, us, ns).
std::string formatSI(double interval, int width, char unit) {
std::stringstream os;
if (useSI) {
// Preserve accuracy for small numbers, since we only multiply and the
// positive powers of ten are precisely representable.
static struct {
double scale;
char prefix;
} ranges[] = {{1.e21, 'y'}, {1.e18, 'z'}, {1.e15, 'a'}, {1.e12, 'f'},
{1.e9, 'p'}, {1.e6, 'n'}, {1.e3, 'u'}, {1.0, 'm'},
{1.e-3, ' '}, {1.e-6, 'k'}, {1.e-9, 'M'}, {1.e-12, 'G'},
{1.e-15, 'T'}, {1.e-18, 'P'}, {1.e-21, 'E'}, {1.e-24, 'Z'},
{1.e-27, 'Y'}};
if (interval == 0.0) {
os << std::setw(width - 3) << std::right << "0.00" << std::setw(3)
<< unit;
return os.str();
}
bool negative = false;
if (interval < 0.0) {
negative = true;
interval = -interval;
}
for (int i = 0; i < (int)(sizeof(ranges) / sizeof(ranges[0])); i++) {
if (interval * ranges[i].scale < 1.e0) {
interval = interval * 1000.e0 * ranges[i].scale;
os << std::fixed << std::setprecision(2) << std::setw(width - 3)
<< std::right << (negative ? -interval : interval) << std::setw(2)
<< ranges[i].prefix << std::setw(1) << unit;
return os.str();
}
}
}
os << std::setprecision(2) << std::fixed << std::right << std::setw(width - 3)
<< interval << std::setw(3) << unit;
return os.str();
}

#ifndef KMP_STATS_TIMING_H
#define KMP_STATS_TIMING_H
/** @file kmp_stats_timing.h
* Access to real time clock and timers.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp_os.h"
#include <limits>
#include <stdint.h>
#include <string>
#if KMP_HAVE_X86INTRIN_H
#include <x86intrin.h>
#endif
class tsc_tick_count {
private:
int64_t my_count;
public:
class tsc_interval_t {
int64_t value;
explicit tsc_interval_t(int64_t _value) : value(_value) {}
public:
tsc_interval_t() : value(0) {} // Construct 0 time duration
#if KMP_HAVE_TICK_TIME
double seconds() const; // Return the length of a time interval in seconds
#endif
double ticks() const { return double(value); }
int64_t getValue() const { return value; }
tsc_interval_t &operator=(int64_t nvalue) {
value = nvalue;
return *this;
}
friend class tsc_tick_count;
friend tsc_interval_t operator-(const tsc_tick_count &t1,
const tsc_tick_count &t0);
friend tsc_interval_t operator-(const tsc_tick_count::tsc_interval_t &i1,
const tsc_tick_count::tsc_interval_t &i0);
friend tsc_interval_t &operator+=(tsc_tick_count::tsc_interval_t &i1,
const tsc_tick_count::tsc_interval_t &i0);
};
#if KMP_HAVE___BUILTIN_READCYCLECOUNTER
tsc_tick_count()
: my_count(static_cast<int64_t>(__builtin_readcyclecounter())) {}
#elif KMP_HAVE___RDTSC
tsc_tick_count() : my_count(static_cast<int64_t>(__rdtsc())) {}
#else
#error Must have high resolution timer defined
#endif
tsc_tick_count(int64_t value) : my_count(value) {}
int64_t getValue() const { return my_count; }
tsc_tick_count later(tsc_tick_count const other) const {
return my_count > other.my_count ? (*this) : other;
}
tsc_tick_count earlier(tsc_tick_count const other) const {
return my_count < other.my_count ? (*this) : other;
}
#if KMP_HAVE_TICK_TIME
static double tick_time(); // returns seconds per cycle (period) of clock
#endif
static tsc_tick_count now() {
return tsc_tick_count();
} // returns the rdtsc register value
friend tsc_tick_count::tsc_interval_t operator-(const tsc_tick_count &t1,
const tsc_tick_count &t0);
};
inline tsc_tick_count::tsc_interval_t operator-(const tsc_tick_count &t1,
const tsc_tick_count &t0) {
return tsc_tick_count::tsc_interval_t(t1.my_count - t0.my_count);
}
inline tsc_tick_count::tsc_interval_t
operator-(const tsc_tick_count::tsc_interval_t &i1,
const tsc_tick_count::tsc_interval_t &i0) {
return tsc_tick_count::tsc_interval_t(i1.value - i0.value);
}
inline tsc_tick_count::tsc_interval_t &
operator+=(tsc_tick_count::tsc_interval_t &i1,
const tsc_tick_count::tsc_interval_t &i0) {
i1.value += i0.value;
return i1;
}
#if KMP_HAVE_TICK_TIME
inline double tsc_tick_count::tsc_interval_t::seconds() const {
return value * tick_time();
}
#endif
extern std::string formatSI(double interval, int width, char unit);
inline std::string formatSeconds(double interval, int width) {
return formatSI(interval, width, 'S');
}
inline std::string formatTicks(double interval, int width) {
return formatSI(interval, width, 'T');
}
#endif // KMP_STATS_TIMING_H

/*
* kmp_str.cpp -- String manipulation routines.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp_str.h"
#include <stdarg.h> // va_*
#include <stdio.h> // vsnprintf()
#include <stdlib.h> // malloc(), realloc()
#include "kmp.h"
#include "kmp_i18n.h"
/* String buffer.
Usage:
// Declare buffer and initialize it.
kmp_str_buf_t buffer;
__kmp_str_buf_init( & buffer );
// Print to buffer.
__kmp_str_buf_print(& buffer, "Error in file \"%s\" line %d\n", "foo.c", 12);
__kmp_str_buf_print(& buffer, " <%s>\n", line);
// Use buffer contents. buffer.str is a pointer to data, buffer.used is a
// number of printed characters (not including terminating zero).
write( fd, buffer.str, buffer.used );
// Free buffer.
__kmp_str_buf_free( & buffer );
// Alternatively, you can detach allocated memory from buffer:
__kmp_str_buf_detach( & buffer );
return buffer.str; // That memory should be freed eventually.
Notes:
* Buffer users may use buffer.str and buffer.used. Users should not change
any fields of buffer directly.
* buffer.str is never NULL. If buffer is empty, buffer.str points to empty
string ("").
* For performance reasons, buffer uses stack memory (buffer.bulk) first. If
stack memory is exhausted, buffer allocates memory on heap by malloc(), and
reallocates it by realloc() as amount of used memory grows.
* Buffer doubles amount of allocated memory each time it is exhausted.
*/
// TODO: __kmp_str_buf_print() can use thread local memory allocator.
#define KMP_STR_BUF_INVARIANT(b) \
{ \
KMP_DEBUG_ASSERT((b)->str != NULL); \
KMP_DEBUG_ASSERT((b)->size >= sizeof((b)->bulk)); \
KMP_DEBUG_ASSERT((b)->size % sizeof((b)->bulk) == 0); \
KMP_DEBUG_ASSERT((unsigned)(b)->used < (b)->size); \
KMP_DEBUG_ASSERT( \
(b)->size == sizeof((b)->bulk) ? (b)->str == &(b)->bulk[0] : 1); \
KMP_DEBUG_ASSERT((b)->size > sizeof((b)->bulk) ? (b)->str != &(b)->bulk[0] \
: 1); \
}
void __kmp_str_buf_clear(kmp_str_buf_t *buffer) {
KMP_STR_BUF_INVARIANT(buffer);
if (buffer->used > 0) {
buffer->used = 0;
buffer->str[0] = 0;
}
KMP_STR_BUF_INVARIANT(buffer);
} // __kmp_str_buf_clear
void __kmp_str_buf_reserve(kmp_str_buf_t *buffer, int size) {
KMP_STR_BUF_INVARIANT(buffer);
KMP_DEBUG_ASSERT(size >= 0);
if (buffer->size < (unsigned int)size) {
// Calculate buffer size.
do {
buffer->size *= 2;
} while (buffer->size < (unsigned int)size);
// Enlarge buffer.
if (buffer->str == &buffer->bulk[0]) {
buffer->str = (char *)KMP_INTERNAL_MALLOC(buffer->size);
if (buffer->str == NULL) {
KMP_FATAL(MemoryAllocFailed);
}
KMP_MEMCPY_S(buffer->str, buffer->size, buffer->bulk, buffer->used + 1);
} else {
buffer->str = (char *)KMP_INTERNAL_REALLOC(buffer->str, buffer->size);
if (buffer->str == NULL) {
KMP_FATAL(MemoryAllocFailed);
}
}
}
KMP_DEBUG_ASSERT(buffer->size > 0);
KMP_DEBUG_ASSERT(buffer->size >= (unsigned)size);
KMP_STR_BUF_INVARIANT(buffer);
} // __kmp_str_buf_reserve
void __kmp_str_buf_detach(kmp_str_buf_t *buffer) {
KMP_STR_BUF_INVARIANT(buffer);
// If internal bulk is used, allocate memory and copy it.
if (buffer->size <= sizeof(buffer->bulk)) {
buffer->str = (char *)KMP_INTERNAL_MALLOC(buffer->size);
if (buffer->str == NULL) {
KMP_FATAL(MemoryAllocFailed);
}
KMP_MEMCPY_S(buffer->str, buffer->size, buffer->bulk, buffer->used + 1);
}
} // __kmp_str_buf_detach
void __kmp_str_buf_free(kmp_str_buf_t *buffer) {
KMP_STR_BUF_INVARIANT(buffer);
if (buffer->size > sizeof(buffer->bulk)) {
KMP_INTERNAL_FREE(buffer->str);
}
buffer->str = buffer->bulk;
buffer->size = sizeof(buffer->bulk);
buffer->used = 0;
KMP_STR_BUF_INVARIANT(buffer);
} // __kmp_str_buf_free
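The init/cat/free routines above implement a stack-first growable buffer: short strings live in the embedded `bulk` array, and only longer strings spill to the heap. The following is a minimal standalone C sketch of the same pattern, assuming nothing from the runtime — the `sbuf_*` names are illustrative, the bulk is shrunk to 64 bytes, and allocation-failure checks are omitted for brevity.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative stack-first buffer; not the runtime's kmp_str_buf_t API. */
typedef struct {
  char *str;    /* points at bulk until the heap is needed */
  size_t size;  /* current capacity */
  size_t used;  /* characters stored, excluding terminating zero */
  char bulk[64];
} sbuf_t;

static void sbuf_init(sbuf_t *b) {
  b->str = b->bulk;
  b->size = sizeof(b->bulk);
  b->used = 0;
  b->bulk[0] = 0;
}

static void sbuf_cat(sbuf_t *b, const char *s) {
  size_t len = strlen(s);
  if (b->used + len + 1 > b->size) {
    size_t nsize = b->size;
    while (b->used + len + 1 > nsize)
      nsize *= 2;                /* double, as the runtime buffer does */
    if (b->str == b->bulk) {     /* first spill: move bulk to the heap */
      char *p = (char *)malloc(nsize); /* error check omitted */
      memcpy(p, b->bulk, b->used + 1);
      b->str = p;
    } else {
      b->str = (char *)realloc(b->str, nsize); /* error check omitted */
    }
    b->size = nsize;
  }
  memcpy(b->str + b->used, s, len + 1);
  b->used += len;
}

static void sbuf_free(sbuf_t *b) {
  if (b->str != b->bulk)
    free(b->str);  /* only heap memory is freed; bulk is part of the struct */
  sbuf_init(b);    /* leave the buffer reusable, like __kmp_str_buf_free */
}
```

Appending past the bulk capacity transparently switches to heap storage, which is why `sbuf_free` (like `__kmp_str_buf_free`) must check whether `str` still points at `bulk`.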
void __kmp_str_buf_cat(kmp_str_buf_t *buffer, char const *str, int len) {
KMP_STR_BUF_INVARIANT(buffer);
KMP_DEBUG_ASSERT(str != NULL);
KMP_DEBUG_ASSERT(len >= 0);
__kmp_str_buf_reserve(buffer, buffer->used + len + 1);
KMP_MEMCPY(buffer->str + buffer->used, str, len);
buffer->str[buffer->used + len] = 0;
buffer->used += len;
KMP_STR_BUF_INVARIANT(buffer);
} // __kmp_str_buf_cat
void __kmp_str_buf_catbuf(kmp_str_buf_t *dest, const kmp_str_buf_t *src) {
KMP_DEBUG_ASSERT(dest);
KMP_DEBUG_ASSERT(src);
KMP_STR_BUF_INVARIANT(dest);
KMP_STR_BUF_INVARIANT(src);
if (!src->str || !src->used)
return;
__kmp_str_buf_reserve(dest, dest->used + src->used + 1);
KMP_MEMCPY(dest->str + dest->used, src->str, src->used);
dest->str[dest->used + src->used] = 0;
dest->used += src->used;
KMP_STR_BUF_INVARIANT(dest);
} // __kmp_str_buf_catbuf
// Return the number of characters written
int __kmp_str_buf_vprint(kmp_str_buf_t *buffer, char const *format,
va_list args) {
int rc;
KMP_STR_BUF_INVARIANT(buffer);
for (;;) {
int const free = buffer->size - buffer->used;
int size;
// Try to format string.
{
/* On Linux* OS Intel(R) 64, vsnprintf() modifies its args argument, so
vsnprintf() crashes if it is called a second time with the same args. To
prevent the crash, we have to pass a fresh, intact copy of args to
vsnprintf() on each iteration.
Unfortunately, the standard va_copy() macro is not available on Windows* OS.
However, vsnprintf() does not appear to modify the args argument on
Windows* OS.
*/
#if !KMP_OS_WINDOWS
va_list _args;
va_copy(_args, args); // Make copy of args.
#define args _args // Substitute args with its copy, _args.
#endif // KMP_OS_WINDOWS
rc = KMP_VSNPRINTF(buffer->str + buffer->used, free, format, args);
#if !KMP_OS_WINDOWS
#undef args // Remove substitution.
va_end(_args);
#endif // KMP_OS_WINDOWS
}
// No errors, string has been formatted.
if (rc >= 0 && rc < free) {
buffer->used += rc;
break;
}
// Error occurred, buffer is too small.
if (rc >= 0) {
// C99-conforming implementation of vsnprintf returns required buffer size
size = buffer->used + rc + 1;
} else {
// Older implementations just return -1. Double buffer size.
size = buffer->size * 2;
}
// Enlarge buffer.
__kmp_str_buf_reserve(buffer, size);
// And try again.
}
KMP_DEBUG_ASSERT(buffer->size > 0);
KMP_STR_BUF_INVARIANT(buffer);
return rc;
} // __kmp_str_buf_vprint
// Return the number of characters written
int __kmp_str_buf_print(kmp_str_buf_t *buffer, char const *format, ...) {
int rc;
va_list args;
va_start(args, format);
rc = __kmp_str_buf_vprint(buffer, format, args);
va_end(args);
return rc;
} // __kmp_str_buf_print
/* The function prints the specified size to the buffer, expressed in the
largest possible unit; for example, 1024 is printed as "1k". */
void __kmp_str_buf_print_size(kmp_str_buf_t *buf, size_t size) {
char const *names[] = {"", "k", "M", "G", "T", "P", "E", "Z", "Y"};
int const units = sizeof(names) / sizeof(char const *);
int u = 0;
if (size > 0) {
while ((size % 1024 == 0) && (u + 1 < units)) {
size = size / 1024;
++u;
}
}
__kmp_str_buf_print(buf, "%" KMP_SIZE_T_SPEC "%s", size, names[u]);
} // __kmp_str_buf_print_size
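The unit-reduction loop above divides by 1024 only while the value stays a whole number of the next unit. A standalone sketch of the same logic, writing into a caller-supplied buffer instead of a `kmp_str_buf_t` (the `format_size` name is illustrative):

```c
#include <stdio.h>
#include <string.h>

/* Format size using the largest unit that divides it evenly:
   1024 -> "1k", 1048576 -> "1M", but 1536 stays "1536". */
static void format_size(char *out, size_t outsz, size_t size) {
  static const char *names[] = {"", "k", "M", "G", "T", "P", "E", "Z", "Y"};
  int const units = (int)(sizeof(names) / sizeof(names[0]));
  int u = 0;
  if (size > 0) {
    while ((size % 1024 == 0) && (u + 1 < units)) {
      size /= 1024;
      ++u;
    }
  }
  snprintf(out, outsz, "%zu%s", size, names[u]);
}
```

Note the design choice: a value such as 1536 is not rounded to "1.5k"; the unit is only used when the value is an exact multiple.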
void __kmp_str_fname_init(kmp_str_fname_t *fname, char const *path) {
fname->path = NULL;
fname->dir = NULL;
fname->base = NULL;
if (path != NULL) {
char *slash = NULL; // Pointer to the last character of dir.
char *base = NULL; // Pointer to the beginning of basename.
fname->path = __kmp_str_format("%s", path);
// The original code used strdup() to copy the string, but on Windows* OS
// Intel(R) 64 strdup() causes an assertion in the debug heap, so it was
// replaced with __kmp_str_format().
if (KMP_OS_WINDOWS) {
__kmp_str_replace(fname->path, '\\', '/');
}
fname->dir = __kmp_str_format("%s", fname->path);
slash = strrchr(fname->dir, '/');
if (KMP_OS_WINDOWS &&
slash == NULL) { // On Windows* OS, if slash not found,
char first = TOLOWER(fname->dir[0]); // look for drive.
if ('a' <= first && first <= 'z' && fname->dir[1] == ':') {
slash = &fname->dir[1];
}
}
base = (slash == NULL ? fname->dir : slash + 1);
fname->base = __kmp_str_format("%s", base); // Copy basename
*base = 0; // and truncate dir.
}
} // kmp_str_fname_init
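The function above splits a path so that `dir` keeps everything up to and including the last slash and `base` keeps the rest, with the invariant that `dir` + `base` reassembles the original path. A minimal standalone sketch of that split (ignoring the Windows drive-letter special case; `split_path` is an illustrative name, and the caller is assumed to supply large enough buffers):

```c
#include <string.h>

/* Split path into dir (up to and including the last '/') and base (the rest).
   With no '/', dir becomes "" and base is the whole path, matching
   __kmp_str_fname_init's truncation behavior. */
static void split_path(const char *path, char *dir, char *base) {
  const char *slash = strrchr(path, '/');
  if (slash != NULL) {
    size_t n = (size_t)(slash - path) + 1; /* keep the trailing '/' in dir */
    memcpy(dir, path, n);
    dir[n] = 0;
    strcpy(base, slash + 1);
  } else {
    dir[0] = 0;
    strcpy(base, path);
  }
}
```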
void __kmp_str_fname_free(kmp_str_fname_t *fname) {
__kmp_str_free(&fname->path);
__kmp_str_free(&fname->dir);
__kmp_str_free(&fname->base);
} // kmp_str_fname_free
int __kmp_str_fname_match(kmp_str_fname_t const *fname, char const *pattern) {
int dir_match = 1;
int base_match = 1;
if (pattern != NULL) {
kmp_str_fname_t ptrn;
__kmp_str_fname_init(&ptrn, pattern);
dir_match = strcmp(ptrn.dir, "*/") == 0 ||
(fname->dir != NULL && __kmp_str_eqf(fname->dir, ptrn.dir));
base_match = strcmp(ptrn.base, "*") == 0 ||
(fname->base != NULL && __kmp_str_eqf(fname->base, ptrn.base));
__kmp_str_fname_free(&ptrn);
}
return dir_match && base_match;
} // __kmp_str_fname_match
kmp_str_loc_t __kmp_str_loc_init(char const *psource, int init_fname) {
kmp_str_loc_t loc;
loc._bulk = NULL;
loc.file = NULL;
loc.func = NULL;
loc.line = 0;
loc.col = 0;
if (psource != NULL) {
char *str = NULL;
char *dummy = NULL;
char *line = NULL;
char *col = NULL;
// Copy psource to keep it intact.
loc._bulk = __kmp_str_format("%s", psource);
// Parse psource string: ";file;func;line;col;;"
str = loc._bulk;
__kmp_str_split(str, ';', &dummy, &str);
__kmp_str_split(str, ';', &loc.file, &str);
__kmp_str_split(str, ';', &loc.func, &str);
__kmp_str_split(str, ';', &line, &str);
__kmp_str_split(str, ';', &col, &str);
// Convert line and col into numeric values.
if (line != NULL) {
loc.line = atoi(line);
if (loc.line < 0) {
loc.line = 0;
}
}
if (col != NULL) {
loc.col = atoi(col);
if (loc.col < 0) {
loc.col = 0;
}
}
}
__kmp_str_fname_init(&loc.fname, init_fname ? loc.file : NULL);
return loc;
} // kmp_str_loc_init
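`__kmp_str_loc_init` above carves the compiler's `";file;func;line;col;;"` string in place by repeated splitting on `';'`. A standalone sketch of that parse, assuming a writable copy of the string (the `loc_t`, `split`, and `parse_loc` names are illustrative, not runtime APIs):

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
  char *file;
  char *func;
  int line;
  int col;
} loc_t;

/* Like __kmp_str_split for ';': terminate the head at the first delimiter
   and return it; *tail points past the delimiter (or is NULL). */
static char *split(char *s, char **tail) {
  char *p = (s != NULL) ? strchr(s, ';') : NULL;
  if (p != NULL) {
    *p = 0;
    *tail = p + 1;
  } else {
    *tail = NULL;
  }
  return s;
}

static loc_t parse_loc(char *str) { /* str: writable copy of psource */
  loc_t loc = {NULL, NULL, 0, 0};
  char *line, *col;
  split(str, &str);        /* skip the leading empty field before the first ';' */
  loc.file = split(str, &str);
  loc.func = split(str, &str);
  line = split(str, &str);
  col = split(str, &str);
  if (line != NULL) {
    loc.line = atoi(line);
    if (loc.line < 0)
      loc.line = 0;
  }
  if (col != NULL) {
    loc.col = atoi(col);
    if (loc.col < 0)
      loc.col = 0;
  }
  return loc;
}
```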
void __kmp_str_loc_free(kmp_str_loc_t *loc) {
__kmp_str_fname_free(&loc->fname);
__kmp_str_free(&(loc->_bulk));
loc->file = NULL;
loc->func = NULL;
} // kmp_str_loc_free
/* This function is intended to compare file names. On Windows* OS file names
are case-insensitive, so the function performs a case-insensitive comparison.
On Linux* OS it performs a case-sensitive comparison. Note: The function
returns *true* if the strings are *equal*. */
int __kmp_str_eqf( // True, if strings are equal, false otherwise.
char const *lhs, // First string.
char const *rhs // Second string.
) {
int result;
#if KMP_OS_WINDOWS
result = (_stricmp(lhs, rhs) == 0);
#else
result = (strcmp(lhs, rhs) == 0);
#endif
return result;
} // __kmp_str_eqf
/* This function is like sprintf, but it *allocates* a new buffer, which must be
freed eventually by __kmp_str_free(). The function is very convenient for
constructing strings: it replaces strdup() and strcat(), frees the
programmer from buffer allocation, and helps to avoid buffer overflows.
Examples:
str = __kmp_str_format("%s", orig); //strdup() doesn't care about buffer size
__kmp_str_free( & str );
str = __kmp_str_format( "%s%s", orig1, orig2 ); // strcat(), doesn't care
// about buffer size.
__kmp_str_free( & str );
str = __kmp_str_format( "%s/%s.txt", path, file ); // constructing string.
__kmp_str_free( & str );
Performance note:
This function allocates memory with malloc() calls, so do not call it from
performance-critical code. In performance-critical code consider using
kmp_str_buf_t instead, since it uses stack-allocated buffer for short
strings.
Why does this function use malloc()?
1. __kmp_allocate() returns cache-aligned memory allocated with malloc().
There is no reason to use __kmp_allocate() for strings: it adds extra
overhead, and cache-aligned memory is not necessary.
2. __kmp_thread_malloc() cannot be used because it requires a pointer to the
thread structure. We need to perform string operations during library
startup (for example, in __kmp_register_library_startup()) when no thread
structures are allocated yet.
So standard malloc() is the only available option.
*/
char *__kmp_str_format( // Allocated string.
char const *format, // Format string.
... // Other parameters.
) {
va_list args;
int size = 512;
char *buffer = NULL;
int rc;
// Allocate buffer.
buffer = (char *)KMP_INTERNAL_MALLOC(size);
if (buffer == NULL) {
KMP_FATAL(MemoryAllocFailed);
}
for (;;) {
// Try to format string.
va_start(args, format);
rc = KMP_VSNPRINTF(buffer, size, format, args);
va_end(args);
// No errors, string has been formatted.
if (rc >= 0 && rc < size) {
break;
}
// Error occurred, buffer is too small.
if (rc >= 0) {
// C99-conforming implementation of vsnprintf returns required buffer
// size.
size = rc + 1;
} else {
// Older implementations just return -1.
size = size * 2;
}
// Enlarge buffer and try again.
buffer = (char *)KMP_INTERNAL_REALLOC(buffer, size);
if (buffer == NULL) {
KMP_FATAL(MemoryAllocFailed);
}
}
return buffer;
} // func __kmp_str_format
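The loop above relies on vsnprintf's two error styles: C99-conforming implementations return the required length when the buffer is too small, while pre-C99 ones return -1. A portable standalone sketch of the same grow-and-retry allocation (the `str_format` name is illustrative; the start size is deliberately tiny to exercise the retry path, and allocation-failure checks are omitted):

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Format into a freshly malloc'ed buffer, enlarging until the result fits.
   Caller frees the result. */
static char *str_format(const char *format, ...) {
  int size = 16; /* deliberately small so long strings hit the retry path */
  char *buffer = (char *)malloc(size); /* error check omitted */
  for (;;) {
    va_list args;
    int rc;
    va_start(args, format); /* fresh va_list on every iteration */
    rc = vsnprintf(buffer, size, format, args);
    va_end(args);
    if (rc >= 0 && rc < size)
      break;                      /* formatted string fits */
    size = (rc >= 0) ? rc + 1     /* C99: rc is the required length */
                     : size * 2;  /* pre-C99: just double and retry */
    buffer = (char *)realloc(buffer, size); /* error check omitted */
  }
  return buffer;
}
```

Starting va_start/va_end inside the loop sidesteps the reused-va_list problem that `__kmp_str_buf_vprint` handles with va_copy.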
void __kmp_str_free(char **str) {
KMP_DEBUG_ASSERT(str != NULL);
KMP_INTERNAL_FREE(*str);
*str = NULL;
} // func __kmp_str_free
/* If len is zero, returns true iff target and data match exactly
(case-insensitively). If len is negative, returns true iff target is a
case-insensitive prefix of data. If len is positive, returns true iff one
string is a case-insensitive prefix of the other and the common prefix is
at least len characters long. */
int __kmp_str_match(char const *target, int len, char const *data) {
int i;
if (target == NULL || data == NULL) {
return FALSE;
}
for (i = 0; target[i] && data[i]; ++i) {
if (TOLOWER(target[i]) != TOLOWER(data[i])) {
return FALSE;
}
}
return ((len > 0) ? i >= len : (!target[i] && (len || !data[i])));
} // __kmp_str_match
int __kmp_str_match_false(char const *data) {
int result =
__kmp_str_match("false", 1, data) || __kmp_str_match("off", 2, data) ||
__kmp_str_match("0", 1, data) || __kmp_str_match(".false.", 2, data) ||
__kmp_str_match(".f.", 2, data) || __kmp_str_match("no", 1, data) ||
__kmp_str_match("disabled", 0, data);
return result;
} // __kmp_str_match_false
int __kmp_str_match_true(char const *data) {
int result =
__kmp_str_match("true", 1, data) || __kmp_str_match("on", 2, data) ||
__kmp_str_match("1", 1, data) || __kmp_str_match(".true.", 2, data) ||
__kmp_str_match(".t.", 2, data) || __kmp_str_match("yes", 1, data) ||
__kmp_str_match("enabled", 0, data);
return result;
} // __kmp_str_match_true
void __kmp_str_replace(char *str, char search_for, char replace_with) {
char *found = NULL;
found = strchr(str, search_for);
while (found) {
*found = replace_with;
found = strchr(found + 1, search_for);
}
} // __kmp_str_replace
void __kmp_str_split(char *str, // I: String to split.
char delim, // I: Character to split on.
char **head, // O: Pointer to head (may be NULL).
char **tail // O: Pointer to tail (may be NULL).
) {
char *h = str;
char *t = NULL;
if (str != NULL) {
char *ptr = strchr(str, delim);
if (ptr != NULL) {
*ptr = 0;
t = ptr + 1;
}
}
if (head != NULL) {
*head = h;
}
if (tail != NULL) {
*tail = t;
}
} // __kmp_str_split
/* strtok_r() is not available on Windows* OS. This function reimplements
strtok_r(). */
char *__kmp_str_token(
char *str, // String to split into tokens. Note: String *is* modified!
char const *delim, // Delimiters.
char **buf // Internal buffer.
) {
char *token = NULL;
#if KMP_OS_WINDOWS
// On Windows* OS there is no strtok_r() function. Let us implement it.
if (str != NULL) {
*buf = str; // First call, initialize buf.
}
*buf += strspn(*buf, delim); // Skip leading delimiters.
if (**buf != 0) { // Rest of the string is not yet empty.
token = *buf; // Use it as result.
*buf += strcspn(*buf, delim); // Skip non-delimiters.
if (**buf != 0) { // Rest of the string is not yet empty.
**buf = 0; // Terminate token here.
*buf += 1; // Advance buf to start with the next token next time.
}
}
#else
// On Linux* OS and OS X*, strtok_r() is available. Let us use it.
token = strtok_r(str, delim, buf);
#endif
return token;
} // __kmp_str_token
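The Windows branch above is a compact strtok_r built from strspn/strcspn. The same logic extracted as a portable standalone function (illustrative `tokenize` name), which makes its contract easy to exercise — note how consecutive delimiters are collapsed rather than producing empty tokens:

```c
#include <string.h>

/* Minimal strtok_r clone: pass the string on the first call and NULL
   afterwards; *buf carries the scan position between calls. */
static char *tokenize(char *str, const char *delim, char **buf) {
  char *token = NULL;
  if (str != NULL)
    *buf = str;                   /* first call: remember the string */
  *buf += strspn(*buf, delim);    /* skip leading delimiters */
  if (**buf != 0) {               /* rest of the string is not empty */
    token = *buf;
    *buf += strcspn(*buf, delim); /* run to the next delimiter */
    if (**buf != 0) {
      **buf = 0;                  /* terminate this token */
      *buf += 1;                  /* resume after it next time */
    }
  }
  return token;
}
```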
int __kmp_str_to_int(char const *str, char sentinel) {
int result, factor;
char const *t;
result = 0;
for (t = str; *t != '\0'; ++t) {
if (*t < '0' || *t > '9')
break;
result = (result * 10) + (*t - '0');
}
switch (*t) {
case '\0': /* the current default for no suffix is bytes */
factor = 1;
break;
case 'b':
case 'B': /* bytes */
++t;
factor = 1;
break;
case 'k':
case 'K': /* kilo-bytes */
++t;
factor = 1024;
break;
case 'm':
case 'M': /* mega-bytes */
++t;
factor = (1024 * 1024);
break;
default:
if (*t != sentinel)
return (-1);
t = "";
factor = 1;
}
if (result > (INT_MAX / factor))
result = INT_MAX;
else
result *= factor;
return (*t != 0 ? 0 : result);
} // __kmp_str_to_int
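`__kmp_str_to_int` above accepts a decimal number with an optional b/k/m suffix, saturates at INT_MAX on overflow, and returns -1 when the character after the digits is neither a known suffix nor the sentinel. A standalone sketch of the same parse (illustrative `parse_size_int` name; same semantics, including the quirk that leftover characters after a valid suffix yield 0):

```c
#include <limits.h>

/* "512" -> 512, "64k" -> 65536, "2m" -> 2097152; -1 on an unknown suffix. */
static int parse_size_int(const char *str, char sentinel) {
  int result = 0, factor;
  const char *t;
  for (t = str; *t >= '0' && *t <= '9'; ++t)
    result = result * 10 + (*t - '0');
  switch (*t) {
  case '\0':                       /* no suffix: bytes */
    factor = 1;
    break;
  case 'b': case 'B':              /* explicit bytes */
    ++t; factor = 1;
    break;
  case 'k': case 'K':              /* kilobytes */
    ++t; factor = 1024;
    break;
  case 'm': case 'M':              /* megabytes */
    ++t; factor = 1024 * 1024;
    break;
  default:
    if (*t != sentinel)
      return -1;                   /* unknown suffix */
    t = "";
    factor = 1;
  }
  if (result > INT_MAX / factor)
    result = INT_MAX;              /* saturate instead of overflowing */
  else
    result *= factor;
  return (*t != 0 ? 0 : result);   /* trailing characters after suffix -> 0 */
}
```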
/* The routine parses the input string, which is expected to be an unsigned
integer with an optional unit. Units are: "b" for bytes, "kb" or just "k"
for kilobytes, "mb" or "m" for megabytes, ..., "yb" or "y" for
yottabytes. :-) Unit names are case-insensitive. On success, *error is set
to NULL and *out to the parsed value. On failure, *error points to an error
message and *out is left unchanged, except in case of overflow, when *out
is set to KMP_SIZE_T_MAX. */
void __kmp_str_to_size( // Parses a size; reports errors via *error.
char const *str, // I: String of characters, unsigned number and unit ("b",
// "kb", etc).
size_t *out, // O: Parsed number.
size_t dfactor, // I: The factor to use if no unit letter is specified.
char const **error // O: NULL if everything is ok, error message otherwise.
) {
size_t value = 0;
size_t factor = 0;
int overflow = 0;
int i = 0;
int digit;
KMP_DEBUG_ASSERT(str != NULL);
// Skip spaces.
while (str[i] == ' ' || str[i] == '\t') {
++i;
}
// Parse number.
if (str[i] < '0' || str[i] > '9') {
*error = KMP_I18N_STR(NotANumber);
return;
}
do {
digit = str[i] - '0';
overflow = overflow || (value > (KMP_SIZE_T_MAX - digit) / 10);
value = (value * 10) + digit;
++i;
} while (str[i] >= '0' && str[i] <= '9');
// Skip spaces.
while (str[i] == ' ' || str[i] == '\t') {
++i;
}
// Parse unit.
#define _case(ch, exp) \
case ch: \
case ch - ('a' - 'A'): { \
size_t shift = (exp)*10; \
++i; \
if (shift < sizeof(size_t) * 8) { \
factor = (size_t)(1) << shift; \
} else { \
overflow = 1; \
} \
} break;
switch (str[i]) {
_case('k', 1); // Kilo
_case('m', 2); // Mega
_case('g', 3); // Giga
_case('t', 4); // Tera
_case('p', 5); // Peta
_case('e', 6); // Exa
_case('z', 7); // Zetta
_case('y', 8); // Yotta
// Oops. No more units...
}
#undef _case
if (str[i] == 'b' || str[i] == 'B') { // Skip optional "b".
if (factor == 0) {
factor = 1;
}
++i;
}
if (!(str[i] == ' ' || str[i] == '\t' || str[i] == 0)) { // Bad unit
*error = KMP_I18N_STR(BadUnit);
return;
}
if (factor == 0) {
factor = dfactor;
}
// Apply factor.
overflow = overflow || (value > (KMP_SIZE_T_MAX / factor));
value *= factor;
// Skip spaces.
while (str[i] == ' ' || str[i] == '\t') {
++i;
}
if (str[i] != 0) {
*error = KMP_I18N_STR(IllegalCharacters);
return;
}
if (overflow) {
*error = KMP_I18N_STR(ValueTooLarge);
*out = KMP_SIZE_T_MAX;
return;
}
*error = NULL;
*out = value;
} // __kmp_str_to_size
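The `_case` macro above turns each unit letter into a power-of-1024 factor by shifting: exponent `exp` becomes `1 << (exp * 10)`, with overflow flagged when the shift would exceed the width of `size_t`. A standalone sketch of just that letter-to-factor step (illustrative `unit_factor` name; it returns 0 for an unrecognized letter, where the real routine reports BadUnit via *error):

```c
#include <stddef.h>
#include <string.h>

/* Map a unit letter to its power-of-1024 factor: 'k' -> 1024, 'm' -> 1<<20,
   ... Sets *overflow when 2^shift does not fit in size_t. */
static size_t unit_factor(char unit, int *overflow) {
  static const char units[] = "kmgtpezy"; /* Kilo..Yotta, exponents 1..8 */
  const char *p;
  size_t shift;
  *overflow = 0;
  if (unit >= 'A' && unit <= 'Z')
    unit = (char)(unit - 'A' + 'a');      /* unit names are case-insensitive */
  p = (unit != 0) ? strchr(units, unit) : NULL;
  if (p == NULL)
    return 0;                             /* not a recognized unit letter */
  shift = (size_t)(p - units + 1) * 10;
  if (shift >= sizeof(size_t) * 8) {      /* factor would not fit */
    *overflow = 1;
    return 0;
  }
  return (size_t)1 << shift;
}
```

On a 64-bit `size_t`, 'z' (shift 70) and 'y' (shift 80) always overflow, which is why the runtime flags them rather than computing a factor.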
void __kmp_str_to_uint( // Parses an unsigned number; reports errors via *error.
char const *str, // I: String of characters, unsigned number.
kmp_uint64 *out, // O: Parsed number.
char const **error // O: NULL if everything is ok, error message otherwise.
) {
size_t value = 0;
int overflow = 0;
int i = 0;
int digit;
KMP_DEBUG_ASSERT(str != NULL);
// Skip spaces.
while (str[i] == ' ' || str[i] == '\t') {
++i;
}
// Parse number.
if (str[i] < '0' || str[i] > '9') {
*error = KMP_I18N_STR(NotANumber);
return;
}
do {
digit = str[i] - '0';
overflow = overflow || (value > (KMP_SIZE_T_MAX - digit) / 10);
value = (value * 10) + digit;
++i;
} while (str[i] >= '0' && str[i] <= '9');
// Skip spaces.
while (str[i] == ' ' || str[i] == '\t') {
++i;
}
if (str[i] != 0) {
*error = KMP_I18N_STR(IllegalCharacters);
return;
}
if (overflow) {
*error = KMP_I18N_STR(ValueTooLarge);
*out = (kmp_uint64)-1;
return;
}
*error = NULL;
*out = value;
} // __kmp_str_to_uint
// end of file //

runtime/src/kmp_str.h Normal file
@@ -0,0 +1,126 @@
/*
* kmp_str.h -- String manipulation routines.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_STR_H
#define KMP_STR_H
#include <stdarg.h>
#include <string.h>
#include "kmp_os.h"
#ifdef __cplusplus
extern "C" {
#endif // __cplusplus
#if KMP_OS_WINDOWS
#define strdup _strdup
#endif
/* some macros to replace ctype.h functions */
#define TOLOWER(c) ((((c) >= 'A') && ((c) <= 'Z')) ? ((c) + 'a' - 'A') : (c))
struct kmp_str_buf {
char *str; // Pointer to buffer content, read only.
unsigned int size; // Do not change this field!
int used; // Number of characters printed to buffer, read only.
char bulk[512]; // Do not use this field!
}; // struct kmp_str_buf
typedef struct kmp_str_buf kmp_str_buf_t;
#define __kmp_str_buf_init(b) \
{ \
(b)->str = (b)->bulk; \
(b)->size = sizeof((b)->bulk); \
(b)->used = 0; \
(b)->bulk[0] = 0; \
}
void __kmp_str_buf_clear(kmp_str_buf_t *buffer);
void __kmp_str_buf_reserve(kmp_str_buf_t *buffer, int size);
void __kmp_str_buf_detach(kmp_str_buf_t *buffer);
void __kmp_str_buf_free(kmp_str_buf_t *buffer);
void __kmp_str_buf_cat(kmp_str_buf_t *buffer, char const *str, int len);
void __kmp_str_buf_catbuf(kmp_str_buf_t *dest, const kmp_str_buf_t *src);
int __kmp_str_buf_vprint(kmp_str_buf_t *buffer, char const *format,
va_list args);
int __kmp_str_buf_print(kmp_str_buf_t *buffer, char const *format, ...);
void __kmp_str_buf_print_size(kmp_str_buf_t *buffer, size_t size);
/* File name parser.
Usage:
kmp_str_fname_t fname = __kmp_str_fname_init( path );
// Use fname.path (copy of the original path), fname.dir, fname.base.
// Note fname.dir concatenated with fname.base gives exact copy of path.
__kmp_str_fname_free( & fname );
*/
struct kmp_str_fname {
char *path;
char *dir;
char *base;
}; // struct kmp_str_fname
typedef struct kmp_str_fname kmp_str_fname_t;
void __kmp_str_fname_init(kmp_str_fname_t *fname, char const *path);
void __kmp_str_fname_free(kmp_str_fname_t *fname);
// Compares a file name with the specified pattern. If pattern is NULL, any
// fname matches.
int __kmp_str_fname_match(kmp_str_fname_t const *fname, char const *pattern);
/* The compiler provides source locations in the string form
";file;func;line;col;;". This is not convenient for manipulation. This
structure keeps the source location in a more convenient form.
Usage:
kmp_str_loc_t loc = __kmp_str_loc_init( ident->psource, 0 );
// use loc.file, loc.func, loc.line, loc.col.
// loc.fname is available if second argument of __kmp_str_loc_init is true.
__kmp_str_loc_free( & loc );
If psource is NULL or does not follow format above, file and/or func may be
NULL pointers.
*/
struct kmp_str_loc {
char *_bulk; // Do not use this field.
kmp_str_fname_t fname; // Will be initialized if init_fname is true.
char *file;
char *func;
int line;
int col;
}; // struct kmp_str_loc
typedef struct kmp_str_loc kmp_str_loc_t;
kmp_str_loc_t __kmp_str_loc_init(char const *psource, int init_fname);
void __kmp_str_loc_free(kmp_str_loc_t *loc);
int __kmp_str_eqf(char const *lhs, char const *rhs);
char *__kmp_str_format(char const *format, ...);
void __kmp_str_free(char **str);
int __kmp_str_match(char const *target, int len, char const *data);
int __kmp_str_match_false(char const *data);
int __kmp_str_match_true(char const *data);
void __kmp_str_replace(char *str, char search_for, char replace_with);
void __kmp_str_split(char *str, char delim, char **head, char **tail);
char *__kmp_str_token(char *str, char const *delim, char **buf);
int __kmp_str_to_int(char const *str, char sentinel);
void __kmp_str_to_size(char const *str, size_t *out, size_t dfactor,
char const **error);
void __kmp_str_to_uint(char const *str, kmp_uint64 *out, char const **error);
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
#endif // KMP_STR_H
// end of file //

runtime/src/kmp_stub.cpp Normal file
@@ -0,0 +1,370 @@
/*
* kmp_stub.cpp -- stub versions of user-callable OpenMP RT functions.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include <errno.h>
#include <limits.h>
#include <stdlib.h>
#define __KMP_IMP
#include "omp.h" // omp_* declarations, must be included before "kmp.h"
#include "kmp.h" // KMP_DEFAULT_STKSIZE
#include "kmp_stub.h"
#if KMP_OS_WINDOWS
#include <windows.h>
#else
#include <sys/time.h>
#endif
// Moved from omp.h
#define omp_set_max_active_levels ompc_set_max_active_levels
#define omp_set_schedule ompc_set_schedule
#define omp_get_ancestor_thread_num ompc_get_ancestor_thread_num
#define omp_get_team_size ompc_get_team_size
#define omp_set_num_threads ompc_set_num_threads
#define omp_set_dynamic ompc_set_dynamic
#define omp_set_nested ompc_set_nested
#define omp_set_affinity_format ompc_set_affinity_format
#define omp_get_affinity_format ompc_get_affinity_format
#define omp_display_affinity ompc_display_affinity
#define omp_capture_affinity ompc_capture_affinity
#define kmp_set_stacksize kmpc_set_stacksize
#define kmp_set_stacksize_s kmpc_set_stacksize_s
#define kmp_set_blocktime kmpc_set_blocktime
#define kmp_set_library kmpc_set_library
#define kmp_set_defaults kmpc_set_defaults
#define kmp_set_disp_num_buffers kmpc_set_disp_num_buffers
#define kmp_malloc kmpc_malloc
#define kmp_aligned_malloc kmpc_aligned_malloc
#define kmp_calloc kmpc_calloc
#define kmp_realloc kmpc_realloc
#define kmp_free kmpc_free
#if KMP_OS_WINDOWS
static double frequency = 0.0;
#endif
// Helper functions.
static size_t __kmps_init() {
static int initialized = 0;
static size_t dummy = 0;
if (!initialized) {
// TODO: Analyze KMP_VERSION environment variable, print
// __kmp_version_copyright and __kmp_version_build_time.
// WARNING: Do not use "fprintf(stderr, ...)" because it will cause
// unresolved "__iob" symbol (see C70080). We need to extract __kmp_printf()
// stuff from kmp_runtime.cpp and use it.
// Trick with dummy variable forces linker to keep __kmp_version_copyright
// and __kmp_version_build_time strings in executable file (in case of
// static linkage). When KMP_VERSION analysis is implemented, the dummy
// variable should be deleted and the function should return void.
dummy = __kmp_version_copyright - __kmp_version_build_time;
#if KMP_OS_WINDOWS
LARGE_INTEGER freq;
BOOL status = QueryPerformanceFrequency(&freq);
if (status) {
frequency = double(freq.QuadPart);
}
#endif
initialized = 1;
}
return dummy;
} // __kmps_init
#define i __kmps_init();
/* set API functions */
void omp_set_num_threads(omp_int_t num_threads) { i; }
void omp_set_dynamic(omp_int_t dynamic) {
i;
__kmps_set_dynamic(dynamic);
}
void omp_set_nested(omp_int_t nested) {
i;
__kmps_set_nested(nested);
}
void omp_set_max_active_levels(omp_int_t max_active_levels) { i; }
void omp_set_schedule(omp_sched_t kind, omp_int_t modifier) {
i;
__kmps_set_schedule((kmp_sched_t)kind, modifier);
}
int omp_get_ancestor_thread_num(omp_int_t level) {
i;
return (level) ? (-1) : (0);
}
int omp_get_team_size(omp_int_t level) {
i;
return (level) ? (-1) : (1);
}
int kmpc_set_affinity_mask_proc(int proc, void **mask) {
i;
return -1;
}
int kmpc_unset_affinity_mask_proc(int proc, void **mask) {
i;
return -1;
}
int kmpc_get_affinity_mask_proc(int proc, void **mask) {
i;
return -1;
}
/* kmp API functions */
void kmp_set_stacksize(omp_int_t arg) {
i;
__kmps_set_stacksize(arg);
}
void kmp_set_stacksize_s(size_t arg) {
i;
__kmps_set_stacksize(arg);
}
void kmp_set_blocktime(omp_int_t arg) {
i;
__kmps_set_blocktime(arg);
}
void kmp_set_library(omp_int_t arg) {
i;
__kmps_set_library(arg);
}
void kmp_set_defaults(char const *str) { i; }
void kmp_set_disp_num_buffers(omp_int_t arg) { i; }
/* KMP memory management functions. */
void *kmp_malloc(size_t size) {
i;
void *res;
#if KMP_OS_WINDOWS
// If successful, returns a pointer to the memory block; otherwise returns
// NULL. Sets errno to ENOMEM or EINVAL if memory allocation or parameter
// validation failed.
res = _aligned_malloc(size, 1);
#else
res = malloc(size);
#endif
return res;
}
void *kmp_aligned_malloc(size_t sz, size_t a) {
i;
int err;
void *res;
#if KMP_OS_WINDOWS
res = _aligned_malloc(sz, a);
#else
if (err = posix_memalign(&res, a, sz)) {
errno = err; // can be EINVAL or ENOMEM
res = NULL;
}
#endif
return res;
}
void *kmp_calloc(size_t nelem, size_t elsize) {
i;
void *res;
#if KMP_OS_WINDOWS
res = _aligned_recalloc(NULL, nelem, elsize, 1);
#else
res = calloc(nelem, elsize);
#endif
return res;
}
void *kmp_realloc(void *ptr, size_t size) {
i;
void *res;
#if KMP_OS_WINDOWS
res = _aligned_realloc(ptr, size, 1);
#else
res = realloc(ptr, size);
#endif
return res;
}
void kmp_free(void *ptr) {
i;
#if KMP_OS_WINDOWS
_aligned_free(ptr);
#else
free(ptr);
#endif
}
static int __kmps_blocktime = INT_MAX;
void __kmps_set_blocktime(int arg) {
i;
__kmps_blocktime = arg;
} // __kmps_set_blocktime
int __kmps_get_blocktime(void) {
i;
return __kmps_blocktime;
} // __kmps_get_blocktime
static int __kmps_dynamic = 0;
void __kmps_set_dynamic(int arg) {
i;
__kmps_dynamic = arg;
} // __kmps_set_dynamic
int __kmps_get_dynamic(void) {
i;
return __kmps_dynamic;
} // __kmps_get_dynamic
static int __kmps_library = 1000;
void __kmps_set_library(int arg) {
i;
__kmps_library = arg;
} // __kmps_set_library
int __kmps_get_library(void) {
i;
return __kmps_library;
} // __kmps_get_library
static int __kmps_nested = 0;
void __kmps_set_nested(int arg) {
i;
__kmps_nested = arg;
} // __kmps_set_nested
int __kmps_get_nested(void) {
i;
return __kmps_nested;
} // __kmps_get_nested
static size_t __kmps_stacksize = KMP_DEFAULT_STKSIZE;
void __kmps_set_stacksize(int arg) {
i;
__kmps_stacksize = arg;
} // __kmps_set_stacksize
int __kmps_get_stacksize(void) {
i;
return __kmps_stacksize;
} // __kmps_get_stacksize
static kmp_sched_t __kmps_sched_kind = kmp_sched_default;
static int __kmps_sched_modifier = 0;
void __kmps_set_schedule(kmp_sched_t kind, int modifier) {
i;
__kmps_sched_kind = kind;
__kmps_sched_modifier = modifier;
} // __kmps_set_schedule
void __kmps_get_schedule(kmp_sched_t *kind, int *modifier) {
i;
*kind = __kmps_sched_kind;
*modifier = __kmps_sched_modifier;
} // __kmps_get_schedule
#if OMP_40_ENABLED
static kmp_proc_bind_t __kmps_proc_bind = proc_bind_false;
void __kmps_set_proc_bind(kmp_proc_bind_t arg) {
i;
__kmps_proc_bind = arg;
} // __kmps_set_proc_bind
kmp_proc_bind_t __kmps_get_proc_bind(void) {
i;
return __kmps_proc_bind;
} // __kmps_get_proc_bind
#endif /* OMP_40_ENABLED */
double __kmps_get_wtime(void) {
// Elapsed wall clock time (in second) from "sometime in the past".
double wtime = 0.0;
i;
#if KMP_OS_WINDOWS
if (frequency > 0.0) {
LARGE_INTEGER now;
BOOL status = QueryPerformanceCounter(&now);
if (status) {
wtime = double(now.QuadPart) / frequency;
}
}
#else
// gettimeofday() returns seconds and microseconds since the Epoch.
struct timeval tval;
int rc;
rc = gettimeofday(&tval, NULL);
if (rc == 0) {
wtime = (double)(tval.tv_sec) + 1.0E-06 * (double)(tval.tv_usec);
} else {
// TODO: Assert or abort here.
}
#endif
return wtime;
} // __kmps_get_wtime
double __kmps_get_wtick(void) {
// Number of seconds between successive clock ticks.
double wtick = 0.0;
i;
#if KMP_OS_WINDOWS
{
DWORD increment;
DWORD adjustment;
BOOL disabled;
BOOL rc;
rc = GetSystemTimeAdjustment(&adjustment, &increment, &disabled);
if (rc) {
wtick = 1.0E-07 * (double)(disabled ? increment : adjustment);
} else {
// TODO: Assert or abort here.
wtick = 1.0E-03;
}
}
#else
// TODO: gettimeofday() returns microseconds, but what is its precision?
wtick = 1.0E-06;
#endif
return wtick;
} // __kmps_get_wtick
#if OMP_50_ENABLED
/* OpenMP 5.0 Memory Management */
const omp_allocator_t *OMP_NULL_ALLOCATOR = NULL;
const omp_allocator_t *omp_default_mem_alloc = (const omp_allocator_t *)1;
const omp_allocator_t *omp_large_cap_mem_alloc = (const omp_allocator_t *)2;
const omp_allocator_t *omp_const_mem_alloc = (const omp_allocator_t *)3;
const omp_allocator_t *omp_high_bw_mem_alloc = (const omp_allocator_t *)4;
const omp_allocator_t *omp_low_lat_mem_alloc = (const omp_allocator_t *)5;
const omp_allocator_t *omp_cgroup_mem_alloc = (const omp_allocator_t *)6;
const omp_allocator_t *omp_pteam_mem_alloc = (const omp_allocator_t *)7;
const omp_allocator_t *omp_thread_mem_alloc = (const omp_allocator_t *)8;
/* OpenMP 5.0 Affinity Format */
void omp_set_affinity_format(char const *format) { i; }
size_t omp_get_affinity_format(char *buffer, size_t size) {
i;
return 0;
}
void omp_display_affinity(char const *format) { i; }
size_t omp_capture_affinity(char *buffer, size_t buf_size, char const *format) {
i;
return 0;
}
#endif /* OMP_50_ENABLED */
// end of file //

runtime/src/kmp_stub.h Normal file
@@ -0,0 +1,59 @@
/*
* kmp_stub.h
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_STUB_H
#define KMP_STUB_H
#ifdef __cplusplus
extern "C" {
#endif // __cplusplus
void __kmps_set_blocktime(int arg);
int __kmps_get_blocktime(void);
void __kmps_set_dynamic(int arg);
int __kmps_get_dynamic(void);
void __kmps_set_library(int arg);
int __kmps_get_library(void);
void __kmps_set_nested(int arg);
int __kmps_get_nested(void);
void __kmps_set_stacksize(int arg);
int __kmps_get_stacksize();
#ifndef KMP_SCHED_TYPE_DEFINED
#define KMP_SCHED_TYPE_DEFINED
typedef enum kmp_sched {
kmp_sched_static = 1, // mapped to kmp_sch_static_chunked (33)
kmp_sched_dynamic = 2, // mapped to kmp_sch_dynamic_chunked (35)
kmp_sched_guided = 3, // mapped to kmp_sch_guided_chunked (36)
kmp_sched_auto = 4, // mapped to kmp_sch_auto (38)
kmp_sched_default = kmp_sched_static // default scheduling
} kmp_sched_t;
#endif
void __kmps_set_schedule(kmp_sched_t kind, int modifier);
void __kmps_get_schedule(kmp_sched_t *kind, int *modifier);
#if OMP_40_ENABLED
void __kmps_set_proc_bind(kmp_proc_bind_t arg);
kmp_proc_bind_t __kmps_get_proc_bind(void);
#endif /* OMP_40_ENABLED */
double __kmps_get_wtime();
double __kmps_get_wtick();
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
#endif // KMP_STUB_H
// end of file //

@@ -0,0 +1,664 @@
/*
* kmp_taskdeps.cpp
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
//#define KMP_SUPPORT_GRAPH_OUTPUT 1
#include "kmp.h"
#include "kmp_io.h"
#include "kmp_wait_release.h"
#include "kmp_taskdeps.h"
#if OMPT_SUPPORT
#include "ompt-specific.h"
#endif
#if OMP_40_ENABLED
// TODO: Improve memory allocation? keep a list of pre-allocated structures?
// allocate in blocks? re-use finished list entries?
// TODO: don't use atomic ref counters for stack-allocated nodes.
// TODO: find an alternate to atomic refs for heap-allocated nodes?
// TODO: Finish graph output support
// TODO: kmp_lock_t seems a tad too big (and heavy weight) for this. Check other
// runtime locks
// TODO: Any ITT support needed?
#ifdef KMP_SUPPORT_GRAPH_OUTPUT
static std::atomic<kmp_int32> kmp_node_id_seed = ATOMIC_VAR_INIT(0);
#endif
static void __kmp_init_node(kmp_depnode_t *node) {
node->dn.successors = NULL;
node->dn.task = NULL; // will point to the right task
// once dependences have been processed
for (int i = 0; i < MAX_MTX_DEPS; ++i)
node->dn.mtx_locks[i] = NULL;
node->dn.mtx_num_locks = 0;
__kmp_init_lock(&node->dn.lock);
KMP_ATOMIC_ST_RLX(&node->dn.nrefs, 1); // init creates the first reference
#ifdef KMP_SUPPORT_GRAPH_OUTPUT
node->dn.id = KMP_ATOMIC_INC(&kmp_node_id_seed);
#endif
}
static inline kmp_depnode_t *__kmp_node_ref(kmp_depnode_t *node) {
KMP_ATOMIC_INC(&node->dn.nrefs);
return node;
}
enum { KMP_DEPHASH_OTHER_SIZE = 97, KMP_DEPHASH_MASTER_SIZE = 997 };
static inline kmp_int32 __kmp_dephash_hash(kmp_intptr_t addr, size_t hsize) {
// TODO alternate to try: set = (((Addr64)(addrUsefulBits * 9.618)) %
// m_num_sets );
return ((addr >> 6) ^ (addr >> 2)) % hsize;
}
static kmp_dephash_t *__kmp_dephash_create(kmp_info_t *thread,
kmp_taskdata_t *current_task) {
kmp_dephash_t *h;
size_t h_size;
if (current_task->td_flags.tasktype == TASK_IMPLICIT)
h_size = KMP_DEPHASH_MASTER_SIZE;
else
h_size = KMP_DEPHASH_OTHER_SIZE;
kmp_int32 size =
h_size * sizeof(kmp_dephash_entry_t *) + sizeof(kmp_dephash_t);
#if USE_FAST_MEMORY
h = (kmp_dephash_t *)__kmp_fast_allocate(thread, size);
#else
h = (kmp_dephash_t *)__kmp_thread_malloc(thread, size);
#endif
h->size = h_size;
#ifdef KMP_DEBUG
h->nelements = 0;
h->nconflicts = 0;
#endif
h->buckets = (kmp_dephash_entry **)(h + 1);
for (size_t i = 0; i < h_size; i++)
h->buckets[i] = 0;
return h;
}
#define ENTRY_LAST_INS 0
#define ENTRY_LAST_MTXS 1
static kmp_dephash_entry *
__kmp_dephash_find(kmp_info_t *thread, kmp_dephash_t *h, kmp_intptr_t addr) {
kmp_int32 bucket = __kmp_dephash_hash(addr, h->size);
kmp_dephash_entry_t *entry;
for (entry = h->buckets[bucket]; entry; entry = entry->next_in_bucket)
if (entry->addr == addr)
break;
if (entry == NULL) {
// create entry. This is only done by one thread so no locking required
#if USE_FAST_MEMORY
entry = (kmp_dephash_entry_t *)__kmp_fast_allocate(
thread, sizeof(kmp_dephash_entry_t));
#else
entry = (kmp_dephash_entry_t *)__kmp_thread_malloc(
thread, sizeof(kmp_dephash_entry_t));
#endif
entry->addr = addr;
entry->last_out = NULL;
entry->last_ins = NULL;
entry->last_mtxs = NULL;
entry->last_flag = ENTRY_LAST_INS;
entry->mtx_lock = NULL;
entry->next_in_bucket = h->buckets[bucket];
h->buckets[bucket] = entry;
#ifdef KMP_DEBUG
h->nelements++;
if (entry->next_in_bucket)
h->nconflicts++;
#endif
}
return entry;
}
static kmp_depnode_list_t *__kmp_add_node(kmp_info_t *thread,
kmp_depnode_list_t *list,
kmp_depnode_t *node) {
kmp_depnode_list_t *new_head;
#if USE_FAST_MEMORY
new_head = (kmp_depnode_list_t *)__kmp_fast_allocate(
thread, sizeof(kmp_depnode_list_t));
#else
new_head = (kmp_depnode_list_t *)__kmp_thread_malloc(
thread, sizeof(kmp_depnode_list_t));
#endif
new_head->node = __kmp_node_ref(node);
new_head->next = list;
return new_head;
}
static inline void __kmp_track_dependence(kmp_depnode_t *source,
kmp_depnode_t *sink,
kmp_task_t *sink_task) {
#ifdef KMP_SUPPORT_GRAPH_OUTPUT
kmp_taskdata_t *task_source = KMP_TASK_TO_TASKDATA(source->dn.task);
// do not use sink->dn.task as that is only filled after the dependencies
// are already processed!
kmp_taskdata_t *task_sink = KMP_TASK_TO_TASKDATA(sink_task);
__kmp_printf("%d(%s) -> %d(%s)\n", source->dn.id,
task_source->td_ident->psource, sink->dn.id,
task_sink->td_ident->psource);
#endif
#if OMPT_SUPPORT && OMPT_OPTIONAL
/* OMPT tracks dependences between task (a=source, b=sink) in which
task a blocks the execution of b through the ompt_new_dependence_callback
*/
if (ompt_enabled.ompt_callback_task_dependence) {
kmp_taskdata_t *task_source = KMP_TASK_TO_TASKDATA(source->dn.task);
kmp_taskdata_t *task_sink = KMP_TASK_TO_TASKDATA(sink_task);
ompt_callbacks.ompt_callback(ompt_callback_task_dependence)(
&(task_source->ompt_task_info.task_data),
&(task_sink->ompt_task_info.task_data));
}
#endif /* OMPT_SUPPORT && OMPT_OPTIONAL */
}
static inline kmp_int32
__kmp_depnode_link_successor(kmp_int32 gtid, kmp_info_t *thread,
kmp_task_t *task, kmp_depnode_t *node,
kmp_depnode_list_t *plist) {
if (!plist)
return 0;
kmp_int32 npredecessors = 0;
// link node as successor of list elements
for (kmp_depnode_list_t *p = plist; p; p = p->next) {
kmp_depnode_t *dep = p->node;
if (dep->dn.task) {
KMP_ACQUIRE_DEPNODE(gtid, dep);
if (dep->dn.task) {
__kmp_track_dependence(dep, node, task);
dep->dn.successors = __kmp_add_node(thread, dep->dn.successors, node);
KA_TRACE(40, ("__kmp_process_deps: T#%d adding dependence from %p to "
"%p\n",
gtid, KMP_TASK_TO_TASKDATA(dep->dn.task),
KMP_TASK_TO_TASKDATA(task)));
npredecessors++;
}
KMP_RELEASE_DEPNODE(gtid, dep);
}
}
return npredecessors;
}
static inline kmp_int32 __kmp_depnode_link_successor(kmp_int32 gtid,
kmp_info_t *thread,
kmp_task_t *task,
kmp_depnode_t *source,
kmp_depnode_t *sink) {
if (!sink)
return 0;
kmp_int32 npredecessors = 0;
if (sink->dn.task) {
// synchronously add source to sink's list of successors
KMP_ACQUIRE_DEPNODE(gtid, sink);
if (sink->dn.task) {
__kmp_track_dependence(sink, source, task);
sink->dn.successors = __kmp_add_node(thread, sink->dn.successors, source);
KA_TRACE(40, ("__kmp_process_deps: T#%d adding dependence from %p to "
"%p\n",
gtid, KMP_TASK_TO_TASKDATA(sink->dn.task),
KMP_TASK_TO_TASKDATA(task)));
npredecessors++;
}
KMP_RELEASE_DEPNODE(gtid, sink);
}
return npredecessors;
}
template <bool filter>
static inline kmp_int32
__kmp_process_deps(kmp_int32 gtid, kmp_depnode_t *node, kmp_dephash_t *hash,
bool dep_barrier, kmp_int32 ndeps,
kmp_depend_info_t *dep_list, kmp_task_t *task) {
KA_TRACE(30, ("__kmp_process_deps<%d>: T#%d processing %d dependencies : "
"dep_barrier = %d\n",
filter, gtid, ndeps, dep_barrier));
kmp_info_t *thread = __kmp_threads[gtid];
kmp_int32 npredecessors = 0;
for (kmp_int32 i = 0; i < ndeps; i++) {
const kmp_depend_info_t *dep = &dep_list[i];
if (filter && dep->base_addr == 0)
continue; // skip filtered entries
kmp_dephash_entry_t *info =
__kmp_dephash_find(thread, hash, dep->base_addr);
kmp_depnode_t *last_out = info->last_out;
kmp_depnode_list_t *last_ins = info->last_ins;
kmp_depnode_list_t *last_mtxs = info->last_mtxs;
if (dep->flags.out) { // out --> clean lists of ins and mtxs if any
if (last_ins || last_mtxs) {
if (info->last_flag == ENTRY_LAST_INS) { // INS were last
npredecessors +=
__kmp_depnode_link_successor(gtid, thread, task, node, last_ins);
} else { // MTXS were last
npredecessors +=
__kmp_depnode_link_successor(gtid, thread, task, node, last_mtxs);
}
__kmp_depnode_list_free(thread, last_ins);
__kmp_depnode_list_free(thread, last_mtxs);
info->last_ins = NULL;
info->last_mtxs = NULL;
} else {
npredecessors +=
__kmp_depnode_link_successor(gtid, thread, task, node, last_out);
}
__kmp_node_deref(thread, last_out);
if (dep_barrier) {
// if this is a sync point in the serial sequence, then the previous
// outputs are guaranteed to be completed after the execution of this
// task so the previous output nodes can be cleared.
info->last_out = NULL;
} else {
info->last_out = __kmp_node_ref(node);
}
} else if (dep->flags.in) {
// in --> link node to either last_out or last_mtxs, clean earlier deps
if (last_mtxs) {
npredecessors +=
__kmp_depnode_link_successor(gtid, thread, task, node, last_mtxs);
__kmp_node_deref(thread, last_out);
info->last_out = NULL;
if (info->last_flag == ENTRY_LAST_MTXS && last_ins) { // MTXS were last
// clean old INS before creating new list
__kmp_depnode_list_free(thread, last_ins);
info->last_ins = NULL;
}
} else {
// link node as successor of the last_out if any
npredecessors +=
__kmp_depnode_link_successor(gtid, thread, task, node, last_out);
}
info->last_flag = ENTRY_LAST_INS;
info->last_ins = __kmp_add_node(thread, info->last_ins, node);
} else {
KMP_DEBUG_ASSERT(dep->flags.mtx == 1);
// mtx --> link node to either last_out or last_ins, clean earlier deps
if (last_ins) {
npredecessors +=
__kmp_depnode_link_successor(gtid, thread, task, node, last_ins);
__kmp_node_deref(thread, last_out);
info->last_out = NULL;
if (info->last_flag == ENTRY_LAST_INS && last_mtxs) { // INS were last
// clean old MTXS before creating new list
__kmp_depnode_list_free(thread, last_mtxs);
info->last_mtxs = NULL;
}
} else {
// link node as successor of the last_out if any
npredecessors +=
__kmp_depnode_link_successor(gtid, thread, task, node, last_out);
}
info->last_flag = ENTRY_LAST_MTXS;
info->last_mtxs = __kmp_add_node(thread, info->last_mtxs, node);
if (info->mtx_lock == NULL) {
info->mtx_lock = (kmp_lock_t *)__kmp_allocate(sizeof(kmp_lock_t));
__kmp_init_lock(info->mtx_lock);
}
KMP_DEBUG_ASSERT(node->dn.mtx_num_locks < MAX_MTX_DEPS);
kmp_int32 m;
// Save lock in node's array
for (m = 0; m < MAX_MTX_DEPS; ++m) {
// sort pointers in decreasing order to avoid potential livelock
if (node->dn.mtx_locks[m] < info->mtx_lock) {
KMP_DEBUG_ASSERT(node->dn.mtx_locks[node->dn.mtx_num_locks] == NULL);
for (int n = node->dn.mtx_num_locks; n > m; --n) {
// shift right all lesser non-NULL pointers
KMP_DEBUG_ASSERT(node->dn.mtx_locks[n - 1] != NULL);
node->dn.mtx_locks[n] = node->dn.mtx_locks[n - 1];
}
node->dn.mtx_locks[m] = info->mtx_lock;
break;
}
}
KMP_DEBUG_ASSERT(m < MAX_MTX_DEPS); // must break from loop
node->dn.mtx_num_locks++;
}
}
KA_TRACE(30, ("__kmp_process_deps<%d>: T#%d found %d predecessors\n", filter,
gtid, npredecessors));
return npredecessors;
}
#define NO_DEP_BARRIER (false)
#define DEP_BARRIER (true)
// returns true if the task has any outstanding dependence
static bool __kmp_check_deps(kmp_int32 gtid, kmp_depnode_t *node,
kmp_task_t *task, kmp_dephash_t *hash,
bool dep_barrier, kmp_int32 ndeps,
kmp_depend_info_t *dep_list,
kmp_int32 ndeps_noalias,
kmp_depend_info_t *noalias_dep_list) {
int i, n_mtxs = 0;
#if KMP_DEBUG
kmp_taskdata_t *taskdata = KMP_TASK_TO_TASKDATA(task);
#endif
KA_TRACE(20, ("__kmp_check_deps: T#%d checking dependencies for task %p : %d "
"possibly aliased dependencies, %d non-aliased dependencies : "
"dep_barrier=%d .\n",
gtid, taskdata, ndeps, ndeps_noalias, dep_barrier));
// Filter deps in dep_list
// TODO: Different algorithm for large dep_list ( > 10 ? )
for (i = 0; i < ndeps; i++) {
if (dep_list[i].base_addr != 0) {
for (int j = i + 1; j < ndeps; j++) {
if (dep_list[i].base_addr == dep_list[j].base_addr) {
dep_list[i].flags.in |= dep_list[j].flags.in;
dep_list[i].flags.out |=
(dep_list[j].flags.out ||
(dep_list[i].flags.in && dep_list[j].flags.mtx) ||
(dep_list[i].flags.mtx && dep_list[j].flags.in));
dep_list[i].flags.mtx =
dep_list[i].flags.mtx | dep_list[j].flags.mtx &&
!dep_list[i].flags.out;
dep_list[j].base_addr = 0; // Mark j element as void
}
}
if (dep_list[i].flags.mtx) {
// limit number of mtx deps to MAX_MTX_DEPS per node
if (n_mtxs < MAX_MTX_DEPS && task != NULL) {
++n_mtxs;
} else {
dep_list[i].flags.in = 1; // downgrade mutexinoutset to inout
dep_list[i].flags.out = 1;
dep_list[i].flags.mtx = 0;
}
}
}
}
// doesn't need to be atomic as no other thread is going to be accessing this
// node just yet.
// npredecessors is set -1 to ensure that none of the releasing tasks queues
// this task before we have finished processing all the dependencies
node->dn.npredecessors = -1;
// used to pack all npredecessors additions into a single atomic operation at
// the end
int npredecessors;
npredecessors = __kmp_process_deps<true>(gtid, node, hash, dep_barrier, ndeps,
dep_list, task);
npredecessors += __kmp_process_deps<false>(
gtid, node, hash, dep_barrier, ndeps_noalias, noalias_dep_list, task);
node->dn.task = task;
KMP_MB();
// Account for our initial fake value
npredecessors++;
// Update predecessors and obtain current value to check if there are still
// any outstanding dependences (some tasks may have finished while we processed
// the dependences)
npredecessors =
node->dn.npredecessors.fetch_add(npredecessors) + npredecessors;
KA_TRACE(20, ("__kmp_check_deps: T#%d found %d predecessors for task %p \n",
gtid, npredecessors, taskdata));
// beyond this point the task could be queued (and executed) by a releasing
// task...
return npredecessors > 0 ? true : false;
}
/*!
@ingroup TASKING
@param loc_ref location of the original task directive
@param gtid Global Thread ID of encountering thread
@param new_task task thunk allocated by __kmp_omp_task_alloc() for the ''new
task''
@param ndeps Number of depend items with possible aliasing
@param dep_list List of depend items with possible aliasing
@param ndeps_noalias Number of depend items with no aliasing
@param noalias_dep_list List of depend items with no aliasing
@return Returns either TASK_CURRENT_NOT_QUEUED if the current task was not
suspended and queued, or TASK_CURRENT_QUEUED if it was suspended and queued
Schedule a non-thread-switchable task with dependences for execution
*/
kmp_int32 __kmpc_omp_task_with_deps(ident_t *loc_ref, kmp_int32 gtid,
kmp_task_t *new_task, kmp_int32 ndeps,
kmp_depend_info_t *dep_list,
kmp_int32 ndeps_noalias,
kmp_depend_info_t *noalias_dep_list) {
kmp_taskdata_t *new_taskdata = KMP_TASK_TO_TASKDATA(new_task);
KA_TRACE(10, ("__kmpc_omp_task_with_deps(enter): T#%d loc=%p task=%p\n", gtid,
loc_ref, new_taskdata));
kmp_info_t *thread = __kmp_threads[gtid];
kmp_taskdata_t *current_task = thread->th.th_current_task;
#if OMPT_SUPPORT
if (ompt_enabled.enabled) {
OMPT_STORE_RETURN_ADDRESS(gtid);
if (!current_task->ompt_task_info.frame.enter_frame.ptr)
current_task->ompt_task_info.frame.enter_frame.ptr =
OMPT_GET_FRAME_ADDRESS(0);
if (ompt_enabled.ompt_callback_task_create) {
ompt_data_t task_data = ompt_data_none;
ompt_callbacks.ompt_callback(ompt_callback_task_create)(
current_task ? &(current_task->ompt_task_info.task_data) : &task_data,
current_task ? &(current_task->ompt_task_info.frame) : NULL,
&(new_taskdata->ompt_task_info.task_data),
ompt_task_explicit | TASK_TYPE_DETAILS_FORMAT(new_taskdata), 1,
OMPT_LOAD_RETURN_ADDRESS(gtid));
}
new_taskdata->ompt_task_info.frame.enter_frame.ptr = OMPT_GET_FRAME_ADDRESS(0);
}
#if OMPT_OPTIONAL
/* OMPT grab all dependences if requested by the tool */
if (ndeps + ndeps_noalias > 0 &&
ompt_enabled.ompt_callback_dependences) {
kmp_int32 i;
new_taskdata->ompt_task_info.ndeps = ndeps + ndeps_noalias;
new_taskdata->ompt_task_info.deps =
(ompt_dependence_t *)KMP_OMPT_DEPS_ALLOC(
thread, (ndeps + ndeps_noalias) * sizeof(ompt_dependence_t));
KMP_ASSERT(new_taskdata->ompt_task_info.deps != NULL);
for (i = 0; i < ndeps; i++) {
new_taskdata->ompt_task_info.deps[i].variable.ptr =
(void *)dep_list[i].base_addr;
if (dep_list[i].flags.in && dep_list[i].flags.out)
new_taskdata->ompt_task_info.deps[i].dependence_type =
ompt_dependence_type_inout;
else if (dep_list[i].flags.out)
new_taskdata->ompt_task_info.deps[i].dependence_type =
ompt_dependence_type_out;
else if (dep_list[i].flags.in)
new_taskdata->ompt_task_info.deps[i].dependence_type =
ompt_dependence_type_in;
}
for (i = 0; i < ndeps_noalias; i++) {
new_taskdata->ompt_task_info.deps[ndeps + i].variable.ptr =
(void *)noalias_dep_list[i].base_addr;
if (noalias_dep_list[i].flags.in && noalias_dep_list[i].flags.out)
new_taskdata->ompt_task_info.deps[ndeps + i].dependence_type =
ompt_dependence_type_inout;
else if (noalias_dep_list[i].flags.out)
new_taskdata->ompt_task_info.deps[ndeps + i].dependence_type =
ompt_dependence_type_out;
else if (noalias_dep_list[i].flags.in)
new_taskdata->ompt_task_info.deps[ndeps + i].dependence_type =
ompt_dependence_type_in;
}
ompt_callbacks.ompt_callback(ompt_callback_dependences)(
&(new_taskdata->ompt_task_info.task_data),
new_taskdata->ompt_task_info.deps, new_taskdata->ompt_task_info.ndeps);
/* We can now free the allocated memory for the dependencies */
/* For OMPD we might want to delay the free until task_end */
KMP_OMPT_DEPS_FREE(thread, new_taskdata->ompt_task_info.deps);
new_taskdata->ompt_task_info.deps = NULL;
new_taskdata->ompt_task_info.ndeps = 0;
}
#endif /* OMPT_OPTIONAL */
#endif /* OMPT_SUPPORT */
bool serial = current_task->td_flags.team_serial ||
current_task->td_flags.tasking_ser ||
current_task->td_flags.final;
#if OMP_45_ENABLED
kmp_task_team_t *task_team = thread->th.th_task_team;
serial = serial && !(task_team && task_team->tt.tt_found_proxy_tasks);
#endif
if (!serial && (ndeps > 0 || ndeps_noalias > 0)) {
/* if no dependencies have been tracked yet, create the dependence hash */
if (current_task->td_dephash == NULL)
current_task->td_dephash = __kmp_dephash_create(thread, current_task);
#if USE_FAST_MEMORY
kmp_depnode_t *node =
(kmp_depnode_t *)__kmp_fast_allocate(thread, sizeof(kmp_depnode_t));
#else
kmp_depnode_t *node =
(kmp_depnode_t *)__kmp_thread_malloc(thread, sizeof(kmp_depnode_t));
#endif
__kmp_init_node(node);
new_taskdata->td_depnode = node;
if (__kmp_check_deps(gtid, node, new_task, current_task->td_dephash,
NO_DEP_BARRIER, ndeps, dep_list, ndeps_noalias,
noalias_dep_list)) {
KA_TRACE(10, ("__kmpc_omp_task_with_deps(exit): T#%d task had blocking "
"dependencies: "
"loc=%p task=%p, return: TASK_CURRENT_NOT_QUEUED\n",
gtid, loc_ref, new_taskdata));
#if OMPT_SUPPORT
if (ompt_enabled.enabled) {
current_task->ompt_task_info.frame.enter_frame = ompt_data_none;
}
#endif
return TASK_CURRENT_NOT_QUEUED;
}
} else {
KA_TRACE(10, ("__kmpc_omp_task_with_deps(exit): T#%d ignored dependencies "
"for task (serialized)"
"loc=%p task=%p\n",
gtid, loc_ref, new_taskdata));
}
KA_TRACE(10, ("__kmpc_omp_task_with_deps(exit): T#%d task had no blocking "
"dependencies : "
"loc=%p task=%p, transferring to __kmp_omp_task\n",
gtid, loc_ref, new_taskdata));
kmp_int32 ret = __kmp_omp_task(gtid, new_task, true);
#if OMPT_SUPPORT
if (ompt_enabled.enabled) {
current_task->ompt_task_info.frame.enter_frame = ompt_data_none;
}
#endif
return ret;
}
/*!
@ingroup TASKING
@param loc_ref location of the original task directive
@param gtid Global Thread ID of encountering thread
@param ndeps Number of depend items with possible aliasing
@param dep_list List of depend items with possible aliasing
@param ndeps_noalias Number of depend items with no aliasing
@param noalias_dep_list List of depend items with no aliasing
Blocks the current task until all specified dependencies have been fulfilled.
*/
void __kmpc_omp_wait_deps(ident_t *loc_ref, kmp_int32 gtid, kmp_int32 ndeps,
kmp_depend_info_t *dep_list, kmp_int32 ndeps_noalias,
kmp_depend_info_t *noalias_dep_list) {
KA_TRACE(10, ("__kmpc_omp_wait_deps(enter): T#%d loc=%p\n", gtid, loc_ref));
if (ndeps == 0 && ndeps_noalias == 0) {
KA_TRACE(10, ("__kmpc_omp_wait_deps(exit): T#%d has no dependencies to "
"wait upon : loc=%p\n",
gtid, loc_ref));
return;
}
kmp_info_t *thread = __kmp_threads[gtid];
kmp_taskdata_t *current_task = thread->th.th_current_task;
// We can return immediately as:
// - dependences are not computed in serial teams (except with proxy tasks)
// - if the dephash is not yet created it means we have nothing to wait for
bool ignore = current_task->td_flags.team_serial ||
current_task->td_flags.tasking_ser ||
current_task->td_flags.final;
#if OMP_45_ENABLED
ignore = ignore && thread->th.th_task_team != NULL &&
thread->th.th_task_team->tt.tt_found_proxy_tasks == FALSE;
#endif
ignore = ignore || current_task->td_dephash == NULL;
if (ignore) {
KA_TRACE(10, ("__kmpc_omp_wait_deps(exit): T#%d has no blocking "
"dependencies : loc=%p\n",
gtid, loc_ref));
return;
}
kmp_depnode_t node = {0};
__kmp_init_node(&node);
if (!__kmp_check_deps(gtid, &node, NULL, current_task->td_dephash,
DEP_BARRIER, ndeps, dep_list, ndeps_noalias,
noalias_dep_list)) {
KA_TRACE(10, ("__kmpc_omp_wait_deps(exit): T#%d has no blocking "
"dependencies : loc=%p\n",
gtid, loc_ref));
return;
}
int thread_finished = FALSE;
kmp_flag_32 flag((std::atomic<kmp_uint32> *)&node.dn.npredecessors, 0U);
while (node.dn.npredecessors > 0) {
flag.execute_tasks(thread, gtid, FALSE,
&thread_finished USE_ITT_BUILD_ARG(NULL),
__kmp_task_stealing_constraint);
}
KA_TRACE(10, ("__kmpc_omp_wait_deps(exit): T#%d finished waiting : loc=%p\n",
gtid, loc_ref));
}
#endif /* OMP_40_ENABLED */

150
runtime/src/kmp_taskdeps.h Normal file

@@ -0,0 +1,150 @@
/*
* kmp_taskdeps.h
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_TASKDEPS_H
#define KMP_TASKDEPS_H
#include "kmp.h"
#if OMP_40_ENABLED
#define KMP_ACQUIRE_DEPNODE(gtid, n) __kmp_acquire_lock(&(n)->dn.lock, (gtid))
#define KMP_RELEASE_DEPNODE(gtid, n) __kmp_release_lock(&(n)->dn.lock, (gtid))
static inline void __kmp_node_deref(kmp_info_t *thread, kmp_depnode_t *node) {
if (!node)
return;
kmp_int32 n = KMP_ATOMIC_DEC(&node->dn.nrefs) - 1;
if (n == 0) {
KMP_ASSERT(node->dn.nrefs == 0);
#if USE_FAST_MEMORY
__kmp_fast_free(thread, node);
#else
__kmp_thread_free(thread, node);
#endif
}
}
static inline void __kmp_depnode_list_free(kmp_info_t *thread,
kmp_depnode_list *list) {
kmp_depnode_list *next;
for (; list; list = next) {
next = list->next;
__kmp_node_deref(thread, list->node);
#if USE_FAST_MEMORY
__kmp_fast_free(thread, list);
#else
__kmp_thread_free(thread, list);
#endif
}
}
static inline void __kmp_dephash_free_entries(kmp_info_t *thread,
kmp_dephash_t *h) {
for (size_t i = 0; i < h->size; i++) {
if (h->buckets[i]) {
kmp_dephash_entry_t *next;
for (kmp_dephash_entry_t *entry = h->buckets[i]; entry; entry = next) {
next = entry->next_in_bucket;
__kmp_depnode_list_free(thread, entry->last_ins);
__kmp_depnode_list_free(thread, entry->last_mtxs);
__kmp_node_deref(thread, entry->last_out);
if (entry->mtx_lock) {
__kmp_destroy_lock(entry->mtx_lock);
__kmp_free(entry->mtx_lock);
}
#if USE_FAST_MEMORY
__kmp_fast_free(thread, entry);
#else
__kmp_thread_free(thread, entry);
#endif
}
h->buckets[i] = 0;
}
}
}
static inline void __kmp_dephash_free(kmp_info_t *thread, kmp_dephash_t *h) {
__kmp_dephash_free_entries(thread, h);
#if USE_FAST_MEMORY
__kmp_fast_free(thread, h);
#else
__kmp_thread_free(thread, h);
#endif
}
static inline void __kmp_release_deps(kmp_int32 gtid, kmp_taskdata_t *task) {
kmp_info_t *thread = __kmp_threads[gtid];
kmp_depnode_t *node = task->td_depnode;
if (task->td_dephash) {
KA_TRACE(
40, ("__kmp_release_deps: T#%d freeing dependencies hash of task %p.\n",
gtid, task));
__kmp_dephash_free(thread, task->td_dephash);
task->td_dephash = NULL;
}
if (!node)
return;
KA_TRACE(20, ("__kmp_release_deps: T#%d notifying successors of task %p.\n",
gtid, task));
KMP_ACQUIRE_DEPNODE(gtid, node);
node->dn.task =
NULL; // mark this task as finished, so no new dependencies are generated
KMP_RELEASE_DEPNODE(gtid, node);
kmp_depnode_list_t *next;
for (kmp_depnode_list_t *p = node->dn.successors; p; p = next) {
kmp_depnode_t *successor = p->node;
kmp_int32 npredecessors = KMP_ATOMIC_DEC(&successor->dn.npredecessors) - 1;
// successor task can be NULL for wait_depends or because deps are still
// being processed
if (npredecessors == 0) {
KMP_MB();
if (successor->dn.task) {
KA_TRACE(20, ("__kmp_release_deps: T#%d successor %p of %p scheduled "
"for execution.\n",
gtid, successor->dn.task, task));
__kmp_omp_task(gtid, successor->dn.task, false);
}
}
next = p->next;
__kmp_node_deref(thread, p->node);
#if USE_FAST_MEMORY
__kmp_fast_free(thread, p);
#else
__kmp_thread_free(thread, p);
#endif
}
__kmp_node_deref(thread, node);
KA_TRACE(
20,
("__kmp_release_deps: T#%d all successors of %p notified of completion\n",
gtid, task));
}
#endif // OMP_40_ENABLED
#endif // KMP_TASKDEPS_H

4293
runtime/src/kmp_tasking.cpp Normal file

File diff suppressed because it is too large

2029
runtime/src/kmp_taskq.cpp Normal file

File diff suppressed because it is too large

800
runtime/src/kmp_threadprivate.cpp Normal file

@@ -0,0 +1,800 @@
/*
* kmp_threadprivate.cpp -- OpenMP threadprivate support library
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp.h"
#include "kmp_i18n.h"
#include "kmp_itt.h"
#define USE_CHECKS_COMMON
#define KMP_INLINE_SUBR 1
void kmp_threadprivate_insert_private_data(int gtid, void *pc_addr,
void *data_addr, size_t pc_size);
struct private_common *kmp_threadprivate_insert(int gtid, void *pc_addr,
void *data_addr,
size_t pc_size);
struct shared_table __kmp_threadprivate_d_table;
static
#ifdef KMP_INLINE_SUBR
__forceinline
#endif
struct private_common *
__kmp_threadprivate_find_task_common(struct common_table *tbl, int gtid,
void *pc_addr)
{
struct private_common *tn;
#ifdef KMP_TASK_COMMON_DEBUG
KC_TRACE(10, ("__kmp_threadprivate_find_task_common: thread#%d, called with "
"address %p\n",
gtid, pc_addr));
dump_list();
#endif
for (tn = tbl->data[KMP_HASH(pc_addr)]; tn; tn = tn->next) {
if (tn->gbl_addr == pc_addr) {
#ifdef KMP_TASK_COMMON_DEBUG
KC_TRACE(10, ("__kmp_threadprivate_find_task_common: thread#%d, found "
"node %p on list\n",
gtid, pc_addr));
#endif
return tn;
}
}
return 0;
}
static
#ifdef KMP_INLINE_SUBR
__forceinline
#endif
struct shared_common *
__kmp_find_shared_task_common(struct shared_table *tbl, int gtid,
void *pc_addr) {
struct shared_common *tn;
for (tn = tbl->data[KMP_HASH(pc_addr)]; tn; tn = tn->next) {
if (tn->gbl_addr == pc_addr) {
#ifdef KMP_TASK_COMMON_DEBUG
KC_TRACE(
10,
("__kmp_find_shared_task_common: thread#%d, found node %p on list\n",
gtid, pc_addr));
#endif
return tn;
}
}
return 0;
}
// Create a template for the data initialized storage. Either the template is
// NULL indicating zero fill, or the template is a copy of the original data.
static struct private_data *__kmp_init_common_data(void *pc_addr,
size_t pc_size) {
struct private_data *d;
size_t i;
char *p;
d = (struct private_data *)__kmp_allocate(sizeof(struct private_data));
/*
d->data = 0; // AC: commented out because __kmp_allocate zeroes the
memory
d->next = 0;
*/
d->size = pc_size;
d->more = 1;
p = (char *)pc_addr;
for (i = pc_size; i > 0; --i) {
if (*p++ != '\0') {
d->data = __kmp_allocate(pc_size);
KMP_MEMCPY(d->data, pc_addr, pc_size);
break;
}
}
return d;
}
// Initialize the data area from the template.
static void __kmp_copy_common_data(void *pc_addr, struct private_data *d) {
char *addr = (char *)pc_addr;
int i, offset;
for (offset = 0; d != 0; d = d->next) {
for (i = d->more; i > 0; --i) {
if (d->data == 0)
memset(&addr[offset], '\0', d->size);
else
KMP_MEMCPY(&addr[offset], d->data, d->size);
offset += d->size;
}
}
}
/* we are called from __kmp_serial_initialize() with __kmp_initz_lock held. */
void __kmp_common_initialize(void) {
if (!TCR_4(__kmp_init_common)) {
int q;
#ifdef KMP_DEBUG
int gtid;
#endif
__kmp_threadpriv_cache_list = NULL;
#ifdef KMP_DEBUG
/* verify the uber masters were initialized */
for (gtid = 0; gtid < __kmp_threads_capacity; gtid++)
if (__kmp_root[gtid]) {
KMP_DEBUG_ASSERT(__kmp_root[gtid]->r.r_uber_thread);
for (q = 0; q < KMP_HASH_TABLE_SIZE; ++q)
KMP_DEBUG_ASSERT(
!__kmp_root[gtid]->r.r_uber_thread->th.th_pri_common->data[q]);
/* __kmp_root[ gtid ]-> r.r_uber_thread ->
* th.th_pri_common -> data[ q ] = 0;*/
}
#endif /* KMP_DEBUG */
for (q = 0; q < KMP_HASH_TABLE_SIZE; ++q)
__kmp_threadprivate_d_table.data[q] = 0;
TCW_4(__kmp_init_common, TRUE);
}
}
/* Call all destructors for threadprivate data belonging to all threads.
Currently unused! */
void __kmp_common_destroy(void) {
if (TCR_4(__kmp_init_common)) {
int q;
TCW_4(__kmp_init_common, FALSE);
for (q = 0; q < KMP_HASH_TABLE_SIZE; ++q) {
int gtid;
struct private_common *tn;
struct shared_common *d_tn;
/* C++ destructors need to be called once per thread before exiting.
Don't call destructors for master thread though unless we used copy
constructor */
for (d_tn = __kmp_threadprivate_d_table.data[q]; d_tn;
d_tn = d_tn->next) {
if (d_tn->is_vec) {
if (d_tn->dt.dtorv != 0) {
for (gtid = 0; gtid < __kmp_all_nth; ++gtid) {
if (__kmp_threads[gtid]) {
if ((__kmp_foreign_tp) ? (!KMP_INITIAL_GTID(gtid))
: (!KMP_UBER_GTID(gtid))) {
tn = __kmp_threadprivate_find_task_common(
__kmp_threads[gtid]->th.th_pri_common, gtid,
d_tn->gbl_addr);
if (tn) {
(*d_tn->dt.dtorv)(tn->par_addr, d_tn->vec_len);
}
}
}
}
if (d_tn->obj_init != 0) {
(*d_tn->dt.dtorv)(d_tn->obj_init, d_tn->vec_len);
}
}
} else {
if (d_tn->dt.dtor != 0) {
for (gtid = 0; gtid < __kmp_all_nth; ++gtid) {
if (__kmp_threads[gtid]) {
if ((__kmp_foreign_tp) ? (!KMP_INITIAL_GTID(gtid))
: (!KMP_UBER_GTID(gtid))) {
tn = __kmp_threadprivate_find_task_common(
__kmp_threads[gtid]->th.th_pri_common, gtid,
d_tn->gbl_addr);
if (tn) {
(*d_tn->dt.dtor)(tn->par_addr);
}
}
}
}
if (d_tn->obj_init != 0) {
(*d_tn->dt.dtor)(d_tn->obj_init);
}
}
}
}
__kmp_threadprivate_d_table.data[q] = 0;
}
}
}
/* Call all destructors for threadprivate data belonging to this thread */
void __kmp_common_destroy_gtid(int gtid) {
struct private_common *tn;
struct shared_common *d_tn;
if (!TCR_4(__kmp_init_gtid)) {
// This is possible when one of multiple roots initiates early library
// termination in a sequential region while other teams are active, and its
// child threads are about to end.
return;
}
KC_TRACE(10, ("__kmp_common_destroy_gtid: T#%d called\n", gtid));
if ((__kmp_foreign_tp) ? (!KMP_INITIAL_GTID(gtid)) : (!KMP_UBER_GTID(gtid))) {
if (TCR_4(__kmp_init_common)) {
/* Cannot do this here since not all threads have destroyed their data */
/* TCW_4(__kmp_init_common, FALSE); */
for (tn = __kmp_threads[gtid]->th.th_pri_head; tn; tn = tn->link) {
d_tn = __kmp_find_shared_task_common(&__kmp_threadprivate_d_table, gtid,
tn->gbl_addr);
KMP_DEBUG_ASSERT(d_tn);
if (d_tn->is_vec) {
if (d_tn->dt.dtorv != 0) {
(void)(*d_tn->dt.dtorv)(tn->par_addr, d_tn->vec_len);
}
if (d_tn->obj_init != 0) {
(void)(*d_tn->dt.dtorv)(d_tn->obj_init, d_tn->vec_len);
}
} else {
if (d_tn->dt.dtor != 0) {
(void)(*d_tn->dt.dtor)(tn->par_addr);
}
if (d_tn->obj_init != 0) {
(void)(*d_tn->dt.dtor)(d_tn->obj_init);
}
}
}
KC_TRACE(30, ("__kmp_common_destroy_gtid: T#%d threadprivate destructors "
"complete\n",
gtid));
}
}
}
#ifdef KMP_TASK_COMMON_DEBUG
static void dump_list(void) {
int p, q;
for (p = 0; p < __kmp_all_nth; ++p) {
if (!__kmp_threads[p])
continue;
for (q = 0; q < KMP_HASH_TABLE_SIZE; ++q) {
if (__kmp_threads[p]->th.th_pri_common->data[q]) {
struct private_common *tn;
KC_TRACE(10, ("\tdump_list: gtid:%d addresses\n", p));
for (tn = __kmp_threads[p]->th.th_pri_common->data[q]; tn;
tn = tn->next) {
KC_TRACE(10,
("\tdump_list: THREADPRIVATE: Serial %p -> Parallel %p\n",
tn->gbl_addr, tn->par_addr));
}
}
}
}
}
#endif /* KMP_TASK_COMMON_DEBUG */
// NOTE: this routine is to be called only from the serial part of the program.
void kmp_threadprivate_insert_private_data(int gtid, void *pc_addr,
void *data_addr, size_t pc_size) {
struct shared_common **lnk_tn, *d_tn;
KMP_DEBUG_ASSERT(__kmp_threads[gtid] &&
__kmp_threads[gtid]->th.th_root->r.r_active == 0);
d_tn = __kmp_find_shared_task_common(&__kmp_threadprivate_d_table, gtid,
pc_addr);
if (d_tn == 0) {
d_tn = (struct shared_common *)__kmp_allocate(sizeof(struct shared_common));
d_tn->gbl_addr = pc_addr;
d_tn->pod_init = __kmp_init_common_data(data_addr, pc_size);
/*
d_tn->obj_init = 0; // AC: commented out because __kmp_allocate
zeroes the memory
d_tn->ct.ctor = 0;
      d_tn->cct.cctor = 0;
d_tn->dt.dtor = 0;
d_tn->is_vec = FALSE;
d_tn->vec_len = 0L;
*/
d_tn->cmn_size = pc_size;
__kmp_acquire_lock(&__kmp_global_lock, gtid);
lnk_tn = &(__kmp_threadprivate_d_table.data[KMP_HASH(pc_addr)]);
d_tn->next = *lnk_tn;
*lnk_tn = d_tn;
__kmp_release_lock(&__kmp_global_lock, gtid);
}
}
struct private_common *kmp_threadprivate_insert(int gtid, void *pc_addr,
void *data_addr,
size_t pc_size) {
struct private_common *tn, **tt;
struct shared_common *d_tn;
/* +++++++++ START OF CRITICAL SECTION +++++++++ */
__kmp_acquire_lock(&__kmp_global_lock, gtid);
tn = (struct private_common *)__kmp_allocate(sizeof(struct private_common));
tn->gbl_addr = pc_addr;
d_tn = __kmp_find_shared_task_common(
&__kmp_threadprivate_d_table, gtid,
pc_addr); /* Only the MASTER data table exists. */
if (d_tn != 0) {
/* This threadprivate variable has already been seen. */
if (d_tn->pod_init == 0 && d_tn->obj_init == 0) {
d_tn->cmn_size = pc_size;
if (d_tn->is_vec) {
if (d_tn->ct.ctorv != 0) {
/* Construct from scratch so no prototype exists */
d_tn->obj_init = 0;
} else if (d_tn->cct.cctorv != 0) {
/* Now data initialize the prototype since it was previously
* registered */
d_tn->obj_init = (void *)__kmp_allocate(d_tn->cmn_size);
(void)(*d_tn->cct.cctorv)(d_tn->obj_init, pc_addr, d_tn->vec_len);
} else {
d_tn->pod_init = __kmp_init_common_data(data_addr, d_tn->cmn_size);
}
} else {
if (d_tn->ct.ctor != 0) {
/* Construct from scratch so no prototype exists */
d_tn->obj_init = 0;
} else if (d_tn->cct.cctor != 0) {
/* Now data initialize the prototype since it was previously
registered */
d_tn->obj_init = (void *)__kmp_allocate(d_tn->cmn_size);
(void)(*d_tn->cct.cctor)(d_tn->obj_init, pc_addr);
} else {
d_tn->pod_init = __kmp_init_common_data(data_addr, d_tn->cmn_size);
}
}
}
} else {
struct shared_common **lnk_tn;
d_tn = (struct shared_common *)__kmp_allocate(sizeof(struct shared_common));
d_tn->gbl_addr = pc_addr;
d_tn->cmn_size = pc_size;
d_tn->pod_init = __kmp_init_common_data(data_addr, pc_size);
/*
d_tn->obj_init = 0; // AC: commented out because __kmp_allocate
zeroes the memory
d_tn->ct.ctor = 0;
d_tn->cct.cctor = 0;
d_tn->dt.dtor = 0;
d_tn->is_vec = FALSE;
d_tn->vec_len = 0L;
*/
lnk_tn = &(__kmp_threadprivate_d_table.data[KMP_HASH(pc_addr)]);
d_tn->next = *lnk_tn;
*lnk_tn = d_tn;
}
tn->cmn_size = d_tn->cmn_size;
if ((__kmp_foreign_tp) ? (KMP_INITIAL_GTID(gtid)) : (KMP_UBER_GTID(gtid))) {
tn->par_addr = (void *)pc_addr;
} else {
tn->par_addr = (void *)__kmp_allocate(tn->cmn_size);
}
__kmp_release_lock(&__kmp_global_lock, gtid);
/* +++++++++ END OF CRITICAL SECTION +++++++++ */
#ifdef USE_CHECKS_COMMON
if (pc_size > d_tn->cmn_size) {
KC_TRACE(
10, ("__kmp_threadprivate_insert: THREADPRIVATE: %p (%" KMP_UINTPTR_SPEC
" ,%" KMP_UINTPTR_SPEC ")\n",
pc_addr, pc_size, d_tn->cmn_size));
KMP_FATAL(TPCommonBlocksInconsist);
}
#endif /* USE_CHECKS_COMMON */
tt = &(__kmp_threads[gtid]->th.th_pri_common->data[KMP_HASH(pc_addr)]);
#ifdef KMP_TASK_COMMON_DEBUG
if (*tt != 0) {
KC_TRACE(
10,
("__kmp_threadprivate_insert: WARNING! thread#%d: collision on %p\n",
gtid, pc_addr));
}
#endif
tn->next = *tt;
*tt = tn;
#ifdef KMP_TASK_COMMON_DEBUG
KC_TRACE(10,
("__kmp_threadprivate_insert: thread#%d, inserted node %p on list\n",
gtid, pc_addr));
dump_list();
#endif
/* Link the node into a simple list */
tn->link = __kmp_threads[gtid]->th.th_pri_head;
__kmp_threads[gtid]->th.th_pri_head = tn;
if ((__kmp_foreign_tp) ? (KMP_INITIAL_GTID(gtid)) : (KMP_UBER_GTID(gtid)))
return tn;
/* if C++ object with copy constructor, use it;
* else if C++ object with constructor, use it for the non-master copies only;
* else use pod_init and memcpy
*
* C++ constructors need to be called once for each non-master thread on
* allocate
* C++ copy constructors need to be called once for each thread on allocate */
/* C++ object with constructors/destructors; don't call constructors for
master thread though */
if (d_tn->is_vec) {
if (d_tn->ct.ctorv != 0) {
(void)(*d_tn->ct.ctorv)(tn->par_addr, d_tn->vec_len);
} else if (d_tn->cct.cctorv != 0) {
(void)(*d_tn->cct.cctorv)(tn->par_addr, d_tn->obj_init, d_tn->vec_len);
} else if (tn->par_addr != tn->gbl_addr) {
__kmp_copy_common_data(tn->par_addr, d_tn->pod_init);
}
} else {
if (d_tn->ct.ctor != 0) {
(void)(*d_tn->ct.ctor)(tn->par_addr);
} else if (d_tn->cct.cctor != 0) {
(void)(*d_tn->cct.cctor)(tn->par_addr, d_tn->obj_init);
} else if (tn->par_addr != tn->gbl_addr) {
__kmp_copy_common_data(tn->par_addr, d_tn->pod_init);
}
}
/* !BUILD_OPENMP_C
if (tn->par_addr != tn->gbl_addr)
__kmp_copy_common_data( tn->par_addr, d_tn->pod_init ); */
return tn;
}
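The insertion logic above pairs a global `shared_common` descriptor with a per-thread `private_common` node, keyed by the variable's global address, so each thread resolves the same global address to its own replica. A minimal sketch of that find-or-insert idea, with hypothetical names and `std::unordered_map` standing in for the runtime's open-hash tables:

```cpp
#include <cassert>
#include <cstring>
#include <unordered_map>
#include <vector>

// Hypothetical model: one map per thread from a global variable's
// address to that thread's private copy of it.
struct PrivateCopy {
  std::vector<unsigned char> storage; // the thread-local replica
};

using ThreadPrivateTable = std::unordered_map<const void *, PrivateCopy>;

// Find-or-insert, mirroring kmp_threadprivate_insert: on first touch,
// allocate a copy sized like the global and seed it from the global's
// bits (the POD memcpy path; ctor/cctor handling is omitted here).
void *threadprivate_lookup(ThreadPrivateTable &table, const void *gbl_addr,
                           std::size_t size) {
  auto it = table.find(gbl_addr);
  if (it == table.end()) {
    PrivateCopy copy;
    copy.storage.resize(size);
    std::memcpy(copy.storage.data(), gbl_addr, size); // like pod_init
    it = table.emplace(gbl_addr, std::move(copy)).first;
  }
  return it->second.storage.data();
}
```

Writes through the returned pointer stay local to one table, while lookups in the same table keep returning the same storage, which is the property the real per-thread `th_pri_common` tables provide.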
/* ------------------------------------------------------------------------ */
/* We are currently parallel, and we know the thread id. */
/* ------------------------------------------------------------------------ */
/*!
@ingroup THREADPRIVATE
@param loc source location information
@param data pointer to data being privatized
@param ctor pointer to constructor function for data
@param cctor pointer to copy constructor function for data
@param dtor pointer to destructor function for data
Register constructors and destructors for thread private data.
This function is called when executing in parallel, when we know the thread id.
*/
void __kmpc_threadprivate_register(ident_t *loc, void *data, kmpc_ctor ctor,
kmpc_cctor cctor, kmpc_dtor dtor) {
struct shared_common *d_tn, **lnk_tn;
KC_TRACE(10, ("__kmpc_threadprivate_register: called\n"));
#ifdef USE_CHECKS_COMMON
/* copy constructor must be zero for current code gen (Nov 2002 - jph) */
KMP_ASSERT(cctor == 0);
#endif /* USE_CHECKS_COMMON */
/* Only the global data table exists. */
d_tn = __kmp_find_shared_task_common(&__kmp_threadprivate_d_table, -1, data);
if (d_tn == 0) {
d_tn = (struct shared_common *)__kmp_allocate(sizeof(struct shared_common));
d_tn->gbl_addr = data;
d_tn->ct.ctor = ctor;
d_tn->cct.cctor = cctor;
d_tn->dt.dtor = dtor;
/*
d_tn->is_vec = FALSE; // AC: commented out because __kmp_allocate
zeroes the memory
d_tn->vec_len = 0L;
d_tn->obj_init = 0;
d_tn->pod_init = 0;
*/
lnk_tn = &(__kmp_threadprivate_d_table.data[KMP_HASH(data)]);
d_tn->next = *lnk_tn;
*lnk_tn = d_tn;
}
}
void *__kmpc_threadprivate(ident_t *loc, kmp_int32 global_tid, void *data,
size_t size) {
void *ret;
struct private_common *tn;
KC_TRACE(10, ("__kmpc_threadprivate: T#%d called\n", global_tid));
#ifdef USE_CHECKS_COMMON
if (!__kmp_init_serial)
KMP_FATAL(RTLNotInitialized);
#endif /* USE_CHECKS_COMMON */
if (!__kmp_threads[global_tid]->th.th_root->r.r_active && !__kmp_foreign_tp) {
/* The parallel address will NEVER overlap with the data_address */
/* dkp: 3rd arg to kmp_threadprivate_insert_private_data() is the
* data_address; use data_address = data */
KC_TRACE(20, ("__kmpc_threadprivate: T#%d inserting private data\n",
global_tid));
kmp_threadprivate_insert_private_data(global_tid, data, data, size);
ret = data;
} else {
KC_TRACE(
50,
("__kmpc_threadprivate: T#%d try to find private data at address %p\n",
global_tid, data));
tn = __kmp_threadprivate_find_task_common(
__kmp_threads[global_tid]->th.th_pri_common, global_tid, data);
if (tn) {
KC_TRACE(20, ("__kmpc_threadprivate: T#%d found data\n", global_tid));
#ifdef USE_CHECKS_COMMON
if ((size_t)size > tn->cmn_size) {
KC_TRACE(10, ("THREADPRIVATE: %p (%" KMP_UINTPTR_SPEC
" ,%" KMP_UINTPTR_SPEC ")\n",
data, size, tn->cmn_size));
KMP_FATAL(TPCommonBlocksInconsist);
}
#endif /* USE_CHECKS_COMMON */
} else {
/* The parallel address will NEVER overlap with the data_address */
/* dkp: 3rd arg to kmp_threadprivate_insert() is the data_address; use
* data_address = data */
KC_TRACE(20, ("__kmpc_threadprivate: T#%d inserting data\n", global_tid));
tn = kmp_threadprivate_insert(global_tid, data, data, size);
}
ret = tn->par_addr;
}
KC_TRACE(10, ("__kmpc_threadprivate: T#%d exiting; return value = %p\n",
global_tid, ret));
return ret;
}
static kmp_cached_addr_t *__kmp_find_cache(void *data) {
kmp_cached_addr_t *ptr = __kmp_threadpriv_cache_list;
while (ptr && ptr->data != data)
ptr = ptr->next;
return ptr;
}
/*!
@ingroup THREADPRIVATE
@param loc source location information
@param global_tid global thread number
@param data pointer to data to privatize
@param size size of data to privatize
@param cache pointer to cache
@return pointer to private storage
Allocate private storage for threadprivate data.
*/
void *
__kmpc_threadprivate_cached(ident_t *loc,
kmp_int32 global_tid, // gtid.
void *data, // Pointer to original global variable.
size_t size, // Size of original global variable.
void ***cache) {
KC_TRACE(10, ("__kmpc_threadprivate_cached: T#%d called with cache: %p, "
"address: %p, size: %" KMP_SIZE_T_SPEC "\n",
global_tid, *cache, data, size));
if (TCR_PTR(*cache) == 0) {
__kmp_acquire_lock(&__kmp_global_lock, global_tid);
if (TCR_PTR(*cache) == 0) {
__kmp_acquire_bootstrap_lock(&__kmp_tp_cached_lock);
// Compiler often passes in NULL cache, even if it's already been created
void **my_cache;
kmp_cached_addr_t *tp_cache_addr;
// Look for an existing cache
tp_cache_addr = __kmp_find_cache(data);
if (!tp_cache_addr) { // Cache was never created; do it now
__kmp_tp_cached = 1;
KMP_ITT_IGNORE(my_cache = (void **)__kmp_allocate(
sizeof(void *) * __kmp_tp_capacity +
sizeof(kmp_cached_addr_t)););
// No need to zero the allocated memory; __kmp_allocate does that.
KC_TRACE(50, ("__kmpc_threadprivate_cached: T#%d allocated cache at "
"address %p\n",
global_tid, my_cache));
/* TODO: free all this memory in __kmp_common_destroy using
* __kmp_threadpriv_cache_list */
/* Add address of mycache to linked list for cleanup later */
tp_cache_addr = (kmp_cached_addr_t *)&my_cache[__kmp_tp_capacity];
tp_cache_addr->addr = my_cache;
tp_cache_addr->data = data;
tp_cache_addr->compiler_cache = cache;
tp_cache_addr->next = __kmp_threadpriv_cache_list;
__kmp_threadpriv_cache_list = tp_cache_addr;
} else { // A cache was already created; use it
my_cache = tp_cache_addr->addr;
tp_cache_addr->compiler_cache = cache;
}
KMP_MB();
TCW_PTR(*cache, my_cache);
__kmp_release_bootstrap_lock(&__kmp_tp_cached_lock);
KMP_MB();
}
__kmp_release_lock(&__kmp_global_lock, global_tid);
}
void *ret;
if ((ret = TCR_PTR((*cache)[global_tid])) == 0) {
ret = __kmpc_threadprivate(loc, global_tid, data, (size_t)size);
TCW_PTR((*cache)[global_tid], ret);
}
KC_TRACE(10,
("__kmpc_threadprivate_cached: T#%d exiting; return value = %p\n",
global_tid, ret));
return ret;
}
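`__kmpc_threadprivate_cached` uses a double-checked pattern: an unlocked read of `*cache`, then the global lock, then a re-check before allocating. A toy sketch of the same shape using `std::atomic` and `std::mutex` (hypothetical names; the real runtime uses its own `TCR_PTR`/`TCW_PTR` accessors and `KMP_MB()` fences instead):

```cpp
#include <atomic>
#include <cassert>
#include <mutex>
#include <vector>

std::mutex g_lock; // stand-in for __kmp_global_lock
std::atomic<std::vector<void *> *> g_cache{nullptr};

// Return the shared cache, creating it at most once. The unlocked first
// check makes the common (already-created) path lock-free; the second
// check under the lock closes the race between two creators.
std::vector<void *> *get_cache(std::size_t capacity) {
  std::vector<void *> *c = g_cache.load(std::memory_order_acquire);
  if (c == nullptr) { // first check, no lock
    std::lock_guard<std::mutex> guard(g_lock);
    c = g_cache.load(std::memory_order_relaxed);
    if (c == nullptr) { // second check, under the lock
      c = new std::vector<void *>(capacity, nullptr);
      g_cache.store(c, std::memory_order_release); // publish built cache
    }
  }
  return c;
}
```

The release store publishes only after the cache is fully constructed, matching the `KMP_MB(); TCW_PTR(*cache, my_cache);` ordering above.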
// This function should only be called when both __kmp_tp_cached_lock and
// kmp_forkjoin_lock are held.
void __kmp_threadprivate_resize_cache(int newCapacity) {
KC_TRACE(10, ("__kmp_threadprivate_resize_cache: called with size: %d\n",
newCapacity));
kmp_cached_addr_t *ptr = __kmp_threadpriv_cache_list;
while (ptr) {
if (ptr->data) { // this location has an active cache; resize it
void **my_cache;
KMP_ITT_IGNORE(my_cache =
(void **)__kmp_allocate(sizeof(void *) * newCapacity +
sizeof(kmp_cached_addr_t)););
// No need to zero the allocated memory; __kmp_allocate does that.
KC_TRACE(50, ("__kmp_threadprivate_resize_cache: allocated cache at %p\n",
my_cache));
// Now copy old cache into new cache
void **old_cache = ptr->addr;
for (int i = 0; i < __kmp_tp_capacity; ++i) {
my_cache[i] = old_cache[i];
}
// Add address of new my_cache to linked list for cleanup later
kmp_cached_addr_t *tp_cache_addr;
tp_cache_addr = (kmp_cached_addr_t *)&my_cache[newCapacity];
tp_cache_addr->addr = my_cache;
tp_cache_addr->data = ptr->data;
tp_cache_addr->compiler_cache = ptr->compiler_cache;
tp_cache_addr->next = __kmp_threadpriv_cache_list;
__kmp_threadpriv_cache_list = tp_cache_addr;
// Copy new cache to compiler's location: We can copy directly
// to (*compiler_cache) if compiler guarantees it will keep
// using the same location for the cache. This is not yet true
// for some compilers, in which case we have to check if
// compiler_cache is still pointing at old cache, and if so, we
// can point it at the new cache with an atomic compare&swap
// operation. (Old method will always work, but we should shift
// to new method (commented line below) when Intel and Clang
// compilers use new method.)
(void)KMP_COMPARE_AND_STORE_PTR(tp_cache_addr->compiler_cache, old_cache,
my_cache);
// TCW_PTR(*(tp_cache_addr->compiler_cache), my_cache);
// If the store doesn't happen here, the compiler's old behavior will
// inevitably call __kmpc_threadprivate_cache with a new location for the
// cache, and that function will store the resized cache there at that
// point.
// Nullify old cache's data pointer so we skip it next time
ptr->data = NULL;
}
ptr = ptr->next;
}
// After all caches are resized, update __kmp_tp_capacity to the new size
*(volatile int *)&__kmp_tp_capacity = newCapacity;
}
/*!
@ingroup THREADPRIVATE
@param loc source location information
@param data pointer to data being privatized
@param ctor pointer to constructor function for data
@param cctor pointer to copy constructor function for data
@param dtor pointer to destructor function for data
@param vector_length length of the vector (bytes or elements?)
Register vector constructors and destructors for thread private data.
*/
void __kmpc_threadprivate_register_vec(ident_t *loc, void *data,
kmpc_ctor_vec ctor, kmpc_cctor_vec cctor,
kmpc_dtor_vec dtor,
size_t vector_length) {
struct shared_common *d_tn, **lnk_tn;
KC_TRACE(10, ("__kmpc_threadprivate_register_vec: called\n"));
#ifdef USE_CHECKS_COMMON
/* copy constructor must be zero for current code gen (Nov 2002 - jph) */
KMP_ASSERT(cctor == 0);
#endif /* USE_CHECKS_COMMON */
d_tn = __kmp_find_shared_task_common(
&__kmp_threadprivate_d_table, -1,
data); /* Only the global data table exists. */
if (d_tn == 0) {
d_tn = (struct shared_common *)__kmp_allocate(sizeof(struct shared_common));
d_tn->gbl_addr = data;
d_tn->ct.ctorv = ctor;
d_tn->cct.cctorv = cctor;
d_tn->dt.dtorv = dtor;
d_tn->is_vec = TRUE;
d_tn->vec_len = (size_t)vector_length;
// d_tn->obj_init = 0; // AC: __kmp_allocate zeroes the memory
// d_tn->pod_init = 0;
lnk_tn = &(__kmp_threadprivate_d_table.data[KMP_HASH(data)]);
d_tn->next = *lnk_tn;
*lnk_tn = d_tn;
}
}
void __kmp_cleanup_threadprivate_caches() {
kmp_cached_addr_t *ptr = __kmp_threadpriv_cache_list;
while (ptr) {
void **cache = ptr->addr;
__kmp_threadpriv_cache_list = ptr->next;
if (*ptr->compiler_cache)
*ptr->compiler_cache = NULL;
ptr->compiler_cache = NULL;
ptr->data = NULL;
ptr->addr = NULL;
ptr->next = NULL;
// Threadprivate data pointed at by cache entries are destroyed at end of
// __kmp_launch_thread with __kmp_common_destroy_gtid.
__kmp_free(cache); // implicitly frees ptr too
ptr = __kmp_threadpriv_cache_list;
}
}

runtime/src/kmp_utility.cpp (new file, 410 lines)
@@ -0,0 +1,410 @@
/*
* kmp_utility.cpp -- Utility routines for the OpenMP support library.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp.h"
#include "kmp_i18n.h"
#include "kmp_str.h"
#include "kmp_wrapper_getpid.h"
#include <float.h>
static const char *unknown = "unknown";
#if KMP_ARCH_X86 || KMP_ARCH_X86_64
/* NOTE: If called before serial_initialize (i.e. from runtime_initialize), then
the debugging package has not been initialized yet, and only "0" will print
debugging output since the environment variables have not been read. */
#ifdef KMP_DEBUG
static int trace_level = 5;
#endif
/* LOG_ID_BITS = ( 1 + floor( log_2( max( log_per_phy - 1, 1 ))))
* APIC_ID = (PHY_ID << LOG_ID_BITS) | LOG_ID
* PHY_ID = APIC_ID >> LOG_ID_BITS
*/
int __kmp_get_physical_id(int log_per_phy, int apic_id) {
int index_lsb, index_msb, temp;
if (log_per_phy > 1) {
index_lsb = 0;
index_msb = 31;
temp = log_per_phy;
while ((temp & 1) == 0) {
temp >>= 1;
index_lsb++;
}
temp = log_per_phy;
while ((temp & 0x80000000) == 0) {
temp <<= 1;
index_msb--;
}
/* If >1 bits were set in log_per_phy, choose next higher power of 2 */
if (index_lsb != index_msb)
index_msb++;
return ((int)(apic_id >> index_msb));
}
return apic_id;
}
/*
* LOG_ID_BITS = ( 1 + floor( log_2( max( log_per_phy - 1, 1 ))))
* APIC_ID = (PHY_ID << LOG_ID_BITS) | LOG_ID
* LOG_ID = APIC_ID & (( 1 << LOG_ID_BITS ) - 1 )
*/
int __kmp_get_logical_id(int log_per_phy, int apic_id) {
unsigned current_bit;
int bits_seen;
if (log_per_phy <= 1)
return (0);
bits_seen = 0;
for (current_bit = 1; log_per_phy != 0; current_bit <<= 1) {
if (log_per_phy & current_bit) {
log_per_phy &= ~current_bit;
bits_seen++;
}
}
/* If exactly 1 bit was set in log_per_phy, choose next lower power of 2 */
if (bits_seen == 1) {
current_bit >>= 1;
}
return ((int)((current_bit - 1) & apic_id));
}
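The two routines above share one piece of bit arithmetic: the number of APIC-id bits reserved for the logical id is log2 of `log_per_phy` rounded up to the next power of two; the physical id is the APIC id shifted right by that amount, the logical id is the masked-off low bits. A standalone sketch of the same arithmetic (hypothetical helper names):

```cpp
#include <cassert>

// Number of APIC-id bits holding the logical (per-package) id:
// ceil(log2(log_per_phy)), i.e. log2 of the next power of two.
static int log_id_bits(int log_per_phy) {
  if (log_per_phy <= 1)
    return 0;
  int bits = 0;
  int pow2 = 1;
  while (pow2 < log_per_phy) {
    pow2 <<= 1;
    ++bits;
  }
  return bits;
}

int physical_id(int log_per_phy, int apic_id) {
  return apic_id >> log_id_bits(log_per_phy);
}

int logical_id(int log_per_phy, int apic_id) {
  return apic_id & ((1 << log_id_bits(log_per_phy)) - 1);
}
```

For example, with 2 logical processors per package, APIC id 0b101 decodes to physical id 0b10 and logical id 1, matching the lsb/msb scan in `__kmp_get_physical_id` and the bit-counting loop in `__kmp_get_logical_id`.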
static kmp_uint64 __kmp_parse_frequency( // R: Frequency in Hz.
    char const *frequency // I: Float number and unit: MHz, GHz, or THz.
) {
double value = 0.0;
char *unit = NULL;
kmp_uint64 result = 0; /* Zero is a better unknown value than all ones. */
if (frequency == NULL) {
return result;
}
value = strtod(frequency, &unit);
if (0 < value &&
value <= DBL_MAX) { // Good value (not overflow, underflow, etc).
if (strcmp(unit, "MHz") == 0) {
value = value * 1.0E+6;
} else if (strcmp(unit, "GHz") == 0) {
value = value * 1.0E+9;
} else if (strcmp(unit, "THz") == 0) {
value = value * 1.0E+12;
} else { // Wrong unit.
return result;
}
result = value;
}
return result;
} // func __kmp_parse_frequency
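The parser above leans on `strtod`'s two outputs: the parsed value and the end pointer, which lands exactly on the unit suffix. A self-contained sketch of the same approach (hypothetical name; 0 means "unknown", as in the original):

```cpp
#include <cassert>
#include <cfloat>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Parse "<number><unit>" (unit: MHz, GHz, or THz) into Hz.
// Returns 0 for null input, bad numbers, or unrecognized units.
std::uint64_t parse_frequency_hz(const char *s) {
  if (s == nullptr)
    return 0;
  char *unit = nullptr;
  double value = std::strtod(s, &unit); // `unit` now points at the suffix
  if (!(value > 0.0 && value <= DBL_MAX))
    return 0; // rejects 0, negatives, overflow (HUGE_VAL), parse failure
  if (std::strcmp(unit, "MHz") == 0)
    return static_cast<std::uint64_t>(value * 1.0e6);
  if (std::strcmp(unit, "GHz") == 0)
    return static_cast<std::uint64_t>(value * 1.0e9);
  if (std::strcmp(unit, "THz") == 0)
    return static_cast<std::uint64_t>(value * 1.0e12);
  return 0; // wrong unit
}
```

In the runtime this is fed the tail of the CPU brand string (e.g. the `"3.20GHz"` token after the last space), which is why `strtod`'s leading-whitespace skipping is convenient there.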
void __kmp_query_cpuid(kmp_cpuinfo_t *p) {
struct kmp_cpuid buf;
int max_arg;
int log_per_phy;
#ifdef KMP_DEBUG
int cflush_size;
#endif
p->initialized = 1;
p->sse2 = 1; // Assume SSE2 by default.
__kmp_x86_cpuid(0, 0, &buf);
KA_TRACE(trace_level,
("INFO: CPUID %d: EAX=0x%08X EBX=0x%08X ECX=0x%08X EDX=0x%08X\n", 0,
buf.eax, buf.ebx, buf.ecx, buf.edx));
max_arg = buf.eax;
p->apic_id = -1;
if (max_arg >= 1) {
int i;
kmp_uint32 t, data[4];
__kmp_x86_cpuid(1, 0, &buf);
KA_TRACE(trace_level,
("INFO: CPUID %d: EAX=0x%08X EBX=0x%08X ECX=0x%08X EDX=0x%08X\n",
1, buf.eax, buf.ebx, buf.ecx, buf.edx));
{
#define get_value(reg, lo, mask) (((reg) >> (lo)) & (mask))
p->signature = buf.eax;
p->family = get_value(buf.eax, 20, 0xff) + get_value(buf.eax, 8, 0x0f);
p->model =
(get_value(buf.eax, 16, 0x0f) << 4) + get_value(buf.eax, 4, 0x0f);
p->stepping = get_value(buf.eax, 0, 0x0f);
#undef get_value
KA_TRACE(trace_level, (" family = %d, model = %d, stepping = %d\n",
p->family, p->model, p->stepping));
}
for (t = buf.ebx, i = 0; i < 4; t >>= 8, ++i) {
data[i] = (t & 0xff);
}
p->sse2 = (buf.edx >> 26) & 1;
#ifdef KMP_DEBUG
if ((buf.edx >> 4) & 1) {
/* TSC - Timestamp Counter Available */
KA_TRACE(trace_level, (" TSC"));
}
if ((buf.edx >> 8) & 1) {
/* CX8 - CMPXCHG8B Instruction Available */
KA_TRACE(trace_level, (" CX8"));
}
if ((buf.edx >> 9) & 1) {
      /* APIC - Local APIC Present (multi-processor operation support) */
KA_TRACE(trace_level, (" APIC"));
}
if ((buf.edx >> 15) & 1) {
/* CMOV - Conditional MOVe Instruction Available */
KA_TRACE(trace_level, (" CMOV"));
}
if ((buf.edx >> 18) & 1) {
/* PSN - Processor Serial Number Available */
KA_TRACE(trace_level, (" PSN"));
}
if ((buf.edx >> 19) & 1) {
      /* CLFLUSH - Cache Flush Instruction Available */
cflush_size =
data[1] * 8; /* Bits 15-08: CLFLUSH line size = 8 (64 bytes) */
KA_TRACE(trace_level, (" CLFLUSH(%db)", cflush_size));
}
if ((buf.edx >> 21) & 1) {
/* DTES - Debug Trace & EMON Store */
KA_TRACE(trace_level, (" DTES"));
}
if ((buf.edx >> 22) & 1) {
/* ACPI - ACPI Support Available */
KA_TRACE(trace_level, (" ACPI"));
}
if ((buf.edx >> 23) & 1) {
/* MMX - Multimedia Extensions */
KA_TRACE(trace_level, (" MMX"));
}
if ((buf.edx >> 25) & 1) {
/* SSE - SSE Instructions */
KA_TRACE(trace_level, (" SSE"));
}
if ((buf.edx >> 26) & 1) {
/* SSE2 - SSE2 Instructions */
KA_TRACE(trace_level, (" SSE2"));
}
if ((buf.edx >> 27) & 1) {
/* SLFSNP - Self-Snooping Cache */
KA_TRACE(trace_level, (" SLFSNP"));
}
#endif /* KMP_DEBUG */
if ((buf.edx >> 28) & 1) {
/* Bits 23-16: Logical Processors per Physical Processor (1 for P4) */
log_per_phy = data[2];
p->apic_id = data[3]; /* Bits 31-24: Processor Initial APIC ID (X) */
KA_TRACE(trace_level, (" HT(%d TPUs)", log_per_phy));
if (log_per_phy > 1) {
/* default to 1k FOR JT-enabled processors (4k on OS X*) */
#if KMP_OS_DARWIN
p->cpu_stackoffset = 4 * 1024;
#else
p->cpu_stackoffset = 1 * 1024;
#endif
}
p->physical_id = __kmp_get_physical_id(log_per_phy, p->apic_id);
p->logical_id = __kmp_get_logical_id(log_per_phy, p->apic_id);
}
#ifdef KMP_DEBUG
if ((buf.edx >> 29) & 1) {
/* ATHROTL - Automatic Throttle Control */
KA_TRACE(trace_level, (" ATHROTL"));
}
KA_TRACE(trace_level, (" ]\n"));
for (i = 2; i <= max_arg; ++i) {
__kmp_x86_cpuid(i, 0, &buf);
KA_TRACE(trace_level,
("INFO: CPUID %d: EAX=0x%08X EBX=0x%08X ECX=0x%08X EDX=0x%08X\n",
i, buf.eax, buf.ebx, buf.ecx, buf.edx));
}
#endif
#if KMP_USE_ADAPTIVE_LOCKS
p->rtm = 0;
if (max_arg > 7) {
/* RTM bit CPUID.07:EBX, bit 11 */
__kmp_x86_cpuid(7, 0, &buf);
p->rtm = (buf.ebx >> 11) & 1;
KA_TRACE(trace_level, (" RTM"));
}
#endif
}
{ // Parse CPU brand string for frequency, saving the string for later.
int i;
kmp_cpuid_t *base = (kmp_cpuid_t *)&p->name[0];
// Get CPU brand string.
for (i = 0; i < 3; ++i) {
__kmp_x86_cpuid(0x80000002 + i, 0, base + i);
}
p->name[sizeof(p->name) - 1] = 0; // Just in case. ;-)
KA_TRACE(trace_level, ("cpu brand string: \"%s\"\n", &p->name[0]));
// Parse frequency.
p->frequency = __kmp_parse_frequency(strrchr(&p->name[0], ' '));
KA_TRACE(trace_level,
("cpu frequency from brand string: %" KMP_UINT64_SPEC "\n",
p->frequency));
}
}
#endif /* KMP_ARCH_X86 || KMP_ARCH_X86_64 */
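The `get_value` extraction in `__kmp_query_cpuid` combines the extended and base family/model fields of CPUID leaf 1 EAX. A standalone sketch of that decode, with the same bit layout and the same unconditional add/concatenate the runtime uses (Intel's manuals scope the extended fields to certain base families, but this mirrors the code above):

```cpp
#include <cassert>
#include <cstdint>

static unsigned field(std::uint32_t reg, unsigned lo, std::uint32_t mask) {
  return (reg >> lo) & mask;
}

// CPUID.1:EAX layout: stepping[3:0], model[7:4], family[11:8],
// ext_model[19:16], ext_family[27:20].
struct CpuSignature {
  unsigned family, model, stepping;
};

CpuSignature decode_signature(std::uint32_t eax) {
  CpuSignature s;
  s.family = field(eax, 20, 0xff) + field(eax, 8, 0x0f);
  s.model = (field(eax, 16, 0x0f) << 4) + field(eax, 4, 0x0f);
  s.stepping = field(eax, 0, 0x0f);
  return s;
}
```

For example, the well-known Ivy Bridge signature `0x000306A9` decodes to family 6, model 0x3A (58), stepping 9.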
void __kmp_expand_host_name(char *buffer, size_t size) {
KMP_DEBUG_ASSERT(size >= sizeof(unknown));
#if KMP_OS_WINDOWS
{
DWORD s = size;
if (!GetComputerNameA(buffer, &s))
KMP_STRCPY_S(buffer, size, unknown);
}
#else
buffer[size - 2] = 0;
if (gethostname(buffer, size) || buffer[size - 2] != 0)
KMP_STRCPY_S(buffer, size, unknown);
#endif
}
/* Expand the meta characters in the filename:
* Currently defined characters are:
* %H the hostname
* %P the number of threads used.
* %I the unique identifier for this run.
*/
void __kmp_expand_file_name(char *result, size_t rlen, char *pattern) {
char *pos = result, *end = result + rlen - 1;
char buffer[256];
int default_cpu_width = 1;
int snp_result;
KMP_DEBUG_ASSERT(rlen > 0);
*end = 0;
{
int i;
for (i = __kmp_xproc; i >= 10; i /= 10, ++default_cpu_width)
;
}
if (pattern != NULL) {
while (*pattern != '\0' && pos < end) {
if (*pattern != '%') {
*pos++ = *pattern++;
} else {
char *old_pattern = pattern;
int width = 1;
int cpu_width = default_cpu_width;
++pattern;
if (*pattern >= '0' && *pattern <= '9') {
width = 0;
do {
width = (width * 10) + *pattern++ - '0';
} while (*pattern >= '0' && *pattern <= '9');
if (width < 0 || width > 1024)
width = 1;
cpu_width = width;
}
switch (*pattern) {
case 'H':
case 'h': {
__kmp_expand_host_name(buffer, sizeof(buffer));
KMP_STRNCPY(pos, buffer, end - pos + 1);
if (*end == 0) {
while (*pos)
++pos;
++pattern;
} else
pos = end;
} break;
case 'P':
case 'p': {
snp_result = KMP_SNPRINTF(pos, end - pos + 1, "%0*d", cpu_width,
__kmp_dflt_team_nth);
if (snp_result >= 0 && snp_result <= end - pos) {
while (*pos)
++pos;
++pattern;
} else
pos = end;
} break;
case 'I':
case 'i': {
pid_t id = getpid();
#if KMP_ARCH_X86_64 && defined(__MINGW32__)
snp_result = KMP_SNPRINTF(pos, end - pos + 1, "%0*lld", width, id);
#else
snp_result = KMP_SNPRINTF(pos, end - pos + 1, "%0*d", width, id);
#endif
if (snp_result >= 0 && snp_result <= end - pos) {
while (*pos)
++pos;
++pattern;
} else
pos = end;
break;
}
case '%': {
*pos++ = '%';
++pattern;
break;
}
default: {
*pos++ = '%';
pattern = old_pattern + 1;
break;
}
}
}
}
/* TODO: How do we get rid of this? */
if (*pattern != '\0')
KMP_FATAL(FileNameTooLong);
}
*pos = '\0';
}
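The expander above walks the pattern, treating `%` as an escape with an optional width prefix, and falls back to emitting a literal `%` and re-scanning on an unknown escape. A reduced sketch handling just `%<width>i` and `%%` (hypothetical, `std::string`-based instead of the fixed output buffer, so the truncation logic is omitted):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Expand "%%" to '%' and "%<width>i" to `id` zero-padded to <width>;
// everything else, including unknown escapes, is copied through.
std::string expand_pattern(const std::string &pattern, int id) {
  std::string out;
  for (std::size_t i = 0; i < pattern.size(); ++i) {
    if (pattern[i] != '%') {
      out += pattern[i];
      continue;
    }
    std::size_t j = i + 1;
    int width = 1;
    if (j < pattern.size() && pattern[j] >= '0' && pattern[j] <= '9') {
      width = 0;
      while (j < pattern.size() && pattern[j] >= '0' && pattern[j] <= '9')
        width = width * 10 + (pattern[j++] - '0');
    }
    if (j < pattern.size() && pattern[j] == 'i') {
      char buf[32];
      std::snprintf(buf, sizeof buf, "%0*d", width, id);
      out += buf;
      i = j;
    } else if (j < pattern.size() && pattern[j] == '%') {
      out += '%';
      i = j;
    } else {
      out += '%'; // unknown escape: keep '%', re-scan from the next char
    }
  }
  return out;
}
```

The unknown-escape branch matches the original's `pattern = old_pattern + 1` reset: any width digits it consumed are re-scanned as literal text.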

runtime/src/kmp_version.cpp (new file, 208 lines)
@@ -0,0 +1,208 @@
/*
* kmp_version.cpp
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp.h"
#include "kmp_io.h"
#include "kmp_version.h"
// Replace with snapshot date YYYYMMDD for promotion build.
#define KMP_VERSION_BUILD 20140926
// Helper macros to convert value of macro to string literal.
#define _stringer(x) #x
#define stringer(x) _stringer(x)
// Detect compiler.
#if KMP_COMPILER_ICC
#if __INTEL_COMPILER == 1010
#define KMP_COMPILER "Intel(R) C++ Compiler 10.1"
#elif __INTEL_COMPILER == 1100
#define KMP_COMPILER "Intel(R) C++ Compiler 11.0"
#elif __INTEL_COMPILER == 1110
#define KMP_COMPILER "Intel(R) C++ Compiler 11.1"
#elif __INTEL_COMPILER == 1200
#define KMP_COMPILER "Intel(R) C++ Compiler 12.0"
#elif __INTEL_COMPILER == 1210
#define KMP_COMPILER "Intel(R) C++ Compiler 12.1"
#elif __INTEL_COMPILER == 1300
#define KMP_COMPILER "Intel(R) C++ Compiler 13.0"
#elif __INTEL_COMPILER == 1310
#define KMP_COMPILER "Intel(R) C++ Compiler 13.1"
#elif __INTEL_COMPILER == 1400
#define KMP_COMPILER "Intel(R) C++ Compiler 14.0"
#elif __INTEL_COMPILER == 1410
#define KMP_COMPILER "Intel(R) C++ Compiler 14.1"
#elif __INTEL_COMPILER == 1500
#define KMP_COMPILER "Intel(R) C++ Compiler 15.0"
#elif __INTEL_COMPILER == 1600
#define KMP_COMPILER "Intel(R) C++ Compiler 16.0"
#elif __INTEL_COMPILER == 1700
#define KMP_COMPILER "Intel(R) C++ Compiler 17.0"
#elif __INTEL_COMPILER == 1800
#define KMP_COMPILER "Intel(R) C++ Compiler 18.0"
#elif __INTEL_COMPILER == 9998
#define KMP_COMPILER "Intel(R) C++ Compiler mainline"
#elif __INTEL_COMPILER == 9999
#define KMP_COMPILER "Intel(R) C++ Compiler mainline"
#endif
#elif KMP_COMPILER_CLANG
#define KMP_COMPILER \
"Clang " stringer(__clang_major__) "." stringer(__clang_minor__)
#elif KMP_COMPILER_GCC
#define KMP_COMPILER "GCC " stringer(__GNUC__) "." stringer(__GNUC_MINOR__)
#elif KMP_COMPILER_MSVC
#define KMP_COMPILER "MSVC " stringer(_MSC_FULL_VER)
#endif
#ifndef KMP_COMPILER
#warning "Unknown compiler"
#define KMP_COMPILER "unknown compiler"
#endif
// Detect library type (perf, stub).
#ifdef KMP_STUB
#define KMP_LIB_TYPE "stub"
#else
#define KMP_LIB_TYPE "performance"
#endif // KMP_LIB_TYPE
// Detect link type (static, dynamic).
#if KMP_DYNAMIC_LIB
#define KMP_LINK_TYPE "dynamic"
#else
#define KMP_LINK_TYPE "static"
#endif // KMP_LINK_TYPE
// Finally, define strings.
#define KMP_LIBRARY KMP_LIB_TYPE " library (" KMP_LINK_TYPE ")"
#define KMP_COPYRIGHT ""
int const __kmp_version_major = KMP_VERSION_MAJOR;
int const __kmp_version_minor = KMP_VERSION_MINOR;
int const __kmp_version_build = KMP_VERSION_BUILD;
int const __kmp_openmp_version =
#if OMP_50_ENABLED
201611;
#elif OMP_45_ENABLED
201511;
#elif OMP_40_ENABLED
201307;
#else
201107;
#endif
/* Do NOT change the format of this string! Intel(R) Thread Profiler checks for
   a specific format; some changes in the recognition routine there need to be
made before this is changed. */
char const __kmp_copyright[] = KMP_VERSION_PREFIX KMP_LIBRARY
" ver. " stringer(KMP_VERSION_MAJOR) "." stringer(
KMP_VERSION_MINOR) "." stringer(KMP_VERSION_BUILD) " " KMP_COPYRIGHT;
char const __kmp_version_copyright[] = KMP_VERSION_PREFIX KMP_COPYRIGHT;
char const __kmp_version_lib_ver[] =
KMP_VERSION_PREFIX "version: " stringer(KMP_VERSION_MAJOR) "." stringer(
KMP_VERSION_MINOR) "." stringer(KMP_VERSION_BUILD);
char const __kmp_version_lib_type[] =
KMP_VERSION_PREFIX "library type: " KMP_LIB_TYPE;
char const __kmp_version_link_type[] =
KMP_VERSION_PREFIX "link type: " KMP_LINK_TYPE;
char const __kmp_version_build_time[] = KMP_VERSION_PREFIX "build time: "
"no_timestamp";
#if KMP_MIC2
char const __kmp_version_target_env[] =
KMP_VERSION_PREFIX "target environment: MIC2";
#endif
char const __kmp_version_build_compiler[] =
KMP_VERSION_PREFIX "build compiler: " KMP_COMPILER;
// Called at serial initialization time.
static int __kmp_version_1_printed = FALSE;
void __kmp_print_version_1(void) {
if (__kmp_version_1_printed) {
return;
}
__kmp_version_1_printed = TRUE;
#ifndef KMP_STUB
kmp_str_buf_t buffer;
__kmp_str_buf_init(&buffer);
// Print version strings skipping initial magic.
__kmp_str_buf_print(&buffer, "%s\n",
&__kmp_version_lib_ver[KMP_VERSION_MAGIC_LEN]);
__kmp_str_buf_print(&buffer, "%s\n",
&__kmp_version_lib_type[KMP_VERSION_MAGIC_LEN]);
__kmp_str_buf_print(&buffer, "%s\n",
&__kmp_version_link_type[KMP_VERSION_MAGIC_LEN]);
__kmp_str_buf_print(&buffer, "%s\n",
&__kmp_version_build_time[KMP_VERSION_MAGIC_LEN]);
#if KMP_MIC
__kmp_str_buf_print(&buffer, "%s\n",
&__kmp_version_target_env[KMP_VERSION_MAGIC_LEN]);
#endif
__kmp_str_buf_print(&buffer, "%s\n",
&__kmp_version_build_compiler[KMP_VERSION_MAGIC_LEN]);
#if defined(KMP_GOMP_COMPAT)
__kmp_str_buf_print(&buffer, "%s\n",
&__kmp_version_alt_comp[KMP_VERSION_MAGIC_LEN]);
#endif /* defined(KMP_GOMP_COMPAT) */
__kmp_str_buf_print(&buffer, "%s\n",
&__kmp_version_omp_api[KMP_VERSION_MAGIC_LEN]);
__kmp_str_buf_print(&buffer, "%sdynamic error checking: %s\n",
KMP_VERSION_PREF_STR,
(__kmp_env_consistency_check ? "yes" : "no"));
#ifdef KMP_DEBUG
for (int i = bs_plain_barrier; i < bs_last_barrier; ++i) {
__kmp_str_buf_print(
&buffer, "%s%s barrier branch bits: gather=%u, release=%u\n",
KMP_VERSION_PREF_STR, __kmp_barrier_type_name[i],
__kmp_barrier_gather_branch_bits[i],
__kmp_barrier_release_branch_bits[i]); // __kmp_str_buf_print
}
for (int i = bs_plain_barrier; i < bs_last_barrier; ++i) {
__kmp_str_buf_print(
&buffer, "%s%s barrier pattern: gather=%s, release=%s\n",
KMP_VERSION_PREF_STR, __kmp_barrier_type_name[i],
__kmp_barrier_pattern_name[__kmp_barrier_gather_pattern[i]],
__kmp_barrier_pattern_name
[__kmp_barrier_release_pattern[i]]); // __kmp_str_buf_print
}
__kmp_str_buf_print(&buffer, "%s\n",
&__kmp_version_lock[KMP_VERSION_MAGIC_LEN]);
#endif
__kmp_str_buf_print(
&buffer, "%sthread affinity support: %s\n", KMP_VERSION_PREF_STR,
#if KMP_AFFINITY_SUPPORTED
(KMP_AFFINITY_CAPABLE()
? (__kmp_affinity_type == affinity_none ? "not used" : "yes")
: "no")
#else
"no"
#endif
);
__kmp_printf("%s", buffer.str);
__kmp_str_buf_free(&buffer);
K_DIAG(1, ("KMP_VERSION is true\n"));
#endif // KMP_STUB
} // __kmp_print_version_1
// Called at parallel initialization time.
static int __kmp_version_2_printed = FALSE;
void __kmp_print_version_2(void) {
if (__kmp_version_2_printed) {
return;
}
__kmp_version_2_printed = TRUE;
} // __kmp_print_version_2
// end of file //

runtime/src/kmp_version.h (new file, 67 lines)
@@ -0,0 +1,67 @@
/*
* kmp_version.h -- version number for this release
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_VERSION_H
#define KMP_VERSION_H
#ifdef __cplusplus
extern "C" {
#endif // __cplusplus
#ifndef KMP_VERSION_MAJOR
#error KMP_VERSION_MAJOR macro is not defined.
#endif
#define KMP_VERSION_MINOR 0
/* Using "magic" prefix in all the version strings is rather convenient to get
static version info from binaries by using standard utilities "strings" and
"grep", e. g.:
$ strings libomp.so | grep "@(#)"
gives clean list of all version strings in the library. Leading zero helps
   to keep the version string separate from printable characters which may occur
just before version string. */
#define KMP_VERSION_MAGIC_STR "\x00@(#) "
#define KMP_VERSION_MAGIC_LEN 6 // Length of KMP_VERSION_MAGIC_STR.
#define KMP_VERSION_PREF_STR "Intel(R) OMP "
#define KMP_VERSION_PREFIX KMP_VERSION_MAGIC_STR KMP_VERSION_PREF_STR
/* declare all the version string constants for KMP_VERSION env. variable */
extern int const __kmp_version_major;
extern int const __kmp_version_minor;
extern int const __kmp_version_build;
extern int const __kmp_openmp_version;
extern char const
__kmp_copyright[]; // Old variable, kept for compatibility with ITC and ITP.
extern char const __kmp_version_copyright[];
extern char const __kmp_version_lib_ver[];
extern char const __kmp_version_lib_type[];
extern char const __kmp_version_link_type[];
extern char const __kmp_version_build_time[];
extern char const __kmp_version_target_env[];
extern char const __kmp_version_build_compiler[];
extern char const __kmp_version_alt_comp[];
extern char const __kmp_version_omp_api[];
// ??? extern char const __kmp_version_debug[];
extern char const __kmp_version_lock[];
extern char const __kmp_version_nested_stats_reporting[];
extern char const __kmp_version_ftnstdcall[];
extern char const __kmp_version_ftncdecl[];
extern char const __kmp_version_ftnextra[];
void __kmp_print_version_1(void);
void __kmp_print_version_2(void);
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
#endif /* KMP_VERSION_H */


@ -0,0 +1,26 @@
/*
* kmp_wait_release.cpp -- Wait/Release implementation
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "kmp_wait_release.h"
void __kmp_wait_64(kmp_info_t *this_thr, kmp_flag_64 *flag,
int final_spin USE_ITT_BUILD_ARG(void *itt_sync_obj)) {
if (final_spin)
__kmp_wait_template<kmp_flag_64, TRUE>(
this_thr, flag USE_ITT_BUILD_ARG(itt_sync_obj));
else
__kmp_wait_template<kmp_flag_64, FALSE>(
this_thr, flag USE_ITT_BUILD_ARG(itt_sync_obj));
}
void __kmp_release_64(kmp_flag_64 *flag) { __kmp_release_template(flag); }


@ -0,0 +1,905 @@
/*
* kmp_wait_release.h -- Wait/Release implementation
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_WAIT_RELEASE_H
#define KMP_WAIT_RELEASE_H
#include "kmp.h"
#include "kmp_itt.h"
#include "kmp_stats.h"
#if OMPT_SUPPORT
#include "ompt-specific.h"
#endif
/*!
@defgroup WAIT_RELEASE Wait/Release operations
The definitions and functions here implement the lowest-level thread
synchronization operations: suspending a thread and waking it. They are used to
build higher-level operations such as barriers and fork/join.
*/
/*!
@ingroup WAIT_RELEASE
@{
*/
/*!
* The flag_type describes the storage used for the flag.
*/
enum flag_type {
flag32, /**< 32 bit flags */
flag64, /**< 64 bit flags */
flag_oncore /**< special 64-bit flag for on-core barrier (hierarchical) */
};
/*!
* Base class for wait/release volatile flag
*/
template <typename P> class kmp_flag_native {
volatile P *loc;
flag_type t;
public:
typedef P flag_t;
kmp_flag_native(volatile P *p, flag_type ft) : loc(p), t(ft) {}
volatile P *get() { return loc; }
void *get_void_p() { return RCAST(void *, CCAST(P *, loc)); }
void set(volatile P *new_loc) { loc = new_loc; }
flag_type get_type() { return t; }
P load() { return *loc; }
void store(P val) { *loc = val; }
};
/*!
* Base class for wait/release atomic flag
*/
template <typename P> class kmp_flag {
std::atomic<P>
*loc; /**< Pointer to the flag storage that is modified by another thread
*/
flag_type t; /**< "Type" of the flag in loc */
public:
typedef P flag_t;
kmp_flag(std::atomic<P> *p, flag_type ft) : loc(p), t(ft) {}
/*!
* @result the pointer to the actual flag
*/
std::atomic<P> *get() { return loc; }
/*!
* @result void* pointer to the actual flag
*/
void *get_void_p() { return RCAST(void *, loc); }
/*!
* @param new_loc in set loc to point at new_loc
*/
void set(std::atomic<P> *new_loc) { loc = new_loc; }
/*!
* @result the flag_type
*/
flag_type get_type() { return t; }
/*!
* @result flag value
*/
P load() { return loc->load(std::memory_order_acquire); }
/*!
* @param val the new flag value to be stored
*/
void store(P val) { loc->store(val, std::memory_order_release); }
// Derived classes must provide the following:
/*
kmp_info_t * get_waiter(kmp_uint32 i);
kmp_uint32 get_num_waiters();
bool done_check();
bool done_check_val(P old_loc);
bool notdone_check();
P internal_release();
void suspend(int th_gtid);
void resume(int th_gtid);
P set_sleeping();
P unset_sleeping();
bool is_sleeping();
bool is_any_sleeping();
bool is_sleeping_val(P old_loc);
int execute_tasks(kmp_info_t *this_thr, kmp_int32 gtid, int final_spin,
int *thread_finished
USE_ITT_BUILD_ARG(void * itt_sync_obj), kmp_int32
is_constrained);
*/
};
#if OMPT_SUPPORT
OMPT_NOINLINE
static void __ompt_implicit_task_end(kmp_info_t *this_thr,
ompt_state_t ompt_state,
ompt_data_t *tId) {
int ds_tid = this_thr->th.th_info.ds.ds_tid;
if (ompt_state == ompt_state_wait_barrier_implicit) {
this_thr->th.ompt_thread_info.state = ompt_state_overhead;
#if OMPT_OPTIONAL
void *codeptr = NULL;
if (ompt_enabled.ompt_callback_sync_region_wait) {
ompt_callbacks.ompt_callback(ompt_callback_sync_region_wait)(
ompt_sync_region_barrier, ompt_scope_end, NULL, tId, codeptr);
}
if (ompt_enabled.ompt_callback_sync_region) {
ompt_callbacks.ompt_callback(ompt_callback_sync_region)(
ompt_sync_region_barrier, ompt_scope_end, NULL, tId, codeptr);
}
#endif
if (!KMP_MASTER_TID(ds_tid)) {
if (ompt_enabled.ompt_callback_implicit_task) {
ompt_callbacks.ompt_callback(ompt_callback_implicit_task)(
ompt_scope_end, NULL, tId, 0, ds_tid, ompt_task_implicit);
}
// return to idle state
this_thr->th.ompt_thread_info.state = ompt_state_idle;
} else {
this_thr->th.ompt_thread_info.state = ompt_state_overhead;
}
}
}
#endif
/* Spin wait loop that first does pause, then yield, then sleep. A thread that
calls __kmp_wait_* must make certain that another thread calls __kmp_release
to wake it back up to prevent deadlocks!
NOTE: We may not belong to a team at this point. */
template <class C, int final_spin>
static inline void
__kmp_wait_template(kmp_info_t *this_thr,
C *flag USE_ITT_BUILD_ARG(void *itt_sync_obj)) {
#if USE_ITT_BUILD && USE_ITT_NOTIFY
volatile void *spin = flag->get();
#endif
kmp_uint32 spins;
int th_gtid;
int tasks_completed = FALSE;
int oversubscribed;
#if !KMP_USE_MONITOR
kmp_uint64 poll_count;
kmp_uint64 hibernate_goal;
#else
kmp_uint32 hibernate;
#endif
KMP_FSYNC_SPIN_INIT(spin, NULL);
if (flag->done_check()) {
KMP_FSYNC_SPIN_ACQUIRED(CCAST(void *, spin));
return;
}
th_gtid = this_thr->th.th_info.ds.ds_gtid;
#if KMP_OS_UNIX
if (final_spin)
KMP_ATOMIC_ST_REL(&this_thr->th.th_blocking, true);
#endif
KA_TRACE(20,
("__kmp_wait_sleep: T#%d waiting for flag(%p)\n", th_gtid, flag));
#if KMP_STATS_ENABLED
stats_state_e thread_state = KMP_GET_THREAD_STATE();
#endif
/* OMPT Behavior:
THIS function is called from
__kmp_barrier (2 times) (implicit or explicit barrier in parallel regions)
these have join / fork behavior
In these cases, we don't change the state or trigger events in THIS
function.
Events are triggered in the calling code (__kmp_barrier):
state := ompt_state_overhead
barrier-begin
barrier-wait-begin
state := ompt_state_wait_barrier
call join-barrier-implementation (finally arrive here)
{}
call fork-barrier-implementation (finally arrive here)
{}
state := ompt_state_overhead
barrier-wait-end
barrier-end
state := ompt_state_work_parallel
__kmp_fork_barrier (after thread creation, before executing implicit task)
call fork-barrier-implementation (finally arrive here)
{} // worker arrive here with state = ompt_state_idle
__kmp_join_barrier (implicit barrier at end of parallel region)
state := ompt_state_barrier_implicit
barrier-begin
barrier-wait-begin
call join-barrier-implementation (finally arrive here
final_spin=FALSE)
{
}
__kmp_fork_barrier (implicit barrier at end of parallel region)
call fork-barrier-implementation (finally arrive here final_spin=TRUE)
Worker after task-team is finished:
barrier-wait-end
barrier-end
implicit-task-end
idle-begin
state := ompt_state_idle
Before leaving, if state = ompt_state_idle
idle-end
state := ompt_state_overhead
*/
#if OMPT_SUPPORT
ompt_state_t ompt_entry_state;
ompt_data_t *tId;
if (ompt_enabled.enabled) {
ompt_entry_state = this_thr->th.ompt_thread_info.state;
if (!final_spin || ompt_entry_state != ompt_state_wait_barrier_implicit ||
KMP_MASTER_TID(this_thr->th.th_info.ds.ds_tid)) {
ompt_lw_taskteam_t *team =
this_thr->th.th_team->t.ompt_serialized_team_info;
if (team) {
tId = &(team->ompt_task_info.task_data);
} else {
tId = OMPT_CUR_TASK_DATA(this_thr);
}
} else {
tId = &(this_thr->th.ompt_thread_info.task_data);
}
if (final_spin && (__kmp_tasking_mode == tskm_immediate_exec ||
this_thr->th.th_task_team == NULL)) {
// implicit task is done. Either no taskqueue, or task-team finished
__ompt_implicit_task_end(this_thr, ompt_entry_state, tId);
}
}
#endif
// Setup for waiting
KMP_INIT_YIELD(spins);
if (__kmp_dflt_blocktime != KMP_MAX_BLOCKTIME) {
#if KMP_USE_MONITOR
// The worker threads cannot rely on the team struct existing at this point.
// Use the bt values cached in the thread struct instead.
#ifdef KMP_ADJUST_BLOCKTIME
if (__kmp_zero_bt && !this_thr->th.th_team_bt_set)
// Force immediate suspend if not set by user and more threads than
// available procs
hibernate = 0;
else
hibernate = this_thr->th.th_team_bt_intervals;
#else
hibernate = this_thr->th.th_team_bt_intervals;
#endif /* KMP_ADJUST_BLOCKTIME */
/* If the blocktime is nonzero, we want to make sure that we spin wait for
the entirety of the specified #intervals, plus up to one interval more.
This increment makes certain that this thread doesn't go to sleep too
soon. */
if (hibernate != 0)
hibernate++;
// Add in the current time value.
hibernate += TCR_4(__kmp_global.g.g_time.dt.t_value);
KF_TRACE(20, ("__kmp_wait_sleep: T#%d now=%d, hibernate=%d, intervals=%d\n",
th_gtid, __kmp_global.g.g_time.dt.t_value, hibernate,
hibernate - __kmp_global.g.g_time.dt.t_value));
#else
hibernate_goal = KMP_NOW() + this_thr->th.th_team_bt_intervals;
poll_count = 0;
#endif // KMP_USE_MONITOR
}
oversubscribed = (TCR_4(__kmp_nth) > __kmp_avail_proc);
KMP_MB();
// Main wait spin loop
while (flag->notdone_check()) {
int in_pool;
kmp_task_team_t *task_team = NULL;
if (__kmp_tasking_mode != tskm_immediate_exec) {
task_team = this_thr->th.th_task_team;
/* If the thread's task team pointer is NULL, it means one of 3 things:
1) A newly-created thread is first being released by
__kmp_fork_barrier(), and its task team has not been set up yet.
2) All tasks have been executed to completion.
3) Tasking is off for this region. This could be because we are in a
serialized region (perhaps the outer one), or else tasking was manually
disabled (KMP_TASKING=0). */
if (task_team != NULL) {
if (TCR_SYNC_4(task_team->tt.tt_active)) {
if (KMP_TASKING_ENABLED(task_team))
flag->execute_tasks(
this_thr, th_gtid, final_spin,
&tasks_completed USE_ITT_BUILD_ARG(itt_sync_obj), 0);
else
this_thr->th.th_reap_state = KMP_SAFE_TO_REAP;
} else {
KMP_DEBUG_ASSERT(!KMP_MASTER_TID(this_thr->th.th_info.ds.ds_tid));
#if OMPT_SUPPORT
// task-team is done now; other cases should be caught above
if (final_spin && ompt_enabled.enabled)
__ompt_implicit_task_end(this_thr, ompt_entry_state, tId);
#endif
this_thr->th.th_task_team = NULL;
this_thr->th.th_reap_state = KMP_SAFE_TO_REAP;
}
} else {
this_thr->th.th_reap_state = KMP_SAFE_TO_REAP;
} // if
} // if
KMP_FSYNC_SPIN_PREPARE(CCAST(void *, spin));
if (TCR_4(__kmp_global.g.g_done)) {
if (__kmp_global.g.g_abort)
__kmp_abort_thread();
break;
}
// If we are oversubscribed, or have waited a bit (and
// KMP_LIBRARY=throughput), then yield
// TODO: Should it be number of cores instead of thread contexts? Like:
// KMP_YIELD(TCR_4(__kmp_nth) > __kmp_ncores);
// Need performance improvement data to make the change...
if (oversubscribed) {
KMP_YIELD(1);
} else {
KMP_YIELD_SPIN(spins);
}
// Check if this thread was transferred from a team
// to the thread pool (or vice-versa) while spinning.
in_pool = !!TCR_4(this_thr->th.th_in_pool);
if (in_pool != !!this_thr->th.th_active_in_pool) {
if (in_pool) { // Recently transferred from team to pool
KMP_ATOMIC_INC(&__kmp_thread_pool_active_nth);
this_thr->th.th_active_in_pool = TRUE;
/* Here, we cannot assert that:
KMP_DEBUG_ASSERT(TCR_4(__kmp_thread_pool_active_nth) <=
__kmp_thread_pool_nth);
__kmp_thread_pool_nth is inc/dec'd by the master thread while the
fork/join lock is held, whereas __kmp_thread_pool_active_nth is
inc/dec'd asynchronously by the workers. The two can get out of sync
for brief periods of time. */
} else { // Recently transferred from pool to team
KMP_ATOMIC_DEC(&__kmp_thread_pool_active_nth);
KMP_DEBUG_ASSERT(TCR_4(__kmp_thread_pool_active_nth) >= 0);
this_thr->th.th_active_in_pool = FALSE;
}
}
#if KMP_STATS_ENABLED
// Check if thread has been signalled to idle state
// This indicates that the logical "join-barrier" has finished
if (this_thr->th.th_stats->isIdle() &&
KMP_GET_THREAD_STATE() == FORK_JOIN_BARRIER) {
KMP_SET_THREAD_STATE(IDLE);
KMP_PUSH_PARTITIONED_TIMER(OMP_idle);
}
#endif
// Don't suspend if KMP_BLOCKTIME is set to "infinite"
if (__kmp_dflt_blocktime == KMP_MAX_BLOCKTIME)
continue;
// Don't suspend if there is a likelihood of new tasks being spawned.
if ((task_team != NULL) && TCR_4(task_team->tt.tt_found_tasks))
continue;
#if KMP_USE_MONITOR
// If we have waited a bit more, fall asleep
if (TCR_4(__kmp_global.g.g_time.dt.t_value) < hibernate)
continue;
#else
if (KMP_BLOCKING(hibernate_goal, poll_count++))
continue;
#endif
KF_TRACE(50, ("__kmp_wait_sleep: T#%d suspend time reached\n", th_gtid));
#if KMP_OS_UNIX
if (final_spin)
KMP_ATOMIC_ST_REL(&this_thr->th.th_blocking, false);
#endif
flag->suspend(th_gtid);
#if KMP_OS_UNIX
if (final_spin)
KMP_ATOMIC_ST_REL(&this_thr->th.th_blocking, true);
#endif
if (TCR_4(__kmp_global.g.g_done)) {
if (__kmp_global.g.g_abort)
__kmp_abort_thread();
break;
} else if (__kmp_tasking_mode != tskm_immediate_exec &&
this_thr->th.th_reap_state == KMP_SAFE_TO_REAP) {
this_thr->th.th_reap_state = KMP_NOT_SAFE_TO_REAP;
}
// TODO: If thread is done with work and times out, disband/free
}
#if OMPT_SUPPORT
ompt_state_t ompt_exit_state = this_thr->th.ompt_thread_info.state;
if (ompt_enabled.enabled && ompt_exit_state != ompt_state_undefined) {
#if OMPT_OPTIONAL
if (final_spin) {
__ompt_implicit_task_end(this_thr, ompt_exit_state, tId);
ompt_exit_state = this_thr->th.ompt_thread_info.state;
}
#endif
if (ompt_exit_state == ompt_state_idle) {
this_thr->th.ompt_thread_info.state = ompt_state_overhead;
}
}
#endif
#if KMP_STATS_ENABLED
// If we were put into idle state, pop that off the state stack
if (KMP_GET_THREAD_STATE() == IDLE) {
KMP_POP_PARTITIONED_TIMER();
KMP_SET_THREAD_STATE(thread_state);
this_thr->th.th_stats->resetIdleFlag();
}
#endif
#if KMP_OS_UNIX
if (final_spin)
KMP_ATOMIC_ST_REL(&this_thr->th.th_blocking, false);
#endif
KMP_FSYNC_SPIN_ACQUIRED(CCAST(void *, spin));
}
/* Release any threads specified as waiting on the flag by releasing the flag
and resume the waiting thread if indicated by the sleep bit(s). A thread that
calls __kmp_wait_template must call this function to wake up the potentially
sleeping thread and prevent deadlocks! */
template <class C> static inline void __kmp_release_template(C *flag) {
#ifdef KMP_DEBUG
int gtid = TCR_4(__kmp_init_gtid) ? __kmp_get_gtid() : -1;
#endif
KF_TRACE(20, ("__kmp_release: T#%d releasing flag(%x)\n", gtid, flag->get()));
KMP_DEBUG_ASSERT(flag->get());
KMP_FSYNC_RELEASING(flag->get_void_p());
flag->internal_release();
KF_TRACE(100, ("__kmp_release: T#%d set new spin=%d\n", gtid, flag->get(),
flag->load()));
if (__kmp_dflt_blocktime != KMP_MAX_BLOCKTIME) {
// Only need to check sleep stuff if infinite block time not set.
// Are *any* threads waiting on flag sleeping?
if (flag->is_any_sleeping()) {
for (unsigned int i = 0; i < flag->get_num_waiters(); ++i) {
// if sleeping waiter exists at i, sets current_waiter to i inside flag
kmp_info_t *waiter = flag->get_waiter(i);
if (waiter) {
int wait_gtid = waiter->th.th_info.ds.ds_gtid;
// Wake up thread if needed
KF_TRACE(50, ("__kmp_release: T#%d waking up thread T#%d since sleep "
"flag(%p) set\n",
gtid, wait_gtid, flag->get()));
flag->resume(wait_gtid); // unsets flag's current_waiter when done
}
}
}
}
}
template <typename FlagType> struct flag_traits {};
template <> struct flag_traits<kmp_uint32> {
typedef kmp_uint32 flag_t;
static const flag_type t = flag32;
static inline flag_t tcr(flag_t f) { return TCR_4(f); }
static inline flag_t test_then_add4(volatile flag_t *f) {
return KMP_TEST_THEN_ADD4_32(RCAST(volatile kmp_int32 *, f));
}
static inline flag_t test_then_or(volatile flag_t *f, flag_t v) {
return KMP_TEST_THEN_OR32(f, v);
}
static inline flag_t test_then_and(volatile flag_t *f, flag_t v) {
return KMP_TEST_THEN_AND32(f, v);
}
};
template <> struct flag_traits<kmp_uint64> {
typedef kmp_uint64 flag_t;
static const flag_type t = flag64;
static inline flag_t tcr(flag_t f) { return TCR_8(f); }
static inline flag_t test_then_add4(volatile flag_t *f) {
return KMP_TEST_THEN_ADD4_64(RCAST(volatile kmp_int64 *, f));
}
static inline flag_t test_then_or(volatile flag_t *f, flag_t v) {
return KMP_TEST_THEN_OR64(f, v);
}
static inline flag_t test_then_and(volatile flag_t *f, flag_t v) {
return KMP_TEST_THEN_AND64(f, v);
}
};
// Basic flag that does not use C11 Atomics
template <typename FlagType>
class kmp_basic_flag_native : public kmp_flag_native<FlagType> {
typedef flag_traits<FlagType> traits_type;
  FlagType checker; /**< Value the flag is compared against to check whether
                       it has been released. */
  kmp_info_t
      *waiting_threads[1]; /**< Array of threads sleeping on this flag. */
  kmp_uint32
      num_waiting_threads; /**< Number of threads sleeping on this flag. */
public:
kmp_basic_flag_native(volatile FlagType *p)
: kmp_flag_native<FlagType>(p, traits_type::t), num_waiting_threads(0) {}
kmp_basic_flag_native(volatile FlagType *p, kmp_info_t *thr)
: kmp_flag_native<FlagType>(p, traits_type::t), num_waiting_threads(1) {
waiting_threads[0] = thr;
}
kmp_basic_flag_native(volatile FlagType *p, FlagType c)
: kmp_flag_native<FlagType>(p, traits_type::t), checker(c),
num_waiting_threads(0) {}
/*!
 * @param i in index into waiting_threads
* @result the thread that is waiting at index i
*/
kmp_info_t *get_waiter(kmp_uint32 i) {
KMP_DEBUG_ASSERT(i < num_waiting_threads);
return waiting_threads[i];
}
/*!
* @result num_waiting_threads
*/
kmp_uint32 get_num_waiters() { return num_waiting_threads; }
/*!
* @param thr in the thread which is now waiting
*
* Insert a waiting thread at index 0.
*/
void set_waiter(kmp_info_t *thr) {
waiting_threads[0] = thr;
num_waiting_threads = 1;
}
/*!
* @result true if the flag object has been released.
*/
bool done_check() { return traits_type::tcr(*(this->get())) == checker; }
/*!
* @param old_loc in old value of flag
* @result true if the flag's old value indicates it was released.
*/
bool done_check_val(FlagType old_loc) { return old_loc == checker; }
/*!
* @result true if the flag object is not yet released.
* Used in __kmp_wait_template like:
* @code
* while (flag.notdone_check()) { pause(); }
* @endcode
*/
bool notdone_check() { return traits_type::tcr(*(this->get())) != checker; }
/*!
* @result Actual flag value before release was applied.
* Trigger all waiting threads to run by modifying flag to release state.
*/
void internal_release() {
(void)traits_type::test_then_add4((volatile FlagType *)this->get());
}
/*!
* @result Actual flag value before sleep bit(s) set.
* Notes that there is at least one thread sleeping on the flag by setting
* sleep bit(s).
*/
FlagType set_sleeping() {
return traits_type::test_then_or((volatile FlagType *)this->get(),
KMP_BARRIER_SLEEP_STATE);
}
/*!
* @result Actual flag value before sleep bit(s) cleared.
* Notes that there are no longer threads sleeping on the flag by clearing
* sleep bit(s).
*/
FlagType unset_sleeping() {
return traits_type::test_then_and((volatile FlagType *)this->get(),
~KMP_BARRIER_SLEEP_STATE);
}
/*!
* @param old_loc in old value of flag
* Test whether there are threads sleeping on the flag's old value in old_loc.
*/
bool is_sleeping_val(FlagType old_loc) {
return old_loc & KMP_BARRIER_SLEEP_STATE;
}
/*!
* Test whether there are threads sleeping on the flag.
*/
bool is_sleeping() { return is_sleeping_val(*(this->get())); }
bool is_any_sleeping() { return is_sleeping_val(*(this->get())); }
kmp_uint8 *get_stolen() { return NULL; }
enum barrier_type get_bt() { return bs_last_barrier; }
};
template <typename FlagType> class kmp_basic_flag : public kmp_flag<FlagType> {
typedef flag_traits<FlagType> traits_type;
  FlagType checker; /**< Value the flag is compared against to check whether
                       it has been released. */
  kmp_info_t
      *waiting_threads[1]; /**< Array of threads sleeping on this flag. */
  kmp_uint32
      num_waiting_threads; /**< Number of threads sleeping on this flag. */
public:
kmp_basic_flag(std::atomic<FlagType> *p)
: kmp_flag<FlagType>(p, traits_type::t), num_waiting_threads(0) {}
kmp_basic_flag(std::atomic<FlagType> *p, kmp_info_t *thr)
: kmp_flag<FlagType>(p, traits_type::t), num_waiting_threads(1) {
waiting_threads[0] = thr;
}
kmp_basic_flag(std::atomic<FlagType> *p, FlagType c)
: kmp_flag<FlagType>(p, traits_type::t), checker(c),
num_waiting_threads(0) {}
/*!
 * @param i in index into waiting_threads
* @result the thread that is waiting at index i
*/
kmp_info_t *get_waiter(kmp_uint32 i) {
KMP_DEBUG_ASSERT(i < num_waiting_threads);
return waiting_threads[i];
}
/*!
* @result num_waiting_threads
*/
kmp_uint32 get_num_waiters() { return num_waiting_threads; }
/*!
* @param thr in the thread which is now waiting
*
* Insert a waiting thread at index 0.
*/
void set_waiter(kmp_info_t *thr) {
waiting_threads[0] = thr;
num_waiting_threads = 1;
}
/*!
* @result true if the flag object has been released.
*/
bool done_check() { return this->load() == checker; }
/*!
* @param old_loc in old value of flag
* @result true if the flag's old value indicates it was released.
*/
bool done_check_val(FlagType old_loc) { return old_loc == checker; }
/*!
* @result true if the flag object is not yet released.
* Used in __kmp_wait_template like:
* @code
* while (flag.notdone_check()) { pause(); }
* @endcode
*/
bool notdone_check() { return this->load() != checker; }
/*!
* @result Actual flag value before release was applied.
* Trigger all waiting threads to run by modifying flag to release state.
*/
void internal_release() { KMP_ATOMIC_ADD(this->get(), 4); }
/*!
* @result Actual flag value before sleep bit(s) set.
* Notes that there is at least one thread sleeping on the flag by setting
* sleep bit(s).
*/
FlagType set_sleeping() {
return KMP_ATOMIC_OR(this->get(), KMP_BARRIER_SLEEP_STATE);
}
/*!
* @result Actual flag value before sleep bit(s) cleared.
* Notes that there are no longer threads sleeping on the flag by clearing
* sleep bit(s).
*/
FlagType unset_sleeping() {
return KMP_ATOMIC_AND(this->get(), ~KMP_BARRIER_SLEEP_STATE);
}
/*!
* @param old_loc in old value of flag
* Test whether there are threads sleeping on the flag's old value in old_loc.
*/
bool is_sleeping_val(FlagType old_loc) {
return old_loc & KMP_BARRIER_SLEEP_STATE;
}
/*!
* Test whether there are threads sleeping on the flag.
*/
bool is_sleeping() { return is_sleeping_val(this->load()); }
bool is_any_sleeping() { return is_sleeping_val(this->load()); }
kmp_uint8 *get_stolen() { return NULL; }
enum barrier_type get_bt() { return bs_last_barrier; }
};
class kmp_flag_32 : public kmp_basic_flag<kmp_uint32> {
public:
kmp_flag_32(std::atomic<kmp_uint32> *p) : kmp_basic_flag<kmp_uint32>(p) {}
kmp_flag_32(std::atomic<kmp_uint32> *p, kmp_info_t *thr)
: kmp_basic_flag<kmp_uint32>(p, thr) {}
kmp_flag_32(std::atomic<kmp_uint32> *p, kmp_uint32 c)
: kmp_basic_flag<kmp_uint32>(p, c) {}
void suspend(int th_gtid) { __kmp_suspend_32(th_gtid, this); }
void resume(int th_gtid) { __kmp_resume_32(th_gtid, this); }
int execute_tasks(kmp_info_t *this_thr, kmp_int32 gtid, int final_spin,
int *thread_finished USE_ITT_BUILD_ARG(void *itt_sync_obj),
kmp_int32 is_constrained) {
return __kmp_execute_tasks_32(
this_thr, gtid, this, final_spin,
thread_finished USE_ITT_BUILD_ARG(itt_sync_obj), is_constrained);
}
void wait(kmp_info_t *this_thr,
int final_spin USE_ITT_BUILD_ARG(void *itt_sync_obj)) {
if (final_spin)
__kmp_wait_template<kmp_flag_32, TRUE>(
this_thr, this USE_ITT_BUILD_ARG(itt_sync_obj));
else
__kmp_wait_template<kmp_flag_32, FALSE>(
this_thr, this USE_ITT_BUILD_ARG(itt_sync_obj));
}
void release() { __kmp_release_template(this); }
flag_type get_ptr_type() { return flag32; }
};
class kmp_flag_64 : public kmp_basic_flag_native<kmp_uint64> {
public:
kmp_flag_64(volatile kmp_uint64 *p) : kmp_basic_flag_native<kmp_uint64>(p) {}
kmp_flag_64(volatile kmp_uint64 *p, kmp_info_t *thr)
: kmp_basic_flag_native<kmp_uint64>(p, thr) {}
kmp_flag_64(volatile kmp_uint64 *p, kmp_uint64 c)
: kmp_basic_flag_native<kmp_uint64>(p, c) {}
void suspend(int th_gtid) { __kmp_suspend_64(th_gtid, this); }
void resume(int th_gtid) { __kmp_resume_64(th_gtid, this); }
int execute_tasks(kmp_info_t *this_thr, kmp_int32 gtid, int final_spin,
int *thread_finished USE_ITT_BUILD_ARG(void *itt_sync_obj),
kmp_int32 is_constrained) {
return __kmp_execute_tasks_64(
this_thr, gtid, this, final_spin,
thread_finished USE_ITT_BUILD_ARG(itt_sync_obj), is_constrained);
}
void wait(kmp_info_t *this_thr,
int final_spin USE_ITT_BUILD_ARG(void *itt_sync_obj)) {
if (final_spin)
__kmp_wait_template<kmp_flag_64, TRUE>(
this_thr, this USE_ITT_BUILD_ARG(itt_sync_obj));
else
__kmp_wait_template<kmp_flag_64, FALSE>(
this_thr, this USE_ITT_BUILD_ARG(itt_sync_obj));
}
void release() { __kmp_release_template(this); }
flag_type get_ptr_type() { return flag64; }
};
// Hierarchical 64-bit on-core barrier instantiation
class kmp_flag_oncore : public kmp_flag_native<kmp_uint64> {
kmp_uint64 checker;
kmp_info_t *waiting_threads[1];
kmp_uint32 num_waiting_threads;
kmp_uint32
offset; /**< Portion of flag that is of interest for an operation. */
bool flag_switch; /**< Indicates a switch in flag location. */
enum barrier_type bt; /**< Barrier type. */
kmp_info_t *this_thr; /**< Thread that may be redirected to different flag
location. */
#if USE_ITT_BUILD
void *
itt_sync_obj; /**< ITT object that must be passed to new flag location. */
#endif
unsigned char &byteref(volatile kmp_uint64 *loc, size_t offset) {
return (RCAST(unsigned char *, CCAST(kmp_uint64 *, loc)))[offset];
}
public:
kmp_flag_oncore(volatile kmp_uint64 *p)
: kmp_flag_native<kmp_uint64>(p, flag_oncore), num_waiting_threads(0),
flag_switch(false) {}
kmp_flag_oncore(volatile kmp_uint64 *p, kmp_uint32 idx)
: kmp_flag_native<kmp_uint64>(p, flag_oncore), num_waiting_threads(0),
offset(idx), flag_switch(false) {}
kmp_flag_oncore(volatile kmp_uint64 *p, kmp_uint64 c, kmp_uint32 idx,
enum barrier_type bar_t,
kmp_info_t *thr USE_ITT_BUILD_ARG(void *itt))
: kmp_flag_native<kmp_uint64>(p, flag_oncore), checker(c),
num_waiting_threads(0), offset(idx), flag_switch(false), bt(bar_t),
this_thr(thr) USE_ITT_BUILD_ARG(itt_sync_obj(itt)) {}
kmp_info_t *get_waiter(kmp_uint32 i) {
KMP_DEBUG_ASSERT(i < num_waiting_threads);
return waiting_threads[i];
}
kmp_uint32 get_num_waiters() { return num_waiting_threads; }
void set_waiter(kmp_info_t *thr) {
waiting_threads[0] = thr;
num_waiting_threads = 1;
}
bool done_check_val(kmp_uint64 old_loc) {
return byteref(&old_loc, offset) == checker;
}
bool done_check() { return done_check_val(*get()); }
bool notdone_check() {
// Calculate flag_switch
if (this_thr->th.th_bar[bt].bb.wait_flag == KMP_BARRIER_SWITCH_TO_OWN_FLAG)
flag_switch = true;
if (byteref(get(), offset) != 1 && !flag_switch)
return true;
else if (flag_switch) {
this_thr->th.th_bar[bt].bb.wait_flag = KMP_BARRIER_SWITCHING;
kmp_flag_64 flag(&this_thr->th.th_bar[bt].bb.b_go,
(kmp_uint64)KMP_BARRIER_STATE_BUMP);
__kmp_wait_64(this_thr, &flag, TRUE USE_ITT_BUILD_ARG(itt_sync_obj));
}
return false;
}
void internal_release() {
// Other threads can write their own bytes simultaneously.
if (__kmp_dflt_blocktime == KMP_MAX_BLOCKTIME) {
byteref(get(), offset) = 1;
} else {
kmp_uint64 mask = 0;
byteref(&mask, offset) = 1;
KMP_TEST_THEN_OR64(get(), mask);
}
}
kmp_uint64 set_sleeping() {
return KMP_TEST_THEN_OR64(get(), KMP_BARRIER_SLEEP_STATE);
}
kmp_uint64 unset_sleeping() {
return KMP_TEST_THEN_AND64(get(), ~KMP_BARRIER_SLEEP_STATE);
}
bool is_sleeping_val(kmp_uint64 old_loc) {
return old_loc & KMP_BARRIER_SLEEP_STATE;
}
bool is_sleeping() { return is_sleeping_val(*get()); }
bool is_any_sleeping() { return is_sleeping_val(*get()); }
void wait(kmp_info_t *this_thr, int final_spin) {
if (final_spin)
__kmp_wait_template<kmp_flag_oncore, TRUE>(
this_thr, this USE_ITT_BUILD_ARG(itt_sync_obj));
else
__kmp_wait_template<kmp_flag_oncore, FALSE>(
this_thr, this USE_ITT_BUILD_ARG(itt_sync_obj));
}
void release() { __kmp_release_template(this); }
void suspend(int th_gtid) { __kmp_suspend_oncore(th_gtid, this); }
void resume(int th_gtid) { __kmp_resume_oncore(th_gtid, this); }
int execute_tasks(kmp_info_t *this_thr, kmp_int32 gtid, int final_spin,
int *thread_finished USE_ITT_BUILD_ARG(void *itt_sync_obj),
kmp_int32 is_constrained) {
return __kmp_execute_tasks_oncore(
this_thr, gtid, this, final_spin,
thread_finished USE_ITT_BUILD_ARG(itt_sync_obj), is_constrained);
}
kmp_uint8 *get_stolen() { return NULL; }
enum barrier_type get_bt() { return bt; }
flag_type get_ptr_type() { return flag_oncore; }
};
// Used to wake up threads; the volatile void *flag is usually the th_sleep_loc
// associated with the given gtid.
static inline void __kmp_null_resume_wrapper(int gtid, volatile void *flag) {
if (!flag)
return;
switch (RCAST(kmp_flag_64 *, CCAST(void *, flag))->get_type()) {
case flag32:
__kmp_resume_32(gtid, NULL);
break;
case flag64:
__kmp_resume_64(gtid, NULL);
break;
case flag_oncore:
__kmp_resume_oncore(gtid, NULL);
break;
}
}
/*!
@}
*/
#endif // KMP_WAIT_RELEASE_H


@ -0,0 +1,73 @@
/*
* kmp_wrapper_getpid.h -- getpid() declaration.
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_WRAPPER_GETPID_H
#define KMP_WRAPPER_GETPID_H
#if KMP_OS_UNIX
// On Unix-like systems (Linux* OS and OS X*) getpid() is declared in standard
// headers.
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#if KMP_OS_DARWIN
// OS X
#define __kmp_gettid() syscall(SYS_thread_selfid)
#elif KMP_OS_NETBSD
#include <lwp.h>
#define __kmp_gettid() _lwp_self()
#elif defined(SYS_gettid)
// Assume other Unix systems define the SYS_gettid syscall for getting the OS
// thread id.
#define __kmp_gettid() syscall(SYS_gettid)
#else
#warning No gettid found, use getpid instead
#define __kmp_gettid() getpid()
#endif
#elif KMP_OS_WINDOWS
// On Windows* OS _getpid() returns int (not pid_t) and is declared in
// "process.h".
#include <process.h>
// Let us simulate Unix.
#if KMP_MSVC_COMPAT
typedef int pid_t;
#endif
#define getpid _getpid
#define __kmp_gettid() GetCurrentThreadId()
#else
#error Unknown or unsupported OS.
#endif
/* TODO: All the libomp source code uses the pid_t type to store the result of
   getpid(), which is good. But it is often printed with "%d", which is not:
   that ignores the definition of pid_t (pid_t may be wider than int). It seems
   all pid prints should be rewritten as:
       printf( "%" KMP_UINT64_SPEC, (kmp_uint64) pid );
   or (at least) as
       printf( "%" KMP_UINT32_SPEC, (kmp_uint32) pid );
   (kmp_uint32, kmp_uint64, KMP_UINT64_SPEC, and KMP_UINT32_SPEC are defined in
   "kmp_os.h".) */
#endif // KMP_WRAPPER_GETPID_H
// end of file //


@ -0,0 +1,197 @@
/*
* kmp_wrapper_malloc.h -- Wrappers for memory allocation routines
* (malloc(), free(), and others).
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef KMP_WRAPPER_MALLOC_H
#define KMP_WRAPPER_MALLOC_H
/* This header serves 3 purposes:
1. Declaring standard memory allocation routines in an OS-independent way.
2. Passing source location info through memory allocation wrappers.
3. Enabling native memory debugging capabilities.
1. Declaring standard memory allocation routines in an OS-independent way.
-----------------------------------------------------------------------
On Linux* OS the alloca() function is declared in the <alloca.h> header, while
on Windows* OS there is no <alloca.h> header and the function _alloca() (note
the underscore!) is declared in <malloc.h>. This header eliminates these
differences, so client code including "kmp_wrapper_malloc.h" can rely on the
following routines:
malloc
calloc
realloc
free
alloca
in an OS-independent way. It also enables memory tracking capabilities in
debug builds. (Currently this is available only on Windows* OS.)
2. Passing source location info through memory allocation wrappers.
-------------------------------------------------------------------
Some tools can help debug memory errors, for example, by reporting memory
leaks. However, memory allocation wrappers may obscure the original source
location of an error.
For example:
void * aligned_malloc( int size ) {
void * ptr = malloc( size ); // All the memory leaks will be reported at
// this line.
// some adjustments...
return ptr;
};
ptr = aligned_malloc( size ); // Memory leak will *not* be detected here. :-(
To overcome the problem, information about original source location should
be passed through all the memory allocation wrappers, for example:
void * aligned_malloc( int size, char const * file, int line ) {
void * ptr = _malloc_dbg( size, file, line );
// some adjustments...
return ptr;
};
void * ptr = aligned_malloc( size, __FILE__, __LINE__ );
This is a good idea for debug builds, but passing additional arguments
impacts performance. Disabling the extra arguments in the release version of
the software introduces too much conditional compilation, which makes the
code unreadable. This header defines a few macros and functions to facilitate
it:
void * _aligned_malloc( int size KMP_SRC_LOC_DECL ) {
void * ptr = malloc_src_loc( size KMP_SRC_LOC_PARM );
// some adjustments...
return ptr;
};
#define aligned_malloc( size ) _aligned_malloc( (size) KMP_SRC_LOC_CURR )
// Use macro instead of direct call to function.
void * ptr = aligned_malloc( size ); // Bingo! Memory leak will be
// reported at this line.
3. Enabling native memory debugging capabilities.
-------------------------------------------------
Some platforms may offer memory debugging capabilities. For example, debug
version of Microsoft RTL tracks all memory allocations and can report memory
leaks. This header enables this, and makes report more useful (see "Passing
source location info through memory allocation wrappers").
*/
#include <stdlib.h>
#include "kmp_os.h"
// Include alloca() declaration.
#if KMP_OS_WINDOWS
#include <malloc.h> // Windows* OS: _alloca() declared in "malloc.h".
#if KMP_MSVC_COMPAT
#define alloca _alloca // Allow use of alloca() without the underscore.
#endif
#elif KMP_OS_DRAGONFLY || KMP_OS_FREEBSD || KMP_OS_NETBSD || KMP_OS_OPENBSD
// Declared in "stdlib.h".
#elif KMP_OS_UNIX
#include <alloca.h> // Linux* OS and OS X*: alloca() declared in <alloca.h>.
#else
#error Unknown or unsupported OS.
#endif
/* KMP_SRC_LOC_DECL -- Declares source location parameters, to be used in a
function declaration.
KMP_SRC_LOC_PARM -- Source location parameters, to be used to pass
parameters to underlying levels.
KMP_SRC_LOC_CURR -- Source location arguments describing current location,
to be used at top-level.
Typical usage:
void * _aligned_malloc( int size KMP_SRC_LOC_DECL ) {
// Note: the comma is deliberately omitted before KMP_SRC_LOC_DECL; the
// macro itself supplies the leading comma.
KE_TRACE( 25, ( "called from %s:%d\n", KMP_SRC_LOC_PARM ) );
...
}
#define aligned_malloc( size ) _aligned_malloc( (size) KMP_SRC_LOC_CURR )
// Use macro instead of direct call to function -- macro passes info
// about current source location to the func.
*/
#if KMP_DEBUG
#define KMP_SRC_LOC_DECL , char const *_file_, int _line_
#define KMP_SRC_LOC_PARM , _file_, _line_
#define KMP_SRC_LOC_CURR , __FILE__, __LINE__
#else
#define KMP_SRC_LOC_DECL
#define KMP_SRC_LOC_PARM
#define KMP_SRC_LOC_CURR
#endif // KMP_DEBUG
/* malloc_src_loc() and free_src_loc() are pseudo-functions (really macros)
which accept extra arguments (source location info) in debug mode. They
should be used in place of malloc() and free(); this allows enabling native
memory debugging capabilities (if any).
Typical usage:
ptr = malloc_src_loc( size KMP_SRC_LOC_PARM );
// Inside memory allocation wrapper, or
ptr = malloc_src_loc( size KMP_SRC_LOC_CURR );
// Outside of memory allocation wrapper.
*/
#define malloc_src_loc(args) _malloc_src_loc(args)
#define free_src_loc(args) _free_src_loc(args)
/* Depending on build mode (debug or release), malloc_src_loc is declared with
1 or 3 parameters, but calls to malloc_src_loc() are always the same:
... malloc_src_loc( size KMP_SRC_LOC_PARM ); // or KMP_SRC_LOC_CURR
Without help, the compiler would issue a "too few arguments in macro
invocation" warning/error. Declaring two macros, malloc_src_loc() and
_malloc_src_loc(), overcomes the problem. */
#if KMP_DEBUG
#if KMP_OS_WINDOWS && _DEBUG
// KMP_DEBUG != _DEBUG. MS debug RTL is available only if _DEBUG is defined.
// Windows* OS has native memory debugging capabilities. Enable them.
#include <crtdbg.h>
#define KMP_MEM_BLOCK _CLIENT_BLOCK
#define malloc(size) _malloc_dbg((size), KMP_MEM_BLOCK, __FILE__, __LINE__)
#define calloc(num, size) \
_calloc_dbg((num), (size), KMP_MEM_BLOCK, __FILE__, __LINE__)
#define realloc(ptr, size) \
_realloc_dbg((ptr), (size), KMP_MEM_BLOCK, __FILE__, __LINE__)
#define free(ptr) _free_dbg((ptr), KMP_MEM_BLOCK)
#define _malloc_src_loc(size, file, line) \
_malloc_dbg((size), KMP_MEM_BLOCK, (file), (line))
#define _free_src_loc(ptr, file, line) _free_dbg((ptr), KMP_MEM_BLOCK)
#else
// Linux* OS, OS X*, or non-debug Windows* OS.
#define _malloc_src_loc(size, file, line) malloc((size))
#define _free_src_loc(ptr, file, line) free((ptr))
#endif
#else
// In release build malloc_src_loc() and free_src_loc() do not have extra
// parameters.
#define _malloc_src_loc(size) malloc((size))
#define _free_src_loc(ptr) free((ptr))
#endif // KMP_DEBUG
#endif // KMP_WRAPPER_MALLOC_H
// end of file //

70
runtime/src/libomp.rc.var Normal file

@ -0,0 +1,70 @@
// libomp.rc.var
//
////===----------------------------------------------------------------------===//
////
//// The LLVM Compiler Infrastructure
////
//// This file is dual licensed under the MIT and the University of Illinois Open
//// Source Licenses. See LICENSE.txt for details.
////
////===----------------------------------------------------------------------===//
//
#include "winresrc.h"
#include "kmp_config.h"
LANGUAGE LANG_ENGLISH, SUBLANG_ENGLISH_US // English (U.S.) resources
#pragma code_page(1252)
VS_VERSION_INFO VERSIONINFO
// Parts of FILEVERSION and PRODUCTVERSION are 16-bit fields, entire build date yyyymmdd
// does not fit into one version part, so we need to split it into yyyy and mmdd:
FILEVERSION @LIBOMP_VERSION_MAJOR@,@LIBOMP_VERSION_MINOR@,@LIBOMP_VERSION_BUILD_YEAR@,@LIBOMP_VERSION_BUILD_MONTH_DAY@
PRODUCTVERSION @LIBOMP_VERSION_MAJOR@,@LIBOMP_VERSION_MINOR@,@LIBOMP_VERSION_BUILD_YEAR@,@LIBOMP_VERSION_BUILD_MONTH_DAY@
FILEFLAGSMASK VS_FFI_FILEFLAGSMASK
FILEFLAGS 0
#if KMP_DEBUG
| VS_FF_DEBUG
#endif
#if @LIBOMP_VERSION_BUILD@ == 0
| VS_FF_PRIVATEBUILD | VS_FF_PRERELEASE
#endif
FILEOS VOS_NT_WINDOWS32 // Windows* Server* 2003, XP*, 2000, or NT*
FILETYPE VFT_DLL
BEGIN
BLOCK "StringFileInfo"
BEGIN
BLOCK "040904b0" // U.S. English, Unicode (0x04b0 == 1200)
BEGIN
// FileDescription and LegalCopyright should be short.
VALUE "FileDescription", "LLVM* OpenMP* Runtime Library\0"
// Following values may be relatively long.
VALUE "CompanyName", "LLVM\0"
// VALUE "LegalTrademarks", "\0" // Not used for now.
VALUE "ProductName", "LLVM* OpenMP* Runtime Library\0"
VALUE "ProductVersion", "@LIBOMP_VERSION_MAJOR@.@LIBOMP_VERSION_MINOR@\0"
VALUE "FileVersion", "@LIBOMP_VERSION_BUILD@\0"
VALUE "InternalName", "@LIBOMP_LIB_FILE@\0"
VALUE "OriginalFilename", "@LIBOMP_LIB_FILE@\0"
VALUE "Comments",
"LLVM* OpenMP* @LIBOMP_LEGAL_TYPE@ Library "
"version @LIBOMP_VERSION_MAJOR@.@LIBOMP_VERSION_MINOR@.@LIBOMP_VERSION_BUILD@ "
"for @LIBOMP_LEGAL_ARCH@ architecture built on @LIBOMP_BUILD_DATE@.\0"
#if @LIBOMP_VERSION_BUILD@ == 0
VALUE "PrivateBuild",
"This is a development build.\0"
#endif
// VALUE "SpecialBuild", "\0" // Not used for now.
END
END
BLOCK "VarFileInfo"
BEGIN
VALUE "Translation", 1033, 1200
// 1033 -- U.S. English, 1200 -- Unicode
END
END
// end of file //


@ -0,0 +1,112 @@
/******************************************************************************
* File: ompt-event-specific.h
*
* Description:
*
* specify which of the OMPT events are implemented by this runtime system
* and the level of their implementation by a runtime system.
*****************************************************************************/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef __OMPT_EVENT_SPECIFIC_H__
#define __OMPT_EVENT_SPECIFIC_H__
#define _ompt_tokenpaste_helper(x, y) x##y
#define _ompt_tokenpaste(x, y) _ompt_tokenpaste_helper(x, y)
#define ompt_event_implementation_status(e) _ompt_tokenpaste(e, _implemented)
/*----------------------------------------------------------------------------
| Specify whether an event may occur or not, and whether event callbacks
| never, sometimes, or always occur.
|
| The values for these constants are defined in section 6.1.2 of
| the OMPT TR. They are exposed to tools through ompt_set_callback.
+--------------------------------------------------------------------------*/
#define ompt_event_UNIMPLEMENTED ompt_set_never
#define ompt_event_MAY_CONVENIENT ompt_set_sometimes
#define ompt_event_MAY_ALWAYS ompt_set_always
#if OMPT_OPTIONAL
#define ompt_event_MAY_ALWAYS_OPTIONAL ompt_event_MAY_ALWAYS
#else
#define ompt_event_MAY_ALWAYS_OPTIONAL ompt_event_UNIMPLEMENTED
#endif
/*----------------------------------------------------------------------------
| Mandatory Events
+--------------------------------------------------------------------------*/
#define ompt_callback_thread_begin_implemented ompt_event_MAY_ALWAYS
#define ompt_callback_thread_end_implemented ompt_event_MAY_ALWAYS
#define ompt_callback_parallel_begin_implemented ompt_event_MAY_ALWAYS
#define ompt_callback_parallel_end_implemented ompt_event_MAY_ALWAYS
#define ompt_callback_task_create_implemented ompt_event_MAY_ALWAYS
#define ompt_callback_task_schedule_implemented ompt_event_MAY_ALWAYS
#define ompt_callback_implicit_task_implemented ompt_event_MAY_ALWAYS
#define ompt_callback_target_implemented ompt_event_UNIMPLEMENTED
#define ompt_callback_target_data_op_implemented ompt_event_UNIMPLEMENTED
#define ompt_callback_target_submit_implemented ompt_event_UNIMPLEMENTED
#define ompt_callback_control_tool_implemented ompt_event_MAY_ALWAYS
#define ompt_callback_device_initialize_implemented ompt_event_UNIMPLEMENTED
#define ompt_callback_device_finalize_implemented ompt_event_UNIMPLEMENTED
#define ompt_callback_device_load_implemented ompt_event_UNIMPLEMENTED
#define ompt_callback_device_unload_implemented ompt_event_UNIMPLEMENTED
/*----------------------------------------------------------------------------
| Optional Events
+--------------------------------------------------------------------------*/
#define ompt_callback_sync_region_wait_implemented \
ompt_event_MAY_ALWAYS_OPTIONAL
#define ompt_callback_mutex_released_implemented ompt_event_MAY_ALWAYS_OPTIONAL
#if OMP_40_ENABLED
#define ompt_callback_dependences_implemented \
ompt_event_MAY_ALWAYS_OPTIONAL
#define ompt_callback_task_dependence_implemented ompt_event_MAY_ALWAYS_OPTIONAL
#else
#define ompt_callback_dependences_implemented ompt_event_UNIMPLEMENTED
#define ompt_callback_task_dependence_implemented ompt_event_UNIMPLEMENTED
#endif /* OMP_40_ENABLED */
#define ompt_callback_work_implemented ompt_event_MAY_ALWAYS_OPTIONAL
#define ompt_callback_master_implemented ompt_event_MAY_ALWAYS_OPTIONAL
#define ompt_callback_target_map_implemented ompt_event_UNIMPLEMENTED
#define ompt_callback_sync_region_implemented ompt_event_MAY_ALWAYS_OPTIONAL
#define ompt_callback_lock_init_implemented ompt_event_MAY_ALWAYS_OPTIONAL
#define ompt_callback_lock_destroy_implemented ompt_event_MAY_ALWAYS_OPTIONAL
#define ompt_callback_mutex_acquire_implemented ompt_event_MAY_ALWAYS_OPTIONAL
#define ompt_callback_mutex_acquired_implemented ompt_event_MAY_ALWAYS_OPTIONAL
#define ompt_callback_nest_lock_implemented ompt_event_MAY_ALWAYS_OPTIONAL
#define ompt_callback_flush_implemented ompt_event_MAY_ALWAYS_OPTIONAL
#define ompt_callback_cancel_implemented ompt_event_MAY_ALWAYS_OPTIONAL
#define ompt_callback_reduction_implemented ompt_event_UNIMPLEMENTED
#define ompt_callback_dispatch_implemented ompt_event_UNIMPLEMENTED
#endif


@ -0,0 +1,735 @@
/*
* ompt-general.cpp -- OMPT implementation of interface functions
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
/*****************************************************************************
* system include files
****************************************************************************/
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#if KMP_OS_UNIX
#include <dlfcn.h>
#endif
/*****************************************************************************
* ompt include files
****************************************************************************/
#include "ompt-specific.cpp"
/*****************************************************************************
* macros
****************************************************************************/
#define ompt_get_callback_success 1
#define ompt_get_callback_failure 0
#define no_tool_present 0
#define OMPT_API_ROUTINE static
#ifndef OMPT_STR_MATCH
#define OMPT_STR_MATCH(haystack, needle) (!strcasecmp(haystack, needle))
#endif
/*****************************************************************************
* types
****************************************************************************/
typedef struct {
const char *state_name;
ompt_state_t state_id;
} ompt_state_info_t;
typedef struct {
const char *name;
kmp_mutex_impl_t id;
} kmp_mutex_impl_info_t;
enum tool_setting_e {
omp_tool_error,
omp_tool_unset,
omp_tool_disabled,
omp_tool_enabled
};
/*****************************************************************************
* global variables
****************************************************************************/
ompt_callbacks_active_t ompt_enabled;
ompt_state_info_t ompt_state_info[] = {
#define ompt_state_macro(state, code) {#state, state},
FOREACH_OMPT_STATE(ompt_state_macro)
#undef ompt_state_macro
};
kmp_mutex_impl_info_t kmp_mutex_impl_info[] = {
#define kmp_mutex_impl_macro(name, id) {#name, name},
FOREACH_KMP_MUTEX_IMPL(kmp_mutex_impl_macro)
#undef kmp_mutex_impl_macro
};
ompt_callbacks_internal_t ompt_callbacks;
static ompt_start_tool_result_t *ompt_start_tool_result = NULL;
/*****************************************************************************
* forward declarations
****************************************************************************/
static ompt_interface_fn_t ompt_fn_lookup(const char *s);
OMPT_API_ROUTINE ompt_data_t *ompt_get_thread_data(void);
/*****************************************************************************
* initialization and finalization (private operations)
****************************************************************************/
typedef ompt_start_tool_result_t *(*ompt_start_tool_t)(unsigned int,
const char *);
#if KMP_OS_DARWIN
// While Darwin supports weak symbols, the library that wishes to provide a new
// implementation has to link against this runtime which defeats the purpose
// of having tools that are agnostic of the underlying runtime implementation.
//
// Fortunately, the linker includes all symbols of an executable in the global
// symbol table by default so dlsym() even finds static implementations of
// ompt_start_tool. For this to work on Linux, -Wl,--export-dynamic needs to be
// passed when building the application which we don't want to rely on.
static ompt_start_tool_result_t *ompt_tool_darwin(unsigned int omp_version,
const char *runtime_version) {
ompt_start_tool_result_t *ret = NULL;
// Search symbol in the current address space.
ompt_start_tool_t start_tool =
(ompt_start_tool_t)dlsym(RTLD_DEFAULT, "ompt_start_tool");
if (start_tool) {
ret = start_tool(omp_version, runtime_version);
}
return ret;
}
#elif OMPT_HAVE_WEAK_ATTRIBUTE
// On Unix-like systems that support weak symbols the following implementation
// of ompt_start_tool() will be used in case no tool-supplied implementation of
// this function is present in the address space of a process.
_OMP_EXTERN OMPT_WEAK_ATTRIBUTE ompt_start_tool_result_t *
ompt_start_tool(unsigned int omp_version, const char *runtime_version) {
ompt_start_tool_result_t *ret = NULL;
// Search next symbol in the current address space. This can happen if the
// runtime library is linked before the tool. Since glibc 2.2 strong symbols
// don't override weak symbols that have been found before unless the user
// sets the environment variable LD_DYNAMIC_WEAK.
ompt_start_tool_t next_tool =
(ompt_start_tool_t)dlsym(RTLD_NEXT, "ompt_start_tool");
if (next_tool) {
ret = next_tool(omp_version, runtime_version);
}
return ret;
}
#elif OMPT_HAVE_PSAPI
// On Windows, the ompt_tool_windows function is used to find the
// ompt_start_tool symbol across all modules loaded by a process. If
// ompt_start_tool is found, ompt_start_tool's return value is used to
// initialize the tool. Otherwise, NULL is returned and OMPT won't be enabled.
#include <psapi.h>
#pragma comment(lib, "psapi.lib")
// The number of loaded modules to start enumeration with EnumProcessModules()
#define NUM_MODULES 128
static ompt_start_tool_result_t *
ompt_tool_windows(unsigned int omp_version, const char *runtime_version) {
int i;
DWORD needed, new_size;
HMODULE *modules;
HANDLE process = GetCurrentProcess();
modules = (HMODULE *)malloc(NUM_MODULES * sizeof(HMODULE));
ompt_start_tool_t ompt_tool_p = NULL;
#if OMPT_DEBUG
printf("ompt_tool_windows(): looking for ompt_start_tool\n");
#endif
if (!EnumProcessModules(process, modules, NUM_MODULES * sizeof(HMODULE),
&needed)) {
// Regardless of the error reason use the stub initialization function
free(modules);
return NULL;
}
// Check if NUM_MODULES is enough to list all modules
new_size = needed / sizeof(HMODULE);
if (new_size > NUM_MODULES) {
#if OMPT_DEBUG
printf("ompt_tool_windows(): resize buffer to %d bytes\n", needed);
#endif
modules = (HMODULE *)realloc(modules, needed);
// If resizing failed use the stub function.
if (!EnumProcessModules(process, modules, needed, &needed)) {
free(modules);
return NULL;
}
}
for (i = 0; i < new_size; ++i) {
(FARPROC &)ompt_tool_p = GetProcAddress(modules[i], "ompt_start_tool");
if (ompt_tool_p) {
#if OMPT_DEBUG
TCHAR modName[MAX_PATH];
if (GetModuleFileName(modules[i], modName, MAX_PATH))
printf("ompt_tool_windows(): ompt_start_tool found in module %s\n",
modName);
#endif
free(modules);
return (*ompt_tool_p)(omp_version, runtime_version);
}
#if OMPT_DEBUG
else {
TCHAR modName[MAX_PATH];
if (GetModuleFileName(modules[i], modName, MAX_PATH))
printf("ompt_tool_windows(): ompt_start_tool not found in module %s\n",
modName);
}
#endif
}
free(modules);
return NULL;
}
#else
#error Activation of OMPT is not supported on this platform.
#endif
static ompt_start_tool_result_t *
ompt_try_start_tool(unsigned int omp_version, const char *runtime_version) {
ompt_start_tool_result_t *ret = NULL;
ompt_start_tool_t start_tool = NULL;
#if KMP_OS_WINDOWS
// Cannot use colon to describe a list of absolute paths on Windows
const char *sep = ";";
#else
const char *sep = ":";
#endif
#if KMP_OS_DARWIN
// Try in the current address space
ret = ompt_tool_darwin(omp_version, runtime_version);
#elif OMPT_HAVE_WEAK_ATTRIBUTE
ret = ompt_start_tool(omp_version, runtime_version);
#elif OMPT_HAVE_PSAPI
ret = ompt_tool_windows(omp_version, runtime_version);
#else
#error Activation of OMPT is not supported on this platform.
#endif
if (ret)
return ret;
// Try tool-libraries-var ICV
const char *tool_libs = getenv("OMP_TOOL_LIBRARIES");
if (tool_libs) {
char *libs = __kmp_str_format("%s", tool_libs);
char *buf;
char *fname = __kmp_str_token(libs, sep, &buf);
while (fname) {
#if KMP_OS_UNIX
void *h = dlopen(fname, RTLD_LAZY);
if (h) {
start_tool = (ompt_start_tool_t)dlsym(h, "ompt_start_tool");
#elif KMP_OS_WINDOWS
HMODULE h = LoadLibrary(fname);
if (h) {
start_tool = (ompt_start_tool_t)GetProcAddress(h, "ompt_start_tool");
#else
#error Activation of OMPT is not supported on this platform.
#endif
if (start_tool && (ret = (*start_tool)(omp_version, runtime_version)))
break;
}
fname = __kmp_str_token(NULL, sep, &buf);
}
__kmp_str_free(&libs);
}
return ret;
}
void ompt_pre_init() {
//--------------------------------------------------
// Execute the pre-initialization logic only once.
//--------------------------------------------------
static int ompt_pre_initialized = 0;
if (ompt_pre_initialized)
return;
ompt_pre_initialized = 1;
//--------------------------------------------------
// Use a tool iff a tool is enabled and available.
//--------------------------------------------------
const char *ompt_env_var = getenv("OMP_TOOL");
tool_setting_e tool_setting = omp_tool_error;
if (!ompt_env_var || !strcmp(ompt_env_var, ""))
tool_setting = omp_tool_unset;
else if (OMPT_STR_MATCH(ompt_env_var, "disabled"))
tool_setting = omp_tool_disabled;
else if (OMPT_STR_MATCH(ompt_env_var, "enabled"))
tool_setting = omp_tool_enabled;
#if OMPT_DEBUG
printf("ompt_pre_init(): tool_setting = %d\n", tool_setting);
#endif
switch (tool_setting) {
case omp_tool_disabled:
break;
case omp_tool_unset:
case omp_tool_enabled:
//--------------------------------------------------
// Load tool iff specified in environment variable
//--------------------------------------------------
ompt_start_tool_result =
ompt_try_start_tool(__kmp_openmp_version, ompt_get_runtime_version());
memset(&ompt_enabled, 0, sizeof(ompt_enabled));
break;
case omp_tool_error:
fprintf(stderr, "Warning: OMP_TOOL has invalid value \"%s\".\n"
" legal values are (NULL,\"\",\"disabled\","
"\"enabled\").\n",
ompt_env_var);
break;
}
#if OMPT_DEBUG
printf("ompt_pre_init(): ompt_enabled = %d\n", ompt_enabled.enabled);
#endif
}
extern "C" int omp_get_initial_device(void);
void ompt_post_init() {
//--------------------------------------------------
// Execute the post-initialization logic only once.
//--------------------------------------------------
static int ompt_post_initialized = 0;
if (ompt_post_initialized)
return;
ompt_post_initialized = 1;
//--------------------------------------------------
// Initialize the tool if so indicated.
//--------------------------------------------------
if (ompt_start_tool_result) {
ompt_enabled.enabled = !!ompt_start_tool_result->initialize(
ompt_fn_lookup, omp_get_initial_device(), &(ompt_start_tool_result->tool_data));
if (!ompt_enabled.enabled) {
// tool not enabled, zero out the bitmap, and done
memset(&ompt_enabled, 0, sizeof(ompt_enabled));
return;
}
kmp_info_t *root_thread = ompt_get_thread();
ompt_set_thread_state(root_thread, ompt_state_overhead);
if (ompt_enabled.ompt_callback_thread_begin) {
ompt_callbacks.ompt_callback(ompt_callback_thread_begin)(
ompt_thread_initial, __ompt_get_thread_data_internal());
}
ompt_data_t *task_data;
__ompt_get_task_info_internal(0, NULL, &task_data, NULL, NULL, NULL);
if (ompt_enabled.ompt_callback_task_create) {
ompt_callbacks.ompt_callback(ompt_callback_task_create)(
NULL, NULL, task_data, ompt_task_initial, 0, NULL);
}
ompt_set_thread_state(root_thread, ompt_state_work_serial);
}
}
void ompt_fini() {
if (ompt_enabled.enabled) {
ompt_start_tool_result->finalize(&(ompt_start_tool_result->tool_data));
}
memset(&ompt_enabled, 0, sizeof(ompt_enabled));
}
/*****************************************************************************
* interface operations
****************************************************************************/
/*****************************************************************************
* state
****************************************************************************/
OMPT_API_ROUTINE int ompt_enumerate_states(int current_state, int *next_state,
const char **next_state_name) {
const static int len = sizeof(ompt_state_info) / sizeof(ompt_state_info_t);
int i = 0;
for (i = 0; i < len - 1; i++) {
if (ompt_state_info[i].state_id == current_state) {
*next_state = ompt_state_info[i + 1].state_id;
*next_state_name = ompt_state_info[i + 1].state_name;
return 1;
}
}
return 0;
}
OMPT_API_ROUTINE int ompt_enumerate_mutex_impls(int current_impl,
int *next_impl,
const char **next_impl_name) {
const static int len =
sizeof(kmp_mutex_impl_info) / sizeof(kmp_mutex_impl_info_t);
int i = 0;
for (i = 0; i < len - 1; i++) {
if (kmp_mutex_impl_info[i].id != current_impl)
continue;
*next_impl = kmp_mutex_impl_info[i + 1].id;
*next_impl_name = kmp_mutex_impl_info[i + 1].name;
return 1;
}
return 0;
}
/*****************************************************************************
* callbacks
****************************************************************************/
OMPT_API_ROUTINE ompt_set_result_t ompt_set_callback(ompt_callbacks_t which,
ompt_callback_t callback) {
switch (which) {
#define ompt_event_macro(event_name, callback_type, event_id) \
case event_name: \
if (ompt_event_implementation_status(event_name)) { \
ompt_callbacks.ompt_callback(event_name) = (callback_type)callback; \
ompt_enabled.event_name = (callback != 0); \
} \
if (callback) \
return ompt_event_implementation_status(event_name); \
else \
return ompt_set_always;
FOREACH_OMPT_EVENT(ompt_event_macro)
#undef ompt_event_macro
default:
return ompt_set_error;
}
}
OMPT_API_ROUTINE int ompt_get_callback(ompt_callbacks_t which,
ompt_callback_t *callback) {
if (!ompt_enabled.enabled)
return ompt_get_callback_failure;
switch (which) {
#define ompt_event_macro(event_name, callback_type, event_id) \
case event_name: \
if (ompt_event_implementation_status(event_name)) { \
ompt_callback_t mycb = \
(ompt_callback_t)ompt_callbacks.ompt_callback(event_name); \
if (ompt_enabled.event_name && mycb) { \
*callback = mycb; \
return ompt_get_callback_success; \
} \
} \
return ompt_get_callback_failure;
FOREACH_OMPT_EVENT(ompt_event_macro)
#undef ompt_event_macro
default:
return ompt_get_callback_failure;
}
}
/*****************************************************************************
* parallel regions
****************************************************************************/
OMPT_API_ROUTINE int ompt_get_parallel_info(int ancestor_level,
ompt_data_t **parallel_data,
int *team_size) {
if (!ompt_enabled.enabled)
return 0;
return __ompt_get_parallel_info_internal(ancestor_level, parallel_data,
team_size);
}
OMPT_API_ROUTINE int ompt_get_state(ompt_wait_id_t *wait_id) {
if (!ompt_enabled.enabled)
return ompt_state_work_serial;
int thread_state = __ompt_get_state_internal(wait_id);
if (thread_state == ompt_state_undefined) {
thread_state = ompt_state_work_serial;
}
return thread_state;
}
/*****************************************************************************
* tasks
****************************************************************************/
OMPT_API_ROUTINE ompt_data_t *ompt_get_thread_data(void) {
if (!ompt_enabled.enabled)
return NULL;
return __ompt_get_thread_data_internal();
}
OMPT_API_ROUTINE int ompt_get_task_info(int ancestor_level, int *type,
ompt_data_t **task_data,
ompt_frame_t **task_frame,
ompt_data_t **parallel_data,
int *thread_num) {
if (!ompt_enabled.enabled)
return 0;
return __ompt_get_task_info_internal(ancestor_level, type, task_data,
task_frame, parallel_data, thread_num);
}
OMPT_API_ROUTINE int ompt_get_task_memory(void **addr, size_t *size,
int block) {
// stub
return 0;
}
/*****************************************************************************
* num_procs
****************************************************************************/
OMPT_API_ROUTINE int ompt_get_num_procs(void) {
// copied from kmp_ftn_entry.h (but modified: OMPT can only be called when
// runtime is initialized)
return __kmp_avail_proc;
}
/*****************************************************************************
* places
****************************************************************************/
OMPT_API_ROUTINE int ompt_get_num_places(void) {
// copied from kmp_ftn_entry.h (but modified)
#if !KMP_AFFINITY_SUPPORTED
return 0;
#else
if (!KMP_AFFINITY_CAPABLE())
return 0;
return __kmp_affinity_num_masks;
#endif
}
OMPT_API_ROUTINE int ompt_get_place_proc_ids(int place_num, int ids_size,
int *ids) {
// copied from kmp_ftn_entry.h (but modified)
#if !KMP_AFFINITY_SUPPORTED
return 0;
#else
int i, count;
int tmp_ids[ids_size];
if (!KMP_AFFINITY_CAPABLE())
return 0;
if (place_num < 0 || place_num >= (int)__kmp_affinity_num_masks)
return 0;
/* TODO: Is this safe for asynchronous call from signal handler during runtime
* shutdown? */
kmp_affin_mask_t *mask = KMP_CPU_INDEX(__kmp_affinity_masks, place_num);
count = 0;
KMP_CPU_SET_ITERATE(i, mask) {
if ((!KMP_CPU_ISSET(i, __kmp_affin_fullMask)) ||
(!KMP_CPU_ISSET(i, mask))) {
continue;
}
if (count < ids_size)
tmp_ids[count] = i;
count++;
}
if (ids_size >= count) {
for (i = 0; i < count; i++) {
ids[i] = tmp_ids[i];
}
}
return count;
#endif
}
OMPT_API_ROUTINE int ompt_get_place_num(void) {
// copied from kmp_ftn_entry.h (but modified)
#if !KMP_AFFINITY_SUPPORTED
return -1;
#else
if (!ompt_enabled.enabled || __kmp_get_gtid() < 0)
return -1;
int gtid;
kmp_info_t *thread;
if (!KMP_AFFINITY_CAPABLE())
return -1;
gtid = __kmp_entry_gtid();
thread = __kmp_thread_from_gtid(gtid);
if (thread == NULL || thread->th.th_current_place < 0)
return -1;
return thread->th.th_current_place;
#endif
}
OMPT_API_ROUTINE int ompt_get_partition_place_nums(int place_nums_size,
int *place_nums) {
// copied from kmp_ftn_entry.h (but modified)
#if !KMP_AFFINITY_SUPPORTED
return 0;
#else
if (!ompt_enabled.enabled || __kmp_get_gtid() < 0)
return 0;
int i, gtid, place_num, first_place, last_place, start, end;
kmp_info_t *thread;
if (!KMP_AFFINITY_CAPABLE())
return 0;
gtid = __kmp_entry_gtid();
thread = __kmp_thread_from_gtid(gtid);
if (thread == NULL)
return 0;
first_place = thread->th.th_first_place;
last_place = thread->th.th_last_place;
if (first_place < 0 || last_place < 0)
return 0;
if (first_place <= last_place) {
start = first_place;
end = last_place;
} else {
start = last_place;
end = first_place;
}
if (end - start + 1 <= place_nums_size)
for (i = 0, place_num = start; place_num <= end; ++place_num, ++i) {
place_nums[i] = place_num;
}
return end - start + 1;
#endif
}
/*****************************************************************************
* places
****************************************************************************/
OMPT_API_ROUTINE int ompt_get_proc_id(void) {
if (!ompt_enabled.enabled || __kmp_get_gtid() < 0)
return -1;
#if KMP_OS_LINUX
return sched_getcpu();
#elif KMP_OS_WINDOWS
PROCESSOR_NUMBER pn;
GetCurrentProcessorNumberEx(&pn);
return 64 * pn.Group + pn.Number;
#else
return -1;
#endif
}
/*****************************************************************************
* compatibility
****************************************************************************/
/*
* Currently unused function
OMPT_API_ROUTINE int ompt_get_ompt_version() { return OMPT_VERSION; }
*/
/*****************************************************************************
* application-facing API
****************************************************************************/
/*----------------------------------------------------------------------------
| control
---------------------------------------------------------------------------*/
int __kmp_control_tool(uint64_t command, uint64_t modifier, void *arg) {
if (ompt_enabled.enabled) {
if (ompt_enabled.ompt_callback_control_tool) {
return ompt_callbacks.ompt_callback(ompt_callback_control_tool)(
command, modifier, arg, OMPT_LOAD_RETURN_ADDRESS(__kmp_entry_gtid()));
} else {
return -1;
}
} else {
return -2;
}
}
/*****************************************************************************
* misc
****************************************************************************/
OMPT_API_ROUTINE uint64_t ompt_get_unique_id(void) {
return __ompt_get_unique_id_internal();
}
OMPT_API_ROUTINE void ompt_finalize_tool(void) {
// stub
}
/*****************************************************************************
* Target
****************************************************************************/
OMPT_API_ROUTINE int ompt_get_target_info(uint64_t *device_num,
ompt_id_t *target_id,
ompt_id_t *host_op_id) {
return 0; // thread is not in a target region
}
OMPT_API_ROUTINE int ompt_get_num_devices(void) {
return 1; // only one device (the current device) is available
}
/*****************************************************************************
* API inquiry for tool
****************************************************************************/
static ompt_interface_fn_t ompt_fn_lookup(const char *s) {
#define ompt_interface_fn(fn) \
fn##_t fn##_f = fn; \
if (strcmp(s, #fn) == 0) \
return (ompt_interface_fn_t)fn##_f;
FOREACH_OMPT_INQUIRY_FN(ompt_interface_fn)
return (ompt_interface_fn_t)0;
}

runtime/src/ompt-internal.h (new file, 129 lines)
/*
* ompt-internal.h - header of OMPT internal data structures
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef __OMPT_INTERNAL_H__
#define __OMPT_INTERNAL_H__
#include "ompt-event-specific.h"
#include "omp-tools.h"
#define OMPT_VERSION 1
#define _OMP_EXTERN extern "C"
#define OMPT_INVOKER(x) \
((x == fork_context_gnu) ? ompt_parallel_invoker_program \
: ompt_parallel_invoker_runtime)
#define ompt_callback(e) e##_callback
typedef struct ompt_callbacks_internal_s {
#define ompt_event_macro(event, callback, eventid) \
callback ompt_callback(event);
FOREACH_OMPT_EVENT(ompt_event_macro)
#undef ompt_event_macro
} ompt_callbacks_internal_t;
typedef struct ompt_callbacks_active_s {
unsigned int enabled : 1;
#define ompt_event_macro(event, callback, eventid) unsigned int event : 1;
FOREACH_OMPT_EVENT(ompt_event_macro)
#undef ompt_event_macro
} ompt_callbacks_active_t;
#define TASK_TYPE_DETAILS_FORMAT(info) \
((info->td_flags.task_serial || info->td_flags.tasking_ser) \
? ompt_task_undeferred \
: 0x0) | \
((!(info->td_flags.tiedness)) ? ompt_task_untied : 0x0) | \
(info->td_flags.final ? ompt_task_final : 0x0) | \
(info->td_flags.merged_if0 ? ompt_task_mergeable : 0x0)
typedef struct {
ompt_frame_t frame;
ompt_data_t task_data;
struct kmp_taskdata *scheduling_parent;
int thread_num;
#if OMP_40_ENABLED
int ndeps;
ompt_dependence_t *deps;
#endif /* OMP_40_ENABLED */
} ompt_task_info_t;
typedef struct {
ompt_data_t parallel_data;
void *master_return_address;
} ompt_team_info_t;
typedef struct ompt_lw_taskteam_s {
ompt_team_info_t ompt_team_info;
ompt_task_info_t ompt_task_info;
int heap;
struct ompt_lw_taskteam_s *parent;
} ompt_lw_taskteam_t;
typedef struct {
ompt_data_t thread_data;
ompt_data_t task_data; /* stored here from implicit barrier-begin until
implicit-task-end */
void *return_address; /* stored here on entry of runtime */
ompt_state_t state;
ompt_wait_id_t wait_id;
int ompt_task_yielded;
void *idle_frame;
} ompt_thread_info_t;
extern ompt_callbacks_internal_t ompt_callbacks;
#if OMP_40_ENABLED && OMPT_SUPPORT && OMPT_OPTIONAL
#if USE_FAST_MEMORY
#define KMP_OMPT_DEPS_ALLOC __kmp_fast_allocate
#define KMP_OMPT_DEPS_FREE __kmp_fast_free
#else
#define KMP_OMPT_DEPS_ALLOC __kmp_thread_malloc
#define KMP_OMPT_DEPS_FREE __kmp_thread_free
#endif
#endif /* OMP_40_ENABLED && OMPT_SUPPORT && OMPT_OPTIONAL */
#ifdef __cplusplus
extern "C" {
#endif
void ompt_pre_init(void);
void ompt_post_init(void);
void ompt_fini(void);
#define OMPT_GET_RETURN_ADDRESS(level) __builtin_return_address(level)
#define OMPT_GET_FRAME_ADDRESS(level) __builtin_frame_address(level)
int __kmp_control_tool(uint64_t command, uint64_t modifier, void *arg);
extern ompt_callbacks_active_t ompt_enabled;
#if KMP_OS_WINDOWS
#define UNLIKELY(x) (x)
#define OMPT_NOINLINE __declspec(noinline)
#else
#define UNLIKELY(x) __builtin_expect(!!(x), 0)
#define OMPT_NOINLINE __attribute__((noinline))
#endif
#ifdef __cplusplus
};
#endif
#endif

runtime/src/ompt-specific.cpp (new file, 451 lines)
/*
* ompt-specific.cpp -- OMPT internal functions
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
//******************************************************************************
// include files
//******************************************************************************
#include "kmp.h"
#include "ompt-specific.h"
#if KMP_OS_UNIX
#include <dlfcn.h>
#endif
#if KMP_OS_WINDOWS
#define THREAD_LOCAL __declspec(thread)
#else
#define THREAD_LOCAL __thread
#endif
#define OMPT_WEAK_ATTRIBUTE KMP_WEAK_ATTRIBUTE
//******************************************************************************
// macros
//******************************************************************************
#define LWT_FROM_TEAM(team) (team)->t.ompt_serialized_team_info
#define OMPT_THREAD_ID_BITS 16
//******************************************************************************
// private operations
//******************************************************************************
//----------------------------------------------------------
// traverse the team and task hierarchy
// note: __ompt_get_teaminfo and __ompt_get_task_info_object
// traverse the hierarchy similarly and need to be
// kept consistent
//----------------------------------------------------------
ompt_team_info_t *__ompt_get_teaminfo(int depth, int *size) {
kmp_info_t *thr = ompt_get_thread();
if (thr) {
kmp_team *team = thr->th.th_team;
if (team == NULL)
return NULL;
ompt_lw_taskteam_t *next_lwt = LWT_FROM_TEAM(team), *lwt = NULL;
while (depth > 0) {
// next lightweight team (if any)
if (lwt)
lwt = lwt->parent;
// next heavyweight team (if any) after
// lightweight teams are exhausted
if (!lwt && team) {
if (next_lwt) {
lwt = next_lwt;
next_lwt = NULL;
} else {
team = team->t.t_parent;
if (team) {
next_lwt = LWT_FROM_TEAM(team);
}
}
}
depth--;
}
if (lwt) {
// lightweight teams have one task
if (size)
*size = 1;
// return team info for lightweight team
return &lwt->ompt_team_info;
} else if (team) {
// extract size from heavyweight team
if (size)
*size = team->t.t_nproc;
// return team info for heavyweight team
return &team->t.ompt_team_info;
}
}
return NULL;
}
ompt_task_info_t *__ompt_get_task_info_object(int depth) {
ompt_task_info_t *info = NULL;
kmp_info_t *thr = ompt_get_thread();
if (thr) {
kmp_taskdata_t *taskdata = thr->th.th_current_task;
ompt_lw_taskteam_t *lwt = NULL,
*next_lwt = LWT_FROM_TEAM(taskdata->td_team);
while (depth > 0) {
// next lightweight team (if any)
if (lwt)
lwt = lwt->parent;
// next heavyweight team (if any) after
// lightweight teams are exhausted
if (!lwt && taskdata) {
if (next_lwt) {
lwt = next_lwt;
next_lwt = NULL;
} else {
taskdata = taskdata->td_parent;
if (taskdata) {
next_lwt = LWT_FROM_TEAM(taskdata->td_team);
}
}
}
depth--;
}
if (lwt) {
info = &lwt->ompt_task_info;
} else if (taskdata) {
info = &taskdata->ompt_task_info;
}
}
return info;
}
ompt_task_info_t *__ompt_get_scheduling_taskinfo(int depth) {
ompt_task_info_t *info = NULL;
kmp_info_t *thr = ompt_get_thread();
if (thr) {
kmp_taskdata_t *taskdata = thr->th.th_current_task;
ompt_lw_taskteam_t *lwt = NULL,
*next_lwt = LWT_FROM_TEAM(taskdata->td_team);
while (depth > 0) {
// next lightweight team (if any)
if (lwt)
lwt = lwt->parent;
// next heavyweight team (if any) after
// lightweight teams are exhausted
if (!lwt && taskdata) {
// first try scheduling parent (for explicit task scheduling)
if (taskdata->ompt_task_info.scheduling_parent) {
taskdata = taskdata->ompt_task_info.scheduling_parent;
} else if (next_lwt) {
lwt = next_lwt;
next_lwt = NULL;
} else {
// then go for implicit tasks
taskdata = taskdata->td_parent;
if (taskdata) {
next_lwt = LWT_FROM_TEAM(taskdata->td_team);
}
}
}
depth--;
}
if (lwt) {
info = &lwt->ompt_task_info;
} else if (taskdata) {
info = &taskdata->ompt_task_info;
}
}
return info;
}
//******************************************************************************
// interface operations
//******************************************************************************
//----------------------------------------------------------
// thread support
//----------------------------------------------------------
ompt_data_t *__ompt_get_thread_data_internal() {
if (__kmp_get_gtid() >= 0) {
kmp_info_t *thread = ompt_get_thread();
if (thread == NULL)
return NULL;
return &(thread->th.ompt_thread_info.thread_data);
}
return NULL;
}
//----------------------------------------------------------
// state support
//----------------------------------------------------------
void __ompt_thread_assign_wait_id(void *variable) {
kmp_info_t *ti = ompt_get_thread();
ti->th.ompt_thread_info.wait_id = (ompt_wait_id_t)variable;
}
int __ompt_get_state_internal(ompt_wait_id_t *omp_wait_id) {
kmp_info_t *ti = ompt_get_thread();
if (ti) {
if (omp_wait_id)
*omp_wait_id = ti->th.ompt_thread_info.wait_id;
return ti->th.ompt_thread_info.state;
}
return ompt_state_undefined;
}
//----------------------------------------------------------
// parallel region support
//----------------------------------------------------------
int __ompt_get_parallel_info_internal(int ancestor_level,
ompt_data_t **parallel_data,
int *team_size) {
if (__kmp_get_gtid() >= 0) {
ompt_team_info_t *info;
if (team_size) {
info = __ompt_get_teaminfo(ancestor_level, team_size);
} else {
info = __ompt_get_teaminfo(ancestor_level, NULL);
}
if (parallel_data) {
*parallel_data = info ? &(info->parallel_data) : NULL;
}
return info ? 2 : 0;
} else {
return 0;
}
}
//----------------------------------------------------------
// lightweight task team support
//----------------------------------------------------------
void __ompt_lw_taskteam_init(ompt_lw_taskteam_t *lwt, kmp_info_t *thr, int gtid,
ompt_data_t *ompt_pid, void *codeptr) {
// initialize parallel_data with input, return address to parallel_data on
// exit
lwt->ompt_team_info.parallel_data = *ompt_pid;
lwt->ompt_team_info.master_return_address = codeptr;
lwt->ompt_task_info.task_data.value = 0;
lwt->ompt_task_info.frame.enter_frame = ompt_data_none;
lwt->ompt_task_info.frame.exit_frame = ompt_data_none;
lwt->ompt_task_info.scheduling_parent = NULL;
lwt->ompt_task_info.deps = NULL;
lwt->ompt_task_info.ndeps = 0;
lwt->heap = 0;
lwt->parent = 0;
}
void __ompt_lw_taskteam_link(ompt_lw_taskteam_t *lwt, kmp_info_t *thr,
int on_heap) {
ompt_lw_taskteam_t *link_lwt = lwt;
if (thr->th.th_team->t.t_serialized >
1) { // we already have a team, so link the new team and swap values
if (on_heap) { // the lw_taskteam cannot stay on stack, allocate it on heap
link_lwt =
(ompt_lw_taskteam_t *)__kmp_allocate(sizeof(ompt_lw_taskteam_t));
}
link_lwt->heap = on_heap;
// in the on_stack case (link_lwt == lwt) this sequence amounts to a swap
ompt_team_info_t tmp_team = lwt->ompt_team_info;
link_lwt->ompt_team_info = *OMPT_CUR_TEAM_INFO(thr);
*OMPT_CUR_TEAM_INFO(thr) = tmp_team;
ompt_task_info_t tmp_task = lwt->ompt_task_info;
link_lwt->ompt_task_info = *OMPT_CUR_TASK_INFO(thr);
*OMPT_CUR_TASK_INFO(thr) = tmp_task;
// link the taskteam into the list of taskteams:
ompt_lw_taskteam_t *my_parent =
thr->th.th_team->t.ompt_serialized_team_info;
link_lwt->parent = my_parent;
thr->th.th_team->t.ompt_serialized_team_info = link_lwt;
} else {
// this is the first serialized team, so we just store the values in the
// team and drop the taskteam-object
*OMPT_CUR_TEAM_INFO(thr) = lwt->ompt_team_info;
*OMPT_CUR_TASK_INFO(thr) = lwt->ompt_task_info;
}
}
void __ompt_lw_taskteam_unlink(kmp_info_t *thr) {
ompt_lw_taskteam_t *lwtask = thr->th.th_team->t.ompt_serialized_team_info;
if (lwtask) {
thr->th.th_team->t.ompt_serialized_team_info = lwtask->parent;
ompt_team_info_t tmp_team = lwtask->ompt_team_info;
lwtask->ompt_team_info = *OMPT_CUR_TEAM_INFO(thr);
*OMPT_CUR_TEAM_INFO(thr) = tmp_team;
ompt_task_info_t tmp_task = lwtask->ompt_task_info;
lwtask->ompt_task_info = *OMPT_CUR_TASK_INFO(thr);
*OMPT_CUR_TASK_INFO(thr) = tmp_task;
if (lwtask->heap) {
__kmp_free(lwtask);
lwtask = NULL;
}
}
// return lwtask;
}
//----------------------------------------------------------
// task support
//----------------------------------------------------------
int __ompt_get_task_info_internal(int ancestor_level, int *type,
ompt_data_t **task_data,
ompt_frame_t **task_frame,
ompt_data_t **parallel_data,
int *thread_num) {
if (__kmp_get_gtid() < 0)
return 0;
if (ancestor_level < 0)
return 0;
// copied from __ompt_get_scheduling_taskinfo
ompt_task_info_t *info = NULL;
ompt_team_info_t *team_info = NULL;
kmp_info_t *thr = ompt_get_thread();
int level = ancestor_level;
if (thr) {
kmp_taskdata_t *taskdata = thr->th.th_current_task;
if (taskdata == NULL)
return 0;
kmp_team *team = thr->th.th_team, *prev_team = NULL;
if (team == NULL)
return 0;
ompt_lw_taskteam_t *lwt = NULL,
*next_lwt = LWT_FROM_TEAM(taskdata->td_team),
*prev_lwt = NULL;
while (ancestor_level > 0) {
// needed for thread_num
prev_team = team;
prev_lwt = lwt;
// next lightweight team (if any)
if (lwt)
lwt = lwt->parent;
// next heavyweight team (if any) after
// lightweight teams are exhausted
if (!lwt && taskdata) {
// first try scheduling parent (for explicit task scheduling)
if (taskdata->ompt_task_info.scheduling_parent) {
taskdata = taskdata->ompt_task_info.scheduling_parent;
} else if (next_lwt) {
lwt = next_lwt;
next_lwt = NULL;
} else {
// then go for implicit tasks
taskdata = taskdata->td_parent;
if (team == NULL)
return 0;
team = team->t.t_parent;
if (taskdata) {
next_lwt = LWT_FROM_TEAM(taskdata->td_team);
}
}
}
ancestor_level--;
}
if (lwt) {
info = &lwt->ompt_task_info;
team_info = &lwt->ompt_team_info;
if (type) {
*type = ompt_task_implicit;
}
} else if (taskdata) {
info = &taskdata->ompt_task_info;
team_info = &team->t.ompt_team_info;
if (type) {
if (taskdata->td_parent) {
*type = (taskdata->td_flags.tasktype ? ompt_task_explicit
: ompt_task_implicit) |
TASK_TYPE_DETAILS_FORMAT(taskdata);
} else {
*type = ompt_task_initial;
}
}
}
if (task_data) {
*task_data = info ? &info->task_data : NULL;
}
if (task_frame) {
// OpenMP spec asks for the scheduling task to be returned.
*task_frame = info ? &info->frame : NULL;
}
if (parallel_data) {
*parallel_data = team_info ? &(team_info->parallel_data) : NULL;
}
if (thread_num) {
if (level == 0)
*thread_num = __kmp_get_tid();
else if (prev_lwt)
*thread_num = 0;
else
*thread_num = prev_team->t.t_master_tid;
// *thread_num = team->t.t_master_tid;
}
return info ? 2 : 0;
}
return 0;
}
//----------------------------------------------------------
// team support
//----------------------------------------------------------
void __ompt_team_assign_id(kmp_team_t *team, ompt_data_t ompt_pid) {
team->t.ompt_team_info.parallel_data = ompt_pid;
}
//----------------------------------------------------------
// misc
//----------------------------------------------------------
static uint64_t __ompt_get_unique_id_internal() {
static uint64_t thread = 1;
static THREAD_LOCAL uint64_t ID = 0;
if (ID == 0) {
uint64_t new_thread = KMP_TEST_THEN_INC64((kmp_int64 *)&thread);
ID = new_thread << (sizeof(uint64_t) * 8 - OMPT_THREAD_ID_BITS);
}
return ++ID;
}

runtime/src/ompt-specific.h (new file, 104 lines)
/*
* ompt-specific.h - header of OMPT internal functions implementation
*/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef OMPT_SPECIFIC_H
#define OMPT_SPECIFIC_H
#include "kmp.h"
/*****************************************************************************
* forward declarations
****************************************************************************/
void __ompt_team_assign_id(kmp_team_t *team, ompt_data_t ompt_pid);
void __ompt_thread_assign_wait_id(void *variable);
void __ompt_lw_taskteam_init(ompt_lw_taskteam_t *lwt, kmp_info_t *thr,
int gtid, ompt_data_t *ompt_pid, void *codeptr);
void __ompt_lw_taskteam_link(ompt_lw_taskteam_t *lwt, kmp_info_t *thr,
int on_heap);
void __ompt_lw_taskteam_unlink(kmp_info_t *thr);
ompt_team_info_t *__ompt_get_teaminfo(int depth, int *size);
ompt_task_info_t *__ompt_get_task_info_object(int depth);
int __ompt_get_parallel_info_internal(int ancestor_level,
ompt_data_t **parallel_data,
int *team_size);
int __ompt_get_task_info_internal(int ancestor_level, int *type,
ompt_data_t **task_data,
ompt_frame_t **task_frame,
ompt_data_t **parallel_data, int *thread_num);
ompt_data_t *__ompt_get_thread_data_internal();
/*
* Unused currently
static uint64_t __ompt_get_unique_id_internal();
*/
/*****************************************************************************
* macros
****************************************************************************/
#define OMPT_CUR_TASK_INFO(thr) (&(thr->th.th_current_task->ompt_task_info))
#define OMPT_CUR_TASK_DATA(thr) \
(&(thr->th.th_current_task->ompt_task_info.task_data))
#define OMPT_CUR_TEAM_INFO(thr) (&(thr->th.th_team->t.ompt_team_info))
#define OMPT_CUR_TEAM_DATA(thr) \
(&(thr->th.th_team->t.ompt_team_info.parallel_data))
#define OMPT_HAVE_WEAK_ATTRIBUTE KMP_HAVE_WEAK_ATTRIBUTE
#define OMPT_HAVE_PSAPI KMP_HAVE_PSAPI
#define OMPT_STR_MATCH(haystack, needle) __kmp_str_match(haystack, 0, needle)
inline void *__ompt_load_return_address(int gtid) {
kmp_info_t *thr = __kmp_threads[gtid];
void *return_address = thr->th.ompt_thread_info.return_address;
thr->th.ompt_thread_info.return_address = NULL;
return return_address;
}
#define OMPT_STORE_RETURN_ADDRESS(gtid) \
if (ompt_enabled.enabled && gtid >= 0 && __kmp_threads[gtid] && \
!__kmp_threads[gtid]->th.ompt_thread_info.return_address) \
__kmp_threads[gtid]->th.ompt_thread_info.return_address = \
__builtin_return_address(0)
#define OMPT_LOAD_RETURN_ADDRESS(gtid) __ompt_load_return_address(gtid)
//******************************************************************************
// inline functions
//******************************************************************************
inline kmp_info_t *ompt_get_thread_gtid(int gtid) {
return (gtid >= 0) ? __kmp_thread_from_gtid(gtid) : NULL;
}
inline kmp_info_t *ompt_get_thread() {
int gtid = __kmp_get_gtid();
return ompt_get_thread_gtid(gtid);
}
inline void ompt_set_thread_state(kmp_info_t *thread, ompt_state_t state) {
thread->th.ompt_thread_info.state = state;
}
inline const char *ompt_get_runtime_version() {
return &__kmp_version_lib_ver[KMP_VERSION_MAGIC_LEN];
}
#endif

runtime/src/test-touch.c (new file, 31 lines)
// test-touch.c //
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifdef __cplusplus
extern "C" {
#endif
extern double omp_get_wtime();
extern int omp_get_num_threads();
extern int omp_get_max_threads();
#ifdef __cplusplus
}
#endif
int main() {
omp_get_wtime();
omp_get_num_threads();
omp_get_max_threads();
return 0;
}
// end of file //

(unnamed new file, 30 lines)
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#include "ittnotify_config.h"
#if ITT_PLATFORM==ITT_PLATFORM_WIN
#pragma warning (disable: 593) /* parameter "XXXX" was set but never used */
#pragma warning (disable: 344) /* typedef name has already been declared (with same type) */
#pragma warning (disable: 174) /* expression has no effect */
#pragma warning (disable: 4127) /* conditional expression is constant */
#pragma warning (disable: 4306) /* conversion from '?' to '?' of greater size */
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#if defined __INTEL_COMPILER
#pragma warning (disable: 869) /* parameter "XXXXX" was never referenced */
#pragma warning (disable: 1418) /* external function definition with no prior declaration */
#pragma warning (disable: 1419) /* external declaration in primary source file */
#endif /* __INTEL_COMPILER */

File diff suppressed because it is too large.

(unnamed new file, 588 lines)
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.txt for details.
//
//===----------------------------------------------------------------------===//
#ifndef _ITTNOTIFY_CONFIG_H_
#define _ITTNOTIFY_CONFIG_H_
/** @cond exclude_from_documentation */
#ifndef ITT_OS_WIN
# define ITT_OS_WIN 1
#endif /* ITT_OS_WIN */
#ifndef ITT_OS_LINUX
# define ITT_OS_LINUX 2
#endif /* ITT_OS_LINUX */
#ifndef ITT_OS_MAC
# define ITT_OS_MAC 3
#endif /* ITT_OS_MAC */
#ifndef ITT_OS_FREEBSD
# define ITT_OS_FREEBSD 4
#endif /* ITT_OS_FREEBSD */
#ifndef ITT_OS
# if defined WIN32 || defined _WIN32
# define ITT_OS ITT_OS_WIN
# elif defined( __APPLE__ ) && defined( __MACH__ )
# define ITT_OS ITT_OS_MAC
# elif defined( __FreeBSD__ )
# define ITT_OS ITT_OS_FREEBSD
# else
# define ITT_OS ITT_OS_LINUX
# endif
#endif /* ITT_OS */
#ifndef ITT_PLATFORM_WIN
# define ITT_PLATFORM_WIN 1
#endif /* ITT_PLATFORM_WIN */
#ifndef ITT_PLATFORM_POSIX
# define ITT_PLATFORM_POSIX 2
#endif /* ITT_PLATFORM_POSIX */
#ifndef ITT_PLATFORM_MAC
# define ITT_PLATFORM_MAC 3
#endif /* ITT_PLATFORM_MAC */
#ifndef ITT_PLATFORM_FREEBSD
# define ITT_PLATFORM_FREEBSD 4
#endif /* ITT_PLATFORM_FREEBSD */
#ifndef ITT_PLATFORM
# if ITT_OS==ITT_OS_WIN
# define ITT_PLATFORM ITT_PLATFORM_WIN
# elif ITT_OS==ITT_OS_MAC
# define ITT_PLATFORM ITT_PLATFORM_MAC
# elif ITT_OS==ITT_OS_FREEBSD
# define ITT_PLATFORM ITT_PLATFORM_FREEBSD
# else
# define ITT_PLATFORM ITT_PLATFORM_POSIX
# endif
#endif /* ITT_PLATFORM */
#if defined(_UNICODE) && !defined(UNICODE)
#define UNICODE
#endif
#include <stddef.h>
#if ITT_PLATFORM==ITT_PLATFORM_WIN
#include <tchar.h>
#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#include <stdint.h>
#if defined(UNICODE) || defined(_UNICODE)
#include <wchar.h>
#endif /* UNICODE || _UNICODE */
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#ifndef ITTAPI_CDECL
# if ITT_PLATFORM==ITT_PLATFORM_WIN
# define ITTAPI_CDECL __cdecl
# else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
# if defined _M_IX86 || defined __i386__
# define ITTAPI_CDECL __attribute__ ((cdecl))
# else /* _M_IX86 || __i386__ */
# define ITTAPI_CDECL /* actual only on x86 platform */
# endif /* _M_IX86 || __i386__ */
# endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#endif /* ITTAPI_CDECL */
#ifndef STDCALL
# if ITT_PLATFORM==ITT_PLATFORM_WIN
# define STDCALL __stdcall
# else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
# if defined _M_IX86 || defined __i386__
# define STDCALL __attribute__ ((stdcall))
# else /* _M_IX86 || __i386__ */
# define STDCALL /* supported only on x86 platform */
# endif /* _M_IX86 || __i386__ */
# endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#endif /* STDCALL */
#define ITTAPI ITTAPI_CDECL
#define LIBITTAPI ITTAPI_CDECL
/* TODO: Temporary for compatibility! */
#define ITTAPI_CALL ITTAPI_CDECL
#define LIBITTAPI_CALL ITTAPI_CDECL
#if ITT_PLATFORM==ITT_PLATFORM_WIN
/* use __forceinline (VC++ specific) */
#define ITT_INLINE __forceinline
#define ITT_INLINE_ATTRIBUTE /* nothing */
#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
/*
* Generally, functions are not inlined unless optimization is specified.
* For functions declared inline, this attribute inlines the function even
* if no optimization level was specified.
*/
#ifdef __STRICT_ANSI__
#define ITT_INLINE static
#define ITT_INLINE_ATTRIBUTE __attribute__((unused))
#else /* __STRICT_ANSI__ */
#define ITT_INLINE static inline
#define ITT_INLINE_ATTRIBUTE __attribute__((always_inline, unused))
#endif /* __STRICT_ANSI__ */
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
/** @endcond */
#ifndef ITT_ARCH_IA32
# define ITT_ARCH_IA32 1
#endif /* ITT_ARCH_IA32 */
#ifndef ITT_ARCH_IA32E
# define ITT_ARCH_IA32E 2
#endif /* ITT_ARCH_IA32E */
/* Was there a magical reason we didn't have 3 here before? */
#ifndef ITT_ARCH_AARCH64
# define ITT_ARCH_AARCH64 3
#endif /* ITT_ARCH_AARCH64 */
#ifndef ITT_ARCH_ARM
# define ITT_ARCH_ARM 4
#endif /* ITT_ARCH_ARM */
#ifndef ITT_ARCH_PPC64
# define ITT_ARCH_PPC64 5
#endif /* ITT_ARCH_PPC64 */
#ifndef ITT_ARCH_MIPS
# define ITT_ARCH_MIPS 6
#endif /* ITT_ARCH_MIPS */
#ifndef ITT_ARCH_MIPS64
# define ITT_ARCH_MIPS64 7
#endif /* ITT_ARCH_MIPS64 */
#ifndef ITT_ARCH
# if defined _M_IX86 || defined __i386__
# define ITT_ARCH ITT_ARCH_IA32
# elif defined _M_X64 || defined _M_AMD64 || defined __x86_64__
# define ITT_ARCH ITT_ARCH_IA32E
# elif defined _M_IA64 || defined __ia64__
# define ITT_ARCH ITT_ARCH_IA64
# elif defined _M_ARM || defined __arm__
# define ITT_ARCH ITT_ARCH_ARM
# elif defined __powerpc64__
# define ITT_ARCH ITT_ARCH_PPC64
# elif defined __aarch64__
# define ITT_ARCH ITT_ARCH_AARCH64
# elif defined __mips__ && !defined __mips64
# define ITT_ARCH ITT_ARCH_MIPS
# elif defined __mips__ && defined __mips64
# define ITT_ARCH ITT_ARCH_MIPS64
# endif
#endif
#ifdef __cplusplus
# define ITT_EXTERN_C extern "C"
# define ITT_EXTERN_C_BEGIN extern "C" {
# define ITT_EXTERN_C_END }
#else
# define ITT_EXTERN_C /* nothing */
# define ITT_EXTERN_C_BEGIN /* nothing */
# define ITT_EXTERN_C_END /* nothing */
#endif /* __cplusplus */
#define ITT_TO_STR_AUX(x) #x
#define ITT_TO_STR(x) ITT_TO_STR_AUX(x)
#define __ITT_BUILD_ASSERT(expr, suffix) do { \
static char __itt_build_check_##suffix[(expr) ? 1 : -1]; \
__itt_build_check_##suffix[0] = 0; \
} while(0)
#define _ITT_BUILD_ASSERT(expr, suffix) __ITT_BUILD_ASSERT((expr), suffix)
#define ITT_BUILD_ASSERT(expr) _ITT_BUILD_ASSERT((expr), __LINE__)
#define ITT_MAGIC { 0xED, 0xAB, 0xAB, 0xEC, 0x0D, 0xEE, 0xDA, 0x30 }
/* Replace with snapshot date YYYYMMDD for promotion build. */
#define API_VERSION_BUILD 20151119
#ifndef API_VERSION_NUM
#define API_VERSION_NUM 0.0.0
#endif /* API_VERSION_NUM */
#define API_VERSION "ITT-API-Version " ITT_TO_STR(API_VERSION_NUM) \
" (" ITT_TO_STR(API_VERSION_BUILD) ")"
/* OS communication functions */
#if ITT_PLATFORM==ITT_PLATFORM_WIN
#include <windows.h>
typedef HMODULE lib_t;
typedef DWORD TIDT;
typedef CRITICAL_SECTION mutex_t;
#define MUTEX_INITIALIZER { 0 }
#define strong_alias(name, aliasname) /* empty for Windows */
#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#include <dlfcn.h>
#if defined(UNICODE) || defined(_UNICODE)
#include <wchar.h>
#endif /* UNICODE */
#ifndef _GNU_SOURCE
#define _GNU_SOURCE 1 /* need for PTHREAD_MUTEX_RECURSIVE */
#endif /* _GNU_SOURCE */
#ifndef __USE_UNIX98
#define __USE_UNIX98 1 /* need for PTHREAD_MUTEX_RECURSIVE, on SLES11.1 with gcc 4.3.4 wherein pthread.h missing dependency on __USE_XOPEN2K8 */
#endif /*__USE_UNIX98*/
#include <pthread.h>
typedef void* lib_t;
typedef pthread_t TIDT;
typedef pthread_mutex_t mutex_t;
#define MUTEX_INITIALIZER PTHREAD_MUTEX_INITIALIZER
#define _strong_alias(name, aliasname) \
extern __typeof (name) aliasname __attribute__ ((alias (#name)));
#define strong_alias(name, aliasname) _strong_alias(name, aliasname)
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
#if ITT_PLATFORM==ITT_PLATFORM_WIN
#define __itt_get_proc(lib, name) GetProcAddress(lib, name)
#define __itt_mutex_init(mutex) InitializeCriticalSection(mutex)
#define __itt_mutex_lock(mutex) EnterCriticalSection(mutex)
#define __itt_mutex_unlock(mutex) LeaveCriticalSection(mutex)
#define __itt_load_lib(name) LoadLibraryA(name)
#define __itt_unload_lib(handle) FreeLibrary(handle)
#define __itt_system_error() (int)GetLastError()
#define __itt_fstrcmp(s1, s2) lstrcmpA(s1, s2)
#define __itt_fstrnlen(s, l) strnlen_s(s, l)
#define __itt_fstrcpyn(s1, b, s2, l) strncpy_s(s1, b, s2, l)
#define __itt_fstrdup(s) _strdup(s)
#define __itt_thread_id() GetCurrentThreadId()
#define __itt_thread_yield() SwitchToThread()
#ifndef ITT_SIMPLE_INIT
ITT_INLINE long
__itt_interlocked_increment(volatile long* ptr) ITT_INLINE_ATTRIBUTE;
ITT_INLINE long __itt_interlocked_increment(volatile long* ptr)
{
return InterlockedIncrement(ptr);
}
#endif /* ITT_SIMPLE_INIT */
#define DL_SYMBOLS (1)
#define PTHREAD_SYMBOLS (1)
#else /* ITT_PLATFORM!=ITT_PLATFORM_WIN */
#define __itt_get_proc(lib, name) dlsym(lib, name)
#define __itt_mutex_init(mutex) {\
pthread_mutexattr_t mutex_attr; \
int error_code = pthread_mutexattr_init(&mutex_attr); \
if (error_code) \
__itt_report_error(__itt_error_system, "pthread_mutexattr_init", \
error_code); \
error_code = pthread_mutexattr_settype(&mutex_attr, \
PTHREAD_MUTEX_RECURSIVE); \
if (error_code) \
__itt_report_error(__itt_error_system, "pthread_mutexattr_settype", \
error_code); \
error_code = pthread_mutex_init(mutex, &mutex_attr); \
if (error_code) \
__itt_report_error(__itt_error_system, "pthread_mutex_init", \
error_code); \
error_code = pthread_mutexattr_destroy(&mutex_attr); \
if (error_code) \
__itt_report_error(__itt_error_system, "pthread_mutexattr_destroy", \
error_code); \
}
#define __itt_mutex_lock(mutex) pthread_mutex_lock(mutex)
#define __itt_mutex_unlock(mutex) pthread_mutex_unlock(mutex)
#define __itt_load_lib(name) dlopen(name, RTLD_LAZY)
#define __itt_unload_lib(handle) dlclose(handle)
#define __itt_system_error() errno
#define __itt_fstrcmp(s1, s2) strcmp(s1, s2)
/* customer code may supply safe string APIs by defining SDL_STRNLEN_S and SDL_STRNCPY_S */
#ifdef SDL_STRNLEN_S
#define __itt_fstrnlen(s, l) SDL_STRNLEN_S(s, l)
#else
#define __itt_fstrnlen(s, l) strlen(s)
#endif /* SDL_STRNLEN_S */
#ifdef SDL_STRNCPY_S
#define __itt_fstrcpyn(s1, b, s2, l) SDL_STRNCPY_S(s1, b, s2, l)
#else
#define __itt_fstrcpyn(s1, b, s2, l) strncpy(s1, s2, l)
#endif /* SDL_STRNCPY_S */
#define __itt_fstrdup(s) strdup(s)
#define __itt_thread_id() pthread_self()
#define __itt_thread_yield() sched_yield()
#if ITT_ARCH==ITT_ARCH_IA64
#ifdef __INTEL_COMPILER
#define __TBB_machine_fetchadd4(addr, val) __fetchadd4_acq((void *)addr, val)
#else /* __INTEL_COMPILER */
/* TODO: Add support for non-Intel compilers on the IA-64 architecture */
#endif /* __INTEL_COMPILER */
#elif ITT_ARCH==ITT_ARCH_IA32 || ITT_ARCH==ITT_ARCH_IA32E /* ITT_ARCH!=ITT_ARCH_IA64 */
ITT_INLINE long
__TBB_machine_fetchadd4(volatile void* ptr, long addend) ITT_INLINE_ATTRIBUTE;
ITT_INLINE long __TBB_machine_fetchadd4(volatile void* ptr, long addend)
{
long result;
__asm__ __volatile__("lock\nxadd %0,%1"
: "=r"(result),"=m"(*(volatile int*)ptr)
: "0"(addend), "m"(*(volatile int*)ptr)
: "memory");
return result;
}
#elif ITT_ARCH==ITT_ARCH_ARM || ITT_ARCH==ITT_ARCH_PPC64 || ITT_ARCH==ITT_ARCH_AARCH64 || ITT_ARCH==ITT_ARCH_MIPS || ITT_ARCH==ITT_ARCH_MIPS64
#define __TBB_machine_fetchadd4(addr, val) __sync_fetch_and_add(addr, val)
#endif /* ITT_ARCH==ITT_ARCH_IA64 */
#ifndef ITT_SIMPLE_INIT
ITT_INLINE long
__itt_interlocked_increment(volatile long* ptr) ITT_INLINE_ATTRIBUTE;
ITT_INLINE long __itt_interlocked_increment(volatile long* ptr)
{
return __TBB_machine_fetchadd4(ptr, 1) + 1L;
}
#endif /* ITT_SIMPLE_INIT */
void* dlopen(const char*, int) __attribute__((weak));
void* dlsym(void*, const char*) __attribute__((weak));
int dlclose(void*) __attribute__((weak));
#define DL_SYMBOLS (dlopen && dlsym && dlclose)
int pthread_mutex_init(pthread_mutex_t*, const pthread_mutexattr_t*) __attribute__((weak));
int pthread_mutex_lock(pthread_mutex_t*) __attribute__((weak));
int pthread_mutex_unlock(pthread_mutex_t*) __attribute__((weak));
int pthread_mutex_destroy(pthread_mutex_t*) __attribute__((weak));
int pthread_mutexattr_init(pthread_mutexattr_t*) __attribute__((weak));
int pthread_mutexattr_settype(pthread_mutexattr_t*, int) __attribute__((weak));
int pthread_mutexattr_destroy(pthread_mutexattr_t*) __attribute__((weak));
pthread_t pthread_self(void) __attribute__((weak));
#define PTHREAD_SYMBOLS \
    (pthread_mutex_init && pthread_mutex_lock && pthread_mutex_unlock && \
     pthread_mutex_destroy && pthread_mutexattr_init && \
     pthread_mutexattr_settype && pthread_mutexattr_destroy && pthread_self)
#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
typedef enum {
__itt_collection_normal = 0,
__itt_collection_paused = 1
} __itt_collection_state;
typedef enum {
__itt_thread_normal = 0,
__itt_thread_ignored = 1
} __itt_thread_state;
#pragma pack(push, 8)
typedef struct ___itt_thread_info
{
const char* nameA; /*!< Copy of original name in ASCII. */
#if defined(UNICODE) || defined(_UNICODE)
const wchar_t* nameW; /*!< Copy of original name in UNICODE. */
#else /* UNICODE || _UNICODE */
void* nameW;
#endif /* UNICODE || _UNICODE */
TIDT tid;
    __itt_thread_state state; /*!< Thread state (normal or ignored) */
int extra1; /*!< Reserved to the runtime */
void* extra2; /*!< Reserved to the runtime */
struct ___itt_thread_info* next;
} __itt_thread_info;
#include "ittnotify_types.h" /* For __itt_group_id definition */
typedef struct ___itt_api_info_20101001
{
const char* name;
void** func_ptr;
void* init_func;
__itt_group_id group;
} __itt_api_info_20101001;
typedef struct ___itt_api_info
{
const char* name;
void** func_ptr;
void* init_func;
void* null_func;
__itt_group_id group;
} __itt_api_info;
typedef struct __itt_counter_info
{
const char* nameA; /*!< Copy of original name in ASCII. */
#if defined(UNICODE) || defined(_UNICODE)
const wchar_t* nameW; /*!< Copy of original name in UNICODE. */
#else /* UNICODE || _UNICODE */
void* nameW;
#endif /* UNICODE || _UNICODE */
const char* domainA; /*!< Copy of original name in ASCII. */
#if defined(UNICODE) || defined(_UNICODE)
const wchar_t* domainW; /*!< Copy of original name in UNICODE. */
#else /* UNICODE || _UNICODE */
void* domainW;
#endif /* UNICODE || _UNICODE */
int type;
long index;
int extra1; /*!< Reserved to the runtime */
void* extra2; /*!< Reserved to the runtime */
struct __itt_counter_info* next;
} __itt_counter_info_t;
struct ___itt_domain;
struct ___itt_string_handle;
typedef struct ___itt_global
{
unsigned char magic[8];
unsigned long version_major;
unsigned long version_minor;
unsigned long version_build;
volatile long api_initialized;
volatile long mutex_initialized;
volatile long atomic_counter;
mutex_t mutex;
lib_t lib;
void* error_handler;
const char** dll_path_ptr;
__itt_api_info* api_list_ptr;
struct ___itt_global* next;
/* Joinable structures below */
__itt_thread_info* thread_list;
struct ___itt_domain* domain_list;
struct ___itt_string_handle* string_list;
__itt_collection_state state;
__itt_counter_info_t* counter_list;
} __itt_global;
#pragma pack(pop)
#define NEW_THREAD_INFO_W(gptr,h,h_tail,t,s,n) { \
h = (__itt_thread_info*)malloc(sizeof(__itt_thread_info)); \
if (h != NULL) { \
h->tid = t; \
h->nameA = NULL; \
h->nameW = n ? _wcsdup(n) : NULL; \
h->state = s; \
h->extra1 = 0; /* reserved */ \
h->extra2 = NULL; /* reserved */ \
h->next = NULL; \
if (h_tail == NULL) \
(gptr)->thread_list = h; \
else \
h_tail->next = h; \
} \
}
#define NEW_THREAD_INFO_A(gptr,h,h_tail,t,s,n) { \
h = (__itt_thread_info*)malloc(sizeof(__itt_thread_info)); \
if (h != NULL) { \
h->tid = t; \
h->nameA = n ? __itt_fstrdup(n) : NULL; \
h->nameW = NULL; \
h->state = s; \
h->extra1 = 0; /* reserved */ \
h->extra2 = NULL; /* reserved */ \
h->next = NULL; \
if (h_tail == NULL) \
(gptr)->thread_list = h; \
else \
h_tail->next = h; \
} \
}
#define NEW_DOMAIN_W(gptr,h,h_tail,name) { \
h = (__itt_domain*)malloc(sizeof(__itt_domain)); \
if (h != NULL) { \
h->flags = 1; /* domain is enabled by default */ \
h->nameA = NULL; \
h->nameW = name ? _wcsdup(name) : NULL; \
h->extra1 = 0; /* reserved */ \
h->extra2 = NULL; /* reserved */ \
h->next = NULL; \
if (h_tail == NULL) \
(gptr)->domain_list = h; \
else \
h_tail->next = h; \
} \
}
#define NEW_DOMAIN_A(gptr,h,h_tail,name) { \
h = (__itt_domain*)malloc(sizeof(__itt_domain)); \
if (h != NULL) { \
h->flags = 1; /* domain is enabled by default */ \
h->nameA = name ? __itt_fstrdup(name) : NULL; \
h->nameW = NULL; \
h->extra1 = 0; /* reserved */ \
h->extra2 = NULL; /* reserved */ \
h->next = NULL; \
if (h_tail == NULL) \
(gptr)->domain_list = h; \
else \
h_tail->next = h; \
} \
}
#define NEW_STRING_HANDLE_W(gptr,h,h_tail,name) { \
h = (__itt_string_handle*)malloc(sizeof(__itt_string_handle)); \
if (h != NULL) { \
h->strA = NULL; \
h->strW = name ? _wcsdup(name) : NULL; \
h->extra1 = 0; /* reserved */ \
h->extra2 = NULL; /* reserved */ \
h->next = NULL; \
if (h_tail == NULL) \
(gptr)->string_list = h; \
else \
h_tail->next = h; \
} \
}
#define NEW_STRING_HANDLE_A(gptr,h,h_tail,name) { \
h = (__itt_string_handle*)malloc(sizeof(__itt_string_handle)); \
if (h != NULL) { \
h->strA = name ? __itt_fstrdup(name) : NULL; \
h->strW = NULL; \
h->extra1 = 0; /* reserved */ \
h->extra2 = NULL; /* reserved */ \
h->next = NULL; \
if (h_tail == NULL) \
(gptr)->string_list = h; \
else \
h_tail->next = h; \
} \
}
#define NEW_COUNTER_W(gptr,h,h_tail,name,domain,type) { \
h = (__itt_counter_info_t*)malloc(sizeof(__itt_counter_info_t)); \
if (h != NULL) { \
h->nameA = NULL; \
h->nameW = name ? _wcsdup(name) : NULL; \
h->domainA = NULL; \
        h->domainW = domain ? _wcsdup(domain) : NULL; \
h->type = type; \
h->index = 0; \
h->next = NULL; \
if (h_tail == NULL) \
(gptr)->counter_list = h; \
else \
h_tail->next = h; \
} \
}
#define NEW_COUNTER_A(gptr,h,h_tail,name,domain,type) { \
h = (__itt_counter_info_t*)malloc(sizeof(__itt_counter_info_t)); \
if (h != NULL) { \
h->nameA = name ? __itt_fstrdup(name) : NULL; \
h->nameW = NULL; \
h->domainA = domain ? __itt_fstrdup(domain) : NULL; \
h->domainW = NULL; \
h->type = type; \
h->index = 0; \
h->next = NULL; \
if (h_tail == NULL) \
(gptr)->counter_list = h; \
else \
h_tail->next = h; \
} \
}
#endif /* _ITTNOTIFY_CONFIG_H_ */