Caux_s< TA, TB, TC, TV > | |
Cpvfmm::BasisInterface< ValueType, Derived > | |
►Cpvfmm::BasisInterface< ValueType, ChebBasis< ValueType > > | |
Cpvfmm::ChebBasis< ValueType > | |
Chmlp::Cache | |
Chmlp::Cache1D< NSET, NWAY, T > | Cache1D<NSET, NWAY, T> creates a cache layer with NSET sets that directly maps a 1D array. The direct map is [ id % NSET ]. Each set has NWAY ways that are fully associative |
Chmlp::Cache2D< NSET, NWAY, T > | |
Chmlp::Cache2D< 128, 16384, T > | |
Chmlp::CacheLine | |
►Chmlp::gofmm::centersplit< SPDMATRIX, N_SPLIT, T > | This is the main splitter used to build the Spd-Askit tree. First compute the approximate center using subsamples, then find the two farthest points to define the projection |
Chmlp::mpigofmm::centersplit< SPDMATRIX, N_SPLIT, T > | This is the main splitter used to build the Spd-Askit tree. First compute the approximate center using subsamples, then find the two farthest points to define the projection |
Chmlp::Cluster< T, Allocator > | |
Chmlp::tci::Comm | |
Chmlp::gofmm::CommandLineHelper | This is a helper class that parses arguments from the command line |
►Chmlp::gofmm::Configuration< T > | Configuration contains all user-defined parameters |
Chmlp::gofmm::Setup< SPDMATRIX, SPLITTER, T > | These are data shared by the whole local tree |
Chmlp::mpigofmm::Setup< SPDMATRIX, SPLITTER, T > | These are data shared by the whole local tree. The distributed setup inherits mpitree::Setup |
Chmlp::tci::Context | |
Cconv_relu_pool2x2_asm_d6x8 | |
Cconv_relu_pool2x2_asm_d8x4 | |
Cconv_relu_pool2x2_int_d8x4 | |
Cconv_relu_pool2x2_ref_d8x4 | |
►Chmlp::Device | This class describes devices or accelerators that require a master thread to control. A device can accept tasks from multiple workers. All received tasks are expected to be executed independently in a time-sharing fashion. Whether these tasks are executed in parallel, sequentially, or with some built-in context-switching scheme does not matter |
Chmlp::gpu::Nvidia | |
Chmlp::DeviceMemory< T > | |
Cdowncast< TC, TV > | |
Chmlp::Event | Wrapper for omp or pthread mutex |
►Chmlp::gofmm::Factor< T > | |
Chmlp::gofmm::NodeData< T > | This class contains all GOFMM related data carried by a tree node |
Cgaussian_int_d24x8 | |
Cgkmm_mrxnr< MR, NR, OPKERNEL, OP1, OP2, TA, TB, TC, TV > | This kernel takes opkernel, op1 and op2 to implement an MR-by-NR GKMM operation |
Cgkrm_mrxnr< MR, NR, OPKERNEL, OP1, OP2, OPREDUCE, TA, TB, TC, TV > | |
Cgnbx_mrxnr< MR, NR, OPKERNEL, OP1, OP2, TA, TB, TC, TPACKC, TV > | |
Cgsks_gaussian_int_d12x16 | |
Cgsks_gaussian_int_d6x32 | |
Cgsks_gaussian_int_d8x4 | |
Cgsks_gaussian_int_d8x6 | |
Cgsks_gaussian_int_s12x32 | |
Cgsks_gaussian_int_s16x6 | |
Cgsks_gaussian_int_s8x8 | |
Cgsks_polynomial_int_d8x6 | |
Cgsks_polynomial_int_s16x6 | |
Chmlp::gsks_ref_mrxnr< MR, NR, T > | |
Cidentity< T > | |
Chmlp::kernel_s< T, TP > | |
Chmlp::kernel_s< T, T > | |
Cpvfmm::KernelFnWrapper | |
Cpvfmm::KernelFunction< ValueType, DIM > | |
Cpvfmm::KernelFunction< Real, DIM > | |
Cpvfmm::KernelFunction< T, DIM > | |
Cpvfmm::KernelMatrix< Real > | |
Cpvfmm::KernelMatrix< T > | |
Cknn_int_d6x32 | |
Cknn_int_d8x4 | |
Cknn_int_s12x32 | |
Cpvfmm::Laplace3D< ValueType > | |
►Chmlp::LayerBase< T > | |
Chmlp::Layer< LAYERTYPE, T > | |
Chmlp::Layer< FC, T > | |
Chmlp::Layer< INPUT, T > | |
Chmlp::Lock | Wrapper for omp or pthread mutex |
►Chmlp::MatrifyableObject< NB, T, TPACK > | |
Chmlp::MatrixLike< NB, T, TPACK > | |
Cpvfmm::Matrix< ValueType > | |
Chmlp::MatrixReadWrite | This class creates 2D grids for 2D matrix partitioning |
Cpvfmm::MemoryManager::MemHead | Header data for each memory block |
Cpvfmm::MemoryManager | MemoryManager class declaration |
Chmlp::MortonHelper | |
CMPI_Status | |
►Chmlp::mpi::MPIObject | |
►Chmlp::DistDataBase< pair< T, size_t >, std::allocator< pair< T, size_t > > > | |
Chmlp::DistData< STAR, CBLK, pair< T, size_t > > | |
Chmlp::DistData< STAR, CIDS, pair< T, size_t > > | |
►Chmlp::DistDataBase< T > | |
Chmlp::DistData< CIRC, CIRC, T > | |
Chmlp::DistData< RBLK, STAR, T > | Each MPI process owns ( n / size ) rows of A in a cyclic fashion (round-robin). I.e., if there are 3 MPI processes, then |
Chmlp::DistData< RIDS, STAR, T > | Each MPI process owns ( rids.size() ) rows of A, and rids denotes the distribution. I.e., rank i owns A(rids[0],:), A(rids[1],:), A(rids[2],:), .. |
Chmlp::DistData< STAR, CBLK, T > | |
Chmlp::DistData< STAR, CIDS, T > | Each MPI process owns ( cids.size() ) columns of A, and cids denotes the distribution. I.e., rank i owns A(:,cids[0]), A(:,cids[1]), A(:,cids[2]), .. |
Chmlp::DistData< STAR, STAR, T > | |
Chmlp::DistData< STAR, USER, T > | |
►Chmlp::DistDataBase< TP, std::allocator< TP > > | |
Chmlp::DistData< STAR, CBLK, TP > | |
Chmlp::DistData< STAR, USER, TP > | |
►Chmlp::DistDataBase< T, Allocator > | |
Chmlp::DistData< ROWDIST, COLDIST, T, Allocator > | |
►Chmlp::DistVirtualMatrix< T, Allocator > | DistVirtualMatrix is the abstract base class for matrix-free access and operations. Most of the public functions will be virtual. To inherit DistVirtualMatrix, you "must" implement the evaluation operator. Otherwise, the code won't compile |
Chmlp::DistKernelMatrix< T, TP, Allocator > | |
Chmlp::mpitree::Tree< SETUP, NODEDATA > | This distributed tree inherits the shared-memory tree, with additional MPI data structures and function calls |
Chmlp::Scheduler | |
Chmlp::mpi::NumberIntPair< T > | |
►Chmlp::pack_pbxib< NB, T, TPACK > | |
Chmlp::pack2D_pbxib< NB, T, TPACK > | |
Cpvfmm::Permutation< ValueType > | |
►Chmlp::gofmm::randomsplit< SPDMATRIX, N_SPLIT, T > | This is the splitter used in the randomized tree |
Chmlp::mpigofmm::randomsplit< SPDMATRIX, N_SPLIT, T > | |
Chmlp::Range | |
Chmlp::range | |
Crank_k_asm_d4x4 | |
Crank_k_asm_d6x8 | |
Crank_k_asm_d8x4 | |
Crank_k_asm_d8x6 | |
Crank_k_asm_s16x6 | |
Crank_k_asm_s4x4 | |
Crank_k_asm_s6x16 | |
Crank_k_asm_s8x12 | |
Crank_k_asm_s8x8 | |
Crank_k_int_d24x8 | |
Crank_k_opt_d12x16 | |
Crank_k_opt_d6x32 | |
Crank_k_opt_s12x32 | |
Crank_k_ref_d8x4 | |
►Chmlp::ReadWrite | This class provides the ability to perform dependency analysis |
Chmlp::Data< pair< T, size_t > > | |
►Chmlp::Data< pair< T, size_t >, std::allocator< pair< T, size_t > > > | |
Chmlp::DistDataBase< pair< T, size_t >, std::allocator< pair< T, size_t > > > | |
Chmlp::Data< size_t > | |
Chmlp::Data< T > | |
►Chmlp::Data< T, std::allocator< T > > | |
Chmlp::DistDataBase< T > | |
►Chmlp::Data< TP, std::allocator< TP > > | |
Chmlp::DistDataBase< TP, std::allocator< TP > > | |
►Chmlp::Data< T, Allocator > | |
Chmlp::DistDataBase< T, Allocator > | |
Chmlp::DistKernelMatrix< T, TP, Allocator > | |
Chmlp::KernelMatrix< T, Allocator > | |
Chmlp::OOCData< T, Allocator > | |
Chmlp::SparseData< T, Allocator > | |
►Chmlp::tree::Node< SETUP, NODEDATA > | This is the default ball tree splitter. Given coordinates, compute the split direction from the two farthest points, project all points onto this line, and split them into two groups using a median select |
Chmlp::mpitree::Node< SETUP, NODEDATA > | |
Chmlp::View< T > | |
Chmlp::OOCData< T > | |
Chmlp::Regression< T > | |
►Chmlp::root::RootFinderBase< T > | |
Chmlp::root::Bisection< FUNC, T > | This is not thread-safe |
Chmlp::root::Newton< FUNC, T > | |
Chmlp::RunTime | RunTime is statically created in hmlp_runtime.cpp |
Csemiring_mrxnr< MR, NR, OP1, OP2, TA, TB, TC, TV > | |
Chmlp::mpitree::Setup< SPLITTER, DATATYPE > | Data and setup that are shared with all nodes |
Chmlp::tree::Setup< SPLITTER, DATATYPE > | Data and setup that are shared with all nodes |
►Chmlp::mpitree::Setup< SPLITTER, T > | |
Chmlp::mpigofmm::Setup< SPDMATRIX, SPLITTER, T > | These are data shared by the whole local tree. The distributed setup inherits mpitree::Setup |
►Chmlp::tree::Setup< SPLITTER, T > | |
Chmlp::gofmm::Setup< SPDMATRIX, SPLITTER, T > | These are data shared by the whole local tree |
Chmlp::gofmm::SimpleGOFMM< T, SPDMATRIX > | |
Cpvfmm::Smoother< ValueType > | |
►Chmlp::SPDMatrixMPISupport< T > | |
Chmlp::MLPGaussNewton< T > | |
►Chmlp::VirtualMatrix< T > | |
Chmlp::MLPGaussNewton< T > | |
Chmlp::OOCCovMatrix< T > | |
Chmlp::OOCSPDMatrix< T > | |
Chmlp::PVFMMKernelMatrix< T > | |
Chmlp::SPDMatrix< T > | This class does not need to inherit hmlp::Data<T>, but it should support two interfaces for data fetching |
►Chmlp::VirtualMatrix< T, Allocator > | |
Chmlp::DistVirtualMatrix< T, Allocator > | DistVirtualMatrix is the abstract base class for matrix-free access and operations. Most of the public functions will be virtual. To inherit DistVirtualMatrix, you "must" implement the evaluation operator. Otherwise, the code won't compile |
Chmlp::KernelMatrix< T, Allocator > | |
►Chmlp::SPDMatrixMPISupport< DATATYPE > | |
Chmlp::VirtualMatrix< DATATYPE, Allocator > | VirtualMatrix is the abstract base class for matrix-free access and operations. Most of the public functions will be virtual. To inherit VirtualMatrix, you "must" implement the evaluation operator. Otherwise, the code won't compile |
Chmlp::Statistic | |
Cpvfmm::Stokes3D< ValueType > | |
Chmlp::gofmm::Summary< NODE > | Provides a statistics summary for the execution section |
►Chmlp::Task | |
Chmlp::CovReduceTask< T > | |
Chmlp::CovTask< T > | |
Chmlp::gemm::xgemmBarrierTask< T > | This task is generated by the top-level routine |
Chmlp::gemm::xgemmTask< T > | |
Chmlp::gofmm::CacheNearNodesTask< NNPRUNE, NODE > | Task wrapper for CacheNearNodes() |
Chmlp::gofmm::FactorizeTask< NODE, T > | |
Chmlp::gofmm::gpu::LeavesToLeavesVer2Task< CACHE, NNPRUNE, NODE, T > | |
Chmlp::gofmm::InterpolateTask< NODE > | The corresponding task of Interpolate() |
Chmlp::gofmm::LeavesToLeavesTask< SUBTASKID, NNPRUNE, NODE, T > | |
Chmlp::gofmm::MatrixPermuteTask< FORWARD, NODE > | Downward traversal to create matrix views; at the leaf level, execute the explicit permutation |
Chmlp::gofmm::NearSamplesTask< NODE, T > | |
Chmlp::gofmm::NeighborsTask< NODE, T > | |
Chmlp::gofmm::SetupFactorTask< NODE, T > | |
Chmlp::gofmm::SkeletonizeTask< NODE, T > | |
Chmlp::gofmm::SkeletonKIJTask< NNPRUNE, NODE, T > | |
Chmlp::gofmm::SkeletonsToNodesTask< NNPRUNE, NODE, T > | |
Chmlp::gofmm::SkeletonsToSkeletonsTask< NNPRUNE, NODE, T > | There is no dependency between tasks. However, there are RAW (read-after-write) dependencies: |
Chmlp::gofmm::SolverTreeViewTask< NODE > | Creates a hierarchical tree view for a matrix |
Chmlp::gofmm::SolveTask< NODE, T > | |
Chmlp::gofmm::TreeViewTask< NODE > | This task creates a hierarchical tree view for w<RIDS> and u<RIDS> |
Chmlp::gofmm::ULVBackwardSolveTask< NODE, T > | |
Chmlp::gofmm::ULVForwardSolveTask< NODE, T > | |
Chmlp::gofmm::UpdateWeightsTask< NODE, T > | |
►Chmlp::MessageTask | This task is designed to take care of MPI communications |
►Chmlp::ListenerTask | This task is the abstraction for all tasks handled by Listeners |
Chmlp::RecvTask< T, ARG > | |
►Chmlp::RecvTask< T, TREE > | |
Chmlp::mpigofmm::UnpackFarTask< T, TREE > | |
Chmlp::mpigofmm::UnpackLeafTask< T, TREE > | |
Chmlp::SendTask< T, ARG > | |
►Chmlp::SendTask< T, TREE > | |
Chmlp::mpigofmm::PackFarTask< T, TREE > | |
Chmlp::mpigofmm::PackNearTask< T, TREE > | |
Chmlp::mpigofmm::CacheFarNodesTask< NNPRUNE, NODE > | |
Chmlp::mpigofmm::CacheNearNodesTask< NNPRUNE, NODE > | |
Chmlp::mpigofmm::DistFactorizeTask< NODE, T > | |
Chmlp::mpigofmm::DistFactorTreeViewTask< NODE, T > | |
Chmlp::mpigofmm::DistMergeFarNodesTask< SYMMETRIC, NODE, T > | |
Chmlp::mpigofmm::DistSetupFactorTask< NODE, T > | |
Chmlp::mpigofmm::DistSkeletonizeTask< NODE, T > | |
Chmlp::mpigofmm::DistSkeletonKIJTask< NNPRUNE, NODE, T > | |
Chmlp::mpigofmm::DistSkeletonsToNodesTask< NNPRUNE, NODE, T > | |
Chmlp::mpigofmm::DistTreeViewTask< NODE > | This task creates a hierarchical tree view for weights<RIDS> and potentials<RIDS> |
Chmlp::mpigofmm::DistULVBackwardSolveTask< NODE, T > | |
Chmlp::mpigofmm::DistULVForwardSolveTask< NODE, T > | |
Chmlp::mpigofmm::DistUpdateWeightsTask< NODE, T > | Notice that NODE here is MPITree::Node |
Chmlp::mpigofmm::InterpolateTask< NODE > | |
Chmlp::mpigofmm::L2LReduceTask2< NODE, T > | |
Chmlp::mpigofmm::L2LTask2< NODE, T > | |
Chmlp::mpigofmm::MergeFarNodesTask< SYMMETRIC, NODE, T > | |
Chmlp::mpigofmm::S2SReduceTask2< NODE, LETNODE, T > | |
Chmlp::mpigofmm::S2STask2< NODE, LETNODE, T > | Notice that S2S depends on all Far interactions, which may include local tree nodes or LET nodes. In the HSS case, the only Far interaction is the sibling. The skeleton weight of the sibling is always exchanged by default in N2S; thus we currently do not need a distributed S2S, because the skeleton weight is already in place |
Chmlp::mpigofmm::SkeletonizeTask< NODE, T > | |
Chmlp::mpitree::DistIndexPermuteTask< NODE > | |
Chmlp::mpitree::DistSplitTask< NODE > | This is the default ball tree splitter. Given coordinates, compute the split direction from the two farthest points, project all points onto this line, and split them into two groups using a median select |
Chmlp::NULLTask< ARGUMENT > | This is a specific type of task that represents a NOP |
Chmlp::tree::IndexPermuteTask< NODE > | Permute the order of gids for each internal node to the order of the leaf nodes |
Chmlp::tree::SplitTask< NODE > | |
Chmlp::thread_communicator | |
►Chmlp::tree::Tree< SETUP, NODEDATA > | |
Chmlp::mpitree::Tree< SETUP, NODEDATA > | This distributed tree inherits the shared-memory tree, with additional MPI data structures and function calls |
Chmlp::tree::Tree< gofmm::hmlp::gofmm::Setup< SPDMATRIX, hmlp::gofmm::centersplit< SPDMATRIX, 2, T >, T >, gofmm::hmlp::gofmm::NodeData< T > > | |
Cpvfmm::TypeTraits< T > | Identify each type uniquely |
►Chmlp::unpack_ibxjb< NB, T, TPACK > | |
Chmlp::unpack2D_ibxjb< NB, T, TPACK > | |
Cv4df_t | |
Cv4li_t | |
Cv8df_t | |
Cvariable_bandwidth_gaussian_int_d8x4 | |
Cvariable_bandwidth_gaussian_ref_d8x4 | |
►Cvector | |
Chmlp::Data< pair< T, size_t > > | |
Chmlp::Data< pair< T, size_t >, std::allocator< pair< T, size_t > > > | |
Chmlp::Data< size_t > | |
Chmlp::Data< T > | |
Chmlp::Data< T, std::allocator< T > > | |
Chmlp::Data< TP, std::allocator< TP > > | |
Chmlp::Data< T, Allocator > | |
Cpvfmm::Vector< ValueType > | |
Cpvfmm::Vector< Long > | |
►Chmlp::VirtualFunction< T > | |
Chmlp::FunctionBase< PARAM, T > | |
►Chmlp::model::VirtualModel< T > | |
Chmlp::model::Classification< FUNC, T > | |
Chmlp::model::Regression< FUNC, PARAM, DATA, T > | |
Chmlp::VirtualNormalizedGraph< PARAM, T > | |
Chmlp::Worker | |