Builtin algorithms

graphscope.nx.builtin.hits(G, max_iter=100, tol=1e-08, nstart=None, normalized=True)

Returns HITS hubs and authorities values for nodes.

The HITS algorithm computes two numbers for each node: an authority value, which estimates the node's value based on its incoming links, and a hub value, which estimates it based on its outgoing links.

Parameters:
  • G (graph) – A NetworkX graph

  • max_iter (integer, optional) – Maximum number of iterations in power method.

  • tol (float, optional) – Error tolerance used to check convergence in power method iteration.

  • nstart (dictionary, optional) – Starting value of each node for power method iteration.

  • normalized (bool (default=True)) – Normalize results by the sum of all of the values.

Returns:

(hubs, authorities) – Two dictionaries keyed by node containing the hub and authority values.

Return type:

two-tuple of dictionaries

Raises:

PowerIterationFailedConvergence – If the algorithm fails to converge to the specified tolerance within the specified number of iterations of the power iteration method.

Examples

>>> G = nx.path_graph(4)
>>> h, a = nx.hits(G)
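
With normalized=True (the default) the hub and authority scores are each normalized by their sum, so a quick sanity check on the result above (a sketch, assuming NetworkX-compatible return dictionaries) is:

>>> sorted(h) == sorted(a) == [0, 1, 2, 3]
True
>>> round(sum(a.values()), 6)
1.0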

Notes

The eigenvector calculation is done by the power iteration method and has no guarantee of convergence. The iteration will stop after max_iter iterations or when an error tolerance of number_of_nodes(G) * tol has been reached.

The HITS algorithm was designed for directed graphs but this algorithm does not check if the input graph is directed and will execute on undirected graphs.


graphscope.nx.builtin.degree_centrality(G)

Compute the degree centrality for nodes.

The degree centrality for a node v is the fraction of nodes it is connected to.

Parameters:

G (graph) – A networkx graph

Returns:

nodes – Dictionary of nodes with degree centrality as the value.

Return type:

dictionary

See also

betweenness_centrality, load_centrality, eigenvector_centrality

Notes

The degree centrality values are normalized by dividing by the maximum possible degree in a simple graph n-1 where n is the number of nodes in G.

For multigraphs or graphs with self loops the maximum degree might be higher than n-1 and values of degree centrality greater than 1 are possible.
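
No example is given for this routine; a minimal sketch (assuming the NetworkX-compatible call path nx.degree_centrality and the n-1 normalization described above):

>>> G = nx.path_graph(4)
>>> c = nx.degree_centrality(G)
>>> sorted((v, round(c[v], 2)) for v in c)
[(0, 0.33), (1, 0.67), (2, 0.67), (3, 0.33)]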

graphscope.nx.builtin.in_degree_centrality(G)

Compute the in-degree centrality for nodes.

The in-degree centrality for a node v is the fraction of nodes its incoming edges are connected to.

Parameters:

G (graph) – A NetworkX graph

Returns:

nodes – Dictionary of nodes with in-degree centrality as values.

Return type:

dictionary

Raises:

NetworkXNotImplemented – If G is undirected.

Notes

The degree centrality values are normalized by dividing by the maximum possible degree in a simple graph n-1 where n is the number of nodes in G.

For multigraphs or graphs with self loops the maximum degree might be higher than n-1 and values of degree centrality greater than 1 are possible.
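
A minimal directed sketch (assuming NetworkX-compatible semantics, with values normalized by n-1 as described above):

>>> G = nx.DiGraph([(0, 1), (0, 2), (1, 2)])
>>> c = nx.in_degree_centrality(G)
>>> sorted((v, round(c[v], 2)) for v in c)
[(0, 0.0), (1, 0.5), (2, 1.0)]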

graphscope.nx.builtin.out_degree_centrality(G)

Compute the out-degree centrality for nodes.

The out-degree centrality for a node v is the fraction of nodes its outgoing edges are connected to.

Parameters:

G (graph) – A NetworkX graph

Returns:

nodes – Dictionary of nodes with out-degree centrality as values.

Return type:

dictionary

Raises:

NetworkXNotImplemented – If G is undirected.

Notes

The degree centrality values are normalized by dividing by the maximum possible degree in a simple graph n-1 where n is the number of nodes in G.

For multigraphs or graphs with self loops the maximum degree might be higher than n-1 and values of degree centrality greater than 1 are possible.
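
The mirror of the in-degree sketch above (again assuming NetworkX-compatible semantics):

>>> G = nx.DiGraph([(0, 1), (0, 2), (1, 2)])
>>> c = nx.out_degree_centrality(G)
>>> sorted((v, round(c[v], 2)) for v in c)
[(0, 1.0), (1, 0.5), (2, 0.0)]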

graphscope.nx.builtin.eigenvector_centrality(G, max_iter=100, tol=1e-06, nstart=None, weight=None)

Compute the eigenvector centrality for the graph G.

Eigenvector centrality computes the centrality for a node based on the centrality of its neighbors. The eigenvector centrality for node $i$ is the $i$-th element of the vector $x$ defined by the equation

\[Ax = \lambda x\]

where $A$ is the adjacency matrix of the graph G with eigenvalue $\lambda$. By virtue of the Perron–Frobenius theorem, there is a unique solution $x$, all of whose entries are positive, if $\lambda$ is the largest eigenvalue of the adjacency matrix $A$ ([2]_).

Parameters:
  • G (graph) – A networkx graph

  • max_iter (integer, optional (default=100)) – Maximum number of iterations in power method.

  • tol (float, optional (default=1.0e-6)) – Error tolerance used to check convergence in power method iteration.

  • nstart (dictionary, optional (default=None)) – Starting value of eigenvector iteration for each node.

  • weight (None or string, optional (default=None)) – If None, all edge weights are considered equal. Otherwise holds the name of the edge attribute used as weight. In this measure the weight is interpreted as the connection strength.

Returns:

nodes – Dictionary of nodes with eigenvector centrality as the value.

Return type:

dictionary

Examples

>>> G = nx.path_graph(4)
>>> centrality = nx.eigenvector_centrality(G)
>>> sorted((v, f"{c:0.2f}") for v, c in centrality.items())
[(0, '0.37'), (1, '0.60'), (2, '0.60'), (3, '0.37')]

Raises:
  • NetworkXPointlessConcept – If the graph G is the null graph.

  • NetworkXError – If each value in nstart is zero.

  • PowerIterationFailedConvergence – If the algorithm fails to converge to the specified tolerance within the specified number of iterations of the power iteration method.

See also

eigenvector_centrality_numpy, pagerank, hits

Notes

The measure was introduced by [1]_ and is discussed in [2]_.

The power iteration method is used to compute the eigenvector and convergence is not guaranteed. Our method stops after max_iter iterations or when the change in the computed vector between two iterations is smaller than an error tolerance of G.number_of_nodes() * tol. This implementation uses ($A + I$) rather than the adjacency matrix $A$ because it shifts the spectrum to enable discerning the correct eigenvector even for networks with multiple dominant eigenvalues.

For directed graphs this is “left” eigenvector centrality, which corresponds to the in-edges in the graph. For out-edge eigenvector centrality, first reverse the graph with G.reverse(), as sketched below.

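A minimal sketch of the directed-graph note above (assuming G.reverse() is available as in NetworkX); only the calls are shown, since the exact values depend on the graph:

>>> D = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
>>> in_centrality = nx.eigenvector_centrality(D)             # driven by in-edges
>>> out_centrality = nx.eigenvector_centrality(D.reverse())  # driven by out-edges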

graphscope.nx.builtin.katz_centrality(G, alpha=0.1, beta=1.0, max_iter=100, tol=1e-06, nstart=None, normalized=True, weight=None)

Compute the Katz centrality for the nodes of the graph G.

Katz centrality computes the centrality for a node based on the centrality of its neighbors. It is a generalization of the eigenvector centrality. The Katz centrality for node $i$ is

\[x_i = \alpha \sum_{j} A_{ij} x_j + \beta,\]

where $A$ is the adjacency matrix of graph G with eigenvalues $\lambda$.

The parameter $\beta$ controls the initial centrality and

\[\alpha < \frac{1}{\lambda_{\max}}.\]

Katz centrality computes the relative influence of a node within a network by measuring the number of the immediate neighbors (first degree nodes) and also all other nodes in the network that connect to the node under consideration through these immediate neighbors.

Extra weight can be provided to immediate neighbors through the parameter $\beta$. Connections made with distant neighbors are, however, penalized by an attenuation factor $\alpha$ which should be strictly less than the inverse of the largest eigenvalue of the adjacency matrix in order for the Katz centrality to be computed correctly. More information is provided in [1]_.

Parameters:
  • G (graph) – A NetworkX graph.

  • alpha (float) – Attenuation factor

  • beta (scalar or dictionary, optional (default=1.0)) – Weight attributed to the immediate neighborhood. If not a scalar, the dictionary must have a value for every node.

  • max_iter (integer, optional (default=100)) – Maximum number of iterations in power method.

  • tol (float, optional (default=1.0e-6)) – Error tolerance used to check convergence in power method iteration.

  • nstart (dictionary, optional) – Starting value of Katz iteration for each node.

  • normalized (bool, optional (default=True)) – If True normalize the resulting values.

  • weight (None or string, optional (default=None)) – If None, all edge weights are considered equal. Otherwise holds the name of the edge attribute used as weight. In this measure the weight is interpreted as the connection strength.

Returns:

nodes – Dictionary of nodes with Katz centrality as the value.

Return type:

dictionary

Raises:
  • NetworkXError – If the parameter beta is not a scalar but lacks a value for at least one node

  • PowerIterationFailedConvergence – If the algorithm fails to converge to the specified tolerance within the specified number of iterations of the power iteration method.

Examples

>>> import math
>>> G = nx.path_graph(4)
>>> phi = (1 + math.sqrt(5)) / 2.0  # largest eigenvalue of adj matrix
>>> centrality = nx.katz_centrality(G, 1 / phi - 0.01)
>>> for n, c in sorted(centrality.items()):
...     print(f"{n} {c:.2f}")
0 0.37
1 0.60
2 0.60
3 0.37
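
Continuing the example above, beta can also be given per node as a dictionary; per the parameter description it must contain a value for every node. A minimal sketch (a dictionary of ones reproduces the scalar beta=1.0 result):

>>> beta = {n: 1.0 for n in G}
>>> centrality = nx.katz_centrality(G, 1 / phi - 0.01, beta=beta)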

See also

katz_centrality_numpy, eigenvector_centrality, eigenvector_centrality_numpy, pagerank, hits

Notes

Katz centrality was introduced by [2]_.

This algorithm uses the power method to find the eigenvector corresponding to the largest eigenvalue of the adjacency matrix of G. The parameter alpha should be strictly less than the inverse of the largest eigenvalue of the adjacency matrix for the algorithm to converge. You can use max(nx.adjacency_spectrum(G)) to get $\lambda_{\max}$, the largest eigenvalue of the adjacency matrix. The iteration will stop after max_iter iterations or when an error tolerance of number_of_nodes(G) * tol has been reached.

When $\alpha = 1/\lambda_{\max}$ and $\beta = 0$, Katz centrality is the same as eigenvector centrality.

For directed graphs this finds the “left” eigenvectors, which correspond to the in-edges in the graph. For out-edge Katz centrality, first reverse the graph with G.reverse().


graphscope.nx.builtin.has_path(G, source, target)

Returns True if G has a path from source to target.

Parameters:
  • G (NetworkX graph) –

  • source (node) – Starting node for path

  • target (node) – Ending node for path
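
No example is given for this routine; a minimal sketch (assuming the boolean return described in the summary line):

>>> G = nx.path_graph(4)
>>> nx.has_path(G, 0, 3)
True
>>> G.add_node(4)  # isolated node, not reachable from 0
>>> nx.has_path(G, 0, 4)
False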

graphscope.nx.builtin.average_shortest_path_length(G, weight=None, method=None)

Returns the average shortest path length.

The average shortest path length is

\[a =\sum_{s,t \in V} \frac{d(s, t)}{n(n-1)}\]

where V is the set of nodes in G, d(s, t) is the shortest path from s to t, and n is the number of nodes in G.

Parameters:
  • G (NetworkX graph) –

  • weight (None, string or function, optional (default = None)) – If None, every edge has weight/distance/cost 1. If a string, use this edge attribute as the edge weight. Any edge attribute not present defaults to 1. If this is a function, the weight of an edge is the value returned by the function. The function must accept exactly three positional arguments: the two endpoints of an edge and the dictionary of edge attributes for that edge. The function must return a number.

  • method (string, optional (default = 'unweighted' or 'dijkstra')) – The algorithm to use to compute the path lengths. Supported options are ‘unweighted’, ‘dijkstra’, ‘bellman-ford’, ‘floyd-warshall’ and ‘floyd-warshall-numpy’. Other method values produce a ValueError. The default method is ‘unweighted’ if weight is None, otherwise the default method is ‘dijkstra’.

Raises:
  • NetworkXPointlessConcept – If G is the null graph (that is, the graph on zero nodes).

  • NetworkXError – If G is not connected (or not weakly connected, in the case of a directed graph).

  • ValueError – If method is not among the supported options.

Examples

>>> G = nx.path_graph(5)
>>> nx.average_shortest_path_length(G)
2.0

For disconnected graphs, you can compute the average shortest path length for each component

>>> G = nx.Graph([(1, 2), (3, 4)])
>>> for C in (G.subgraph(c).copy() for c in nx.connected_components(G)):
...     print(nx.average_shortest_path_length(C))
1.0
1.0
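
A weighted variant of the first example (a sketch, assuming edge attributes can be assigned as in NetworkX); with every edge weight set to 2 the average doubles:

>>> G = nx.path_graph(5)
>>> for u, v in G.edges:
...     G[u][v]["weight"] = 2
>>> nx.average_shortest_path_length(G, weight="weight")
4.0
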
graphscope.nx.builtin.bfs_edges(G, source, depth_limit=None)

Returns the edges of a breadth-first search starting at source.

Parameters:
  • G (networkx graph) –

  • source (node) – Specify starting node for breadth-first search; this function iterates over only those edges in the component reachable from this node.

  • depth_limit (int, optional (default=len(G))) – Specify the maximum search depth.

Returns:

edges – A list of edges in the breadth-first-search.

Return type:

list

Examples

To get the edges in a breadth-first search:

>>> G = nx.path_graph(3)
>>> list(nx.bfs_edges(G, 0))
[(0, 1), (1, 2)]
>>> list(nx.bfs_edges(G, source=0, depth_limit=1))
[(0, 1)]
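
The returned edges can be post-processed directly; for instance, a BFS parent map is just a dictionary built from them (a small sketch using the graph from the example above):

>>> parents = {v: u for u, v in nx.bfs_edges(G, 0)}
>>> parents
{1: 0, 2: 1}
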
graphscope.nx.builtin.k_core(G, k=None, core_number=None)

Returns the k-core of G.

A k-core is a maximal subgraph that contains nodes of degree k or more.

Parameters:
  • G (networkx graph) – A graph or directed graph

  • k (int, optional) – The order of the core. If not specified return the main core.

  • core_number (dictionary, optional) – Precomputed core numbers keyed by node for the graph G.

Returns:

:class:`VertexDataContext` – A context with each vertex assigned a boolean value: 1 if the vertex is in the k-core, otherwise 0.

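No example is given for this routine; since the result is a graphscope-specific VertexDataContext rather than a subgraph, the sketch below only shows the call (assuming graphscope.nx is imported as nx and the builtin is reached via nx.builtin, matching the documented path):

>>> G = nx.complete_graph(4)
>>> ctx = nx.builtin.k_core(G, k=3)  # every node of K4 has degree 3, so all belong to the 3-core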

graphscope.nx.builtin.clustering(G, nodes=None, weight=None)

Compute the clustering coefficient for nodes.

For unweighted graphs, the clustering of a node \(u\) is the fraction of possible triangles through that node that exist,

\[c_u = \frac{2 T(u)}{deg(u)(deg(u)-1)},\]

where \(T(u)\) is the number of triangles through node \(u\) and \(deg(u)\) is the degree of \(u\).

For weighted graphs, there are several ways to define clustering [1]_. The one used here is defined as the geometric average of the subgraph edge weights [2]_,

\[c_u = \frac{1}{deg(u)(deg(u)-1)} \sum_{vw} (\hat{w}_{uv} \hat{w}_{uw} \hat{w}_{vw})^{1/3}.\]

The edge weights \(\hat{w}_{uv}\) are normalized by the maximum weight in the network \(\hat{w}_{uv} = w_{uv}/\max(w)\).

The value of \(c_u\) is set to 0 if \(deg(u) < 2\).

Additionally, this weighted definition has been generalized to support negative edge weights [3]_.

For directed graphs, the clustering is similarly defined as the fraction of all possible directed triangles or the geometric average of the subgraph edge weights, for unweighted and weighted directed graphs respectively [4]_.

\[c_u = \frac{2}{deg^{tot}(u)(deg^{tot}(u)-1) - 2deg^{\leftrightarrow}(u)} T(u),\]

where \(T(u)\) is the number of directed triangles through node \(u\), \(deg^{tot}(u)\) is the sum of in degree and out degree of \(u\) and \(deg^{\leftrightarrow}(u)\) is the reciprocal degree of \(u\).

Parameters:
  • G (graph) –

  • nodes (container of nodes, optional (default=all nodes in G)) – Compute clustering for nodes in this container.

  • weight (string or None, optional (default=None)) – The edge attribute that holds the numerical value used as a weight. If None, then each edge has weight 1.

Returns:

out – Clustering coefficient at specified nodes

Return type:

float, or dictionary

Examples

>>> G = nx.complete_graph(5)
>>> print(nx.clustering(G, 0))
1.0
>>> print(nx.clustering(G))
{0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}
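
A hedged check of the weighted definition above (assuming NetworkX-compatible behavior and writable edge attributes): when all weights are equal, the weighted coefficients reduce to the unweighted ones.

>>> G = nx.complete_graph(4)
>>> for u, v in G.edges:
...     G[u][v]["weight"] = 3.0
>>> c = nx.clustering(G, weight="weight")
>>> all(abs(v - 1.0) < 1e-12 for v in c.values())
True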

Notes

Self loops are ignored.


graphscope.nx.builtin.triangles(G, nodes=None)

Compute the number of triangles.

Finds the number of triangles that include a node as one vertex.

Parameters:
  • G (graph) – A networkx graph

  • nodes (container of nodes, optional (default= all nodes in G)) – Compute triangles for nodes in this container.

Returns:

out – Number of triangles keyed by node label.

Return type:

dictionary

Examples

>>> G = nx.complete_graph(5)
>>> print(nx.triangles(G, 0))
6
>>> print(nx.triangles(G))
{0: 6, 1: 6, 2: 6, 3: 6, 4: 6}
>>> print(list(nx.triangles(G, (0, 1)).values()))
[6, 6]

Notes

When computing triangles for the entire graph each triangle is counted three times, once at each node. Self loops are ignored.
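
As the note above says, summing the per-node counts triples each triangle, so dividing by three recovers the total number of triangles (a quick sketch reusing the complete graph from the example):

>>> sum(nx.triangles(G).values()) // 3
10
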

graphscope.nx.builtin.average_clustering(G, nodes=None, weight=None, count_zeros=True)

Compute the average clustering coefficient for the graph G.

The clustering coefficient for the graph is the average,

\[C = \frac{1}{n}\sum_{v \in G} c_v,\]

where \(n\) is the number of nodes in G.

Parameters:
  • G (graph) –

  • nodes (container of nodes, optional (default=all nodes in G)) – Compute average clustering for nodes in this container.

  • weight (string or None, optional (default=None)) – The edge attribute that holds the numerical value used as a weight. If None, then each edge has weight 1.

  • count_zeros (bool) – If False, include only the nodes with nonzero clustering in the average.

Returns:

avg – Average clustering

Return type:

float

Examples

>>> G = nx.complete_graph(5)
>>> print(nx.average_clustering(G))
1.0

Notes

This is a space-saving routine; it might be faster to use the clustering function to get a list and then take the average, as in the sketch below.

Self loops are ignored.

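Per the note above, the same average can be computed by hand from the clustering function; a minimal sketch:

>>> G = nx.complete_graph(5)
>>> vals = list(nx.clustering(G).values())
>>> sum(vals) / len(vals)
1.0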