MinHashLSHModel¶
-
class pyspark.ml.feature.MinHashLSHModel(java_model: Optional[JavaObject] = None)[source]
Model produced by MinHashLSH, where multiple hash functions are stored. Each hash function is picked from the following family of hash functions, where \(a_i\) and \(b_i\) are randomly chosen integers less than prime:
\(h_i(x) = ((x \cdot a_i + b_i) \mod prime)\)
This hash family is approximately min-wise independent according to the reference.
New in version 2.2.0.
Notes
See Tom Bohman, Colin Cooper, and Alan Frieze. “Min-wise independent linear permutations.” Electronic Journal of Combinatorics 7 (2000): R26.
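Examples
A minimal usage sketch, assuming an active SparkSession named spark; the toy data, column names, and seed are illustrative. A MinHashLSH estimator is fit on sparse binary vectors to obtain a MinHashLSHModel:
>>> from pyspark.ml.feature import MinHashLSH
>>> from pyspark.ml.linalg import Vectors
>>> data = [(0, Vectors.sparse(6, [0, 1, 2], [1.0, 1.0, 1.0])),
...         (1, Vectors.sparse(6, [2, 3, 4], [1.0, 1.0, 1.0])),
...         (2, Vectors.sparse(6, [0, 2, 4], [1.0, 1.0, 1.0]))]
>>> df = spark.createDataFrame(data, ["id", "features"])
>>> mh = MinHashLSH(inputCol="features", outputCol="hashes", numHashTables=3, seed=12345)
>>> model = mh.fit(df)  # the fitted MinHashLSHModel
>>> model.transform(df).columns  # transform appends the output column
['id', 'features', 'hashes']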
Methods
approxNearestNeighbors(dataset, key, …[, …]) Given a large dataset and an item, approximately find at most k items which have the closest distance to the item.
approxSimilarityJoin(datasetA, datasetB, …) Join two datasets to approximately find all pairs of rows whose distance is smaller than the threshold.
clear(param) Clears a param from the param map if it has been explicitly set.
copy([extra]) Creates a copy of this instance with the same uid and some extra params.
explainParam(param) Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams() Returns the documentation of all params with their optional default values and user-supplied values.
extractParamMap([extra]) Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
getInputCol() Gets the value of inputCol or its default value.
getNumHashTables() Gets the value of numHashTables or its default value.
getOrDefault(param) Gets the value of a param in the user-supplied param map or its default value.
getOutputCol() Gets the value of outputCol or its default value.
getParam(paramName) Gets a param by its name.
hasDefault(param) Checks whether a param has a default value.
hasParam(paramName) Tests whether this instance contains a param with a given (string) name.
isDefined(param) Checks whether a param is explicitly set by the user or has a default value.
isSet(param) Checks whether a param is explicitly set by the user.
load(path) Reads an ML instance from the input path, a shortcut of read().load(path).
read() Returns an MLReader instance for this class.
save(path) Save this ML instance to the given path, a shortcut of write().save(path).
set(param, value) Sets a parameter in the embedded param map.
setInputCol(value) Sets the value of inputCol.
setOutputCol(value) Sets the value of outputCol.
transform(dataset[, params]) Transforms the input dataset with optional parameters.
write() Returns an MLWriter instance for this ML instance.
Attributes
params Returns all params ordered by name.
Methods Documentation
-
approxNearestNeighbors(dataset: pyspark.sql.dataframe.DataFrame, key: pyspark.ml.linalg.Vector, numNearestNeighbors: int, distCol: str = 'distCol') → pyspark.sql.dataframe.DataFrame
Given a large dataset and an item, approximately find at most k items which have the closest distance to the item. If the outputCol is missing, the method will transform the data; if the outputCol exists, it will use that. This allows caching of the transformed data when necessary.
- Parameters
  - dataset : pyspark.sql.DataFrame
    The dataset to search for nearest neighbors of the key.
  - key : pyspark.ml.linalg.Vector
    Feature vector representing the item to search for.
  - numNearestNeighbors : int
    The maximum number of nearest neighbors.
  - distCol : str, optional
    Output column for storing the distance between each result row and the key. Defaults to "distCol" if not specified.
- Returns
  pyspark.sql.DataFrame
    A dataset containing at most k items closest to the key. A column "distCol" is added to show the distance between each row and the key.
Notes
This method is experimental and will likely change behavior in the next release.
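A hedged sketch of this method, reusing the model and df fitted in the example above; the query vector and k are illustrative. The key must have the same dimension as the input vectors:
>>> key = Vectors.sparse(6, [0, 1], [1.0, 1.0])
>>> # at most 2 rows of df with the smallest Jaccard distance to key;
>>> # the "distCol" column holds that distance (output omitted here)
>>> model.approxNearestNeighbors(df, key, numNearestNeighbors=2).select("id", "distCol").show()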
-
approxSimilarityJoin(datasetA: pyspark.sql.dataframe.DataFrame, datasetB: pyspark.sql.dataframe.DataFrame, threshold: float, distCol: str = 'distCol') → pyspark.sql.dataframe.DataFrame
Join two datasets to approximately find all pairs of rows whose distance is smaller than the threshold. If the outputCol is missing, the method will transform the data; if the outputCol exists, it will use that. This allows caching of the transformed data when necessary.
- Parameters
  - datasetA : pyspark.sql.DataFrame
    One of the datasets to join.
  - datasetB : pyspark.sql.DataFrame
    Another dataset to join.
  - threshold : float
    The threshold for the distance of row pairs.
  - distCol : str, optional
    Output column for storing the distance between each pair of rows. Defaults to "distCol" if not specified.
- Returns
  pyspark.sql.DataFrame
    A joined dataset containing pairs of rows. The original rows are in columns "datasetA" and "datasetB", and a column "distCol" is added to show the distance between each pair.
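A hedged sketch, again reusing model and df from the first example; df2 and the 0.6 threshold are illustrative. Matching pairs land in the struct columns "datasetA" and "datasetB", with the distance in the renamed "JaccardDistance" column:
>>> from pyspark.sql.functions import col
>>> data2 = [(3, Vectors.sparse(6, [1, 3, 5], [1.0, 1.0, 1.0])),
...          (4, Vectors.sparse(6, [2, 3, 5], [1.0, 1.0, 1.0]))]
>>> df2 = spark.createDataFrame(data2, ["id", "features"])
>>> pairs = model.approxSimilarityJoin(df, df2, 0.6, distCol="JaccardDistance")
>>> pairs.select(col("datasetA.id").alias("idA"),
...              col("datasetB.id").alias("idB"),
...              col("JaccardDistance")).show()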
-
clear(param: pyspark.ml.param.Param) → None
Clears a param from the param map if it has been explicitly set.
-
copy(extra: Optional[ParamMap] = None) → JP
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters
  - extra : dict, optional
    Extra parameters to copy to the new instance
- Returns
  JavaParams
    Copy of this instance
-
explainParam(param: Union[str, pyspark.ml.param.Param]) → str
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
-
explainParams() → str
Returns the documentation of all params with their optional default values and user-supplied values.
-
extractParamMap(extra: Optional[ParamMap] = None) → ParamMap
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters
  - extra : dict, optional
    extra param values
- Returns
  dict
    merged param map
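A small sketch of that merge ordering (the estimator construction is illustrative): defaults lose to user-supplied values, which lose to extra.
>>> mh = MinHashLSH(inputCol="features", outputCol="hashes")
>>> pm = mh.extractParamMap({mh.numHashTables: 5})
>>> pm[mh.numHashTables]  # the extra value overrides the default of 1
5
>>> pm[mh.outputCol]  # the user-supplied value survives
'hashes'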
-
getInputCol() → str
Gets the value of inputCol or its default value.
-
getNumHashTables() → int
Gets the value of numHashTables or its default value.
-
getOrDefault(param: Union[str, pyspark.ml.param.Param[T]]) → Union[Any, T]
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
-
getOutputCol() → str
Gets the value of outputCol or its default value.
-
getParam(paramName: str) → pyspark.ml.param.Param
Gets a param by its name.
-
hasDefault(param: Union[str, pyspark.ml.param.Param[Any]]) → bool
Checks whether a param has a default value.
-
hasParam(paramName: str) → bool
Tests whether this instance contains a param with a given (string) name.
-
isDefined(param: Union[str, pyspark.ml.param.Param[Any]]) → bool
Checks whether a param is explicitly set by the user or has a default value.
-
isSet(param: Union[str, pyspark.ml.param.Param[Any]]) → bool
Checks whether a param is explicitly set by the user.
-
classmethod load(path: str) → RL
Reads an ML instance from the input path, a shortcut of read().load(path).
-
classmethod read() → pyspark.ml.util.JavaMLReader[RL]
Returns an MLReader instance for this class.
-
save(path: str) → None
Save this ML instance to the given path, a shortcut of write().save(path).
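A sketch of the save/load round trip on the model fitted in the first example; the path is illustrative, and save fails if the path already exists:
>>> from pyspark.ml.feature import MinHashLSHModel
>>> model.save("/tmp/minhash-lsh-model")
>>> same_model = MinHashLSHModel.load("/tmp/minhash-lsh-model")
>>> same_model.getNumHashTables() == model.getNumHashTables()
True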
-
set(param: pyspark.ml.param.Param, value: Any) → None
Sets a parameter in the embedded param map.
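-
setInputCol(value) → MinHashLSHModel
Sets the value of inputCol.
-
setOutputCol(value) → MinHashLSHModel
Sets the value of outputCol.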
-
transform(dataset: pyspark.sql.dataframe.DataFrame, params: Optional[ParamMap] = None) → pyspark.sql.dataframe.DataFrame
Transforms the input dataset with optional parameters.
New in version 1.3.0.
- Parameters
  - dataset : pyspark.sql.DataFrame
    input dataset
  - params : dict, optional
    an optional param map that overrides embedded params.
- Returns
  pyspark.sql.DataFrame
    transformed dataset
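A brief sketch of transform on the model from the first example; it appends the outputCol ("hashes" here) with one dense vector per hash table:
>>> hashed = model.transform(df)
>>> len(hashed.head().hashes)  # numHashTables was set to 3
3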
-
write() → pyspark.ml.util.JavaMLWriter
Returns an MLWriter instance for this ML instance.
Attributes Documentation
-
inputCol = Param(parent='undefined', name='inputCol', doc='input column name.')
-
numHashTables: pyspark.ml.param.Param[int] = Param(parent='undefined', name='numHashTables', doc='number of hash tables, where increasing number of hash tables lowers the false negative rate, and decreasing it improves the running performance.')
-
outputCol = Param(parent='undefined', name='outputCol', doc='output column name.')
-
params
Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.