Woonkly feed algorithm - objective segmentation
A Machine Learning model, a prototype to be adapted to the social network so that product recommendations are objective, strengthening the segmentation and personalization of ads.
A recommender system that suggests items to users, built with TensorFlow for the Woonkly social network.
Real-world recommender systems are often composed of two stages:
  1. The retrieval stage is responsible for selecting an initial set of hundreds of candidates from among all possible candidates. The main objective of this model is to efficiently weed out all the candidates the user is not interested in. Because the retrieval model may be dealing with millions of candidates, it has to be computationally efficient.
  2. The ranking stage takes the outputs of the retrieval model and fine-tunes them to select the best possible handful of recommendations. Its task is to narrow down the set of items the user may be interested in to a shortlist of likely candidates.

Imports

Let's first get our imports out of the way.
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
import os
import pprint
import tempfile

from typing import Dict, Text

import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs

Preparing the dataset

We're going to use the same MovieLens data as in the retrieval stage. This time, we're also going to keep the ratings: these are the targets we are trying to predict.
ratings = tfds.load("movielens/100k-ratings", split="train")

ratings = ratings.map(lambda x: {
    "movie_title": x["movie_title"],
    "user_id": x["user_id"],
    "user_rating": x["user_rating"]
})
As before, we'll split the data by putting 80% of the ratings in the train set, and 20% in the test set.
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)

train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
Let's also figure out unique user ids and movie titles present in the data.
This is important because we need to be able to map the raw values of our categorical features to embedding vectors in our models. To do that, we need a vocabulary that maps a raw feature value to an integer in a contiguous range: this allows us to look up the corresponding embeddings in our embedding tables.
movie_titles = ratings.batch(1_000_000).map(lambda x: x["movie_title"])
user_ids = ratings.batch(1_000_000).map(lambda x: x["user_id"])

unique_movie_titles = np.unique(np.concatenate(list(movie_titles)))
unique_user_ids = np.unique(np.concatenate(list(user_ids)))
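To make the role of these vocabularies concrete, here is a hypothetical miniature of what the StringLookup + Embedding pair does: raw string ids are mapped to contiguous integers, with index 0 reserved for unknown values (which is why the embedding tables below have one extra row), and those integers index rows of an embedding matrix. This sketch uses plain NumPy and made-up ids:

```python
import numpy as np

# Toy vocabulary built from a handful of made-up user ids.
raw_user_ids = ["42", "7", "42", "138"]
vocabulary = sorted(set(raw_user_ids))                # analogous to np.unique above
index = {v: i + 1 for i, v in enumerate(vocabulary)}  # 0 is reserved for out-of-vocabulary ids

embedding_dimension = 4
rng = np.random.default_rng(seed=0)
# One row per vocabulary entry, plus an extra row 0 for unknown ids.
embedding_table = rng.normal(size=(len(vocabulary) + 1, embedding_dimension))

def embed(user_id):
    """Look up the embedding row for a raw string id."""
    return embedding_table[index.get(user_id, 0)]
```

In the real model the embedding table is a trainable layer; here it is just a random matrix for illustration.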

Implementing a model

Architecture

Ranking models do not face the same efficiency constraints as retrieval models do, so we have a little more freedom in our choice of architectures.
A model composed of multiple stacked dense layers is a relatively common architecture for ranking tasks. We can implement it as follows:
class RankingModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        embedding_dimension = 32

        # Compute embeddings for users.
        self.user_embeddings = tf.keras.Sequential([
            tf.keras.layers.experimental.preprocessing.StringLookup(
                vocabulary=unique_user_ids, mask_token=None),
            tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
        ])

        # Compute embeddings for movies.
        self.movie_embeddings = tf.keras.Sequential([
            tf.keras.layers.experimental.preprocessing.StringLookup(
                vocabulary=unique_movie_titles, mask_token=None),
            tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
        ])

        # Compute predictions.
        self.ratings = tf.keras.Sequential([
            # Learn multiple dense layers.
            tf.keras.layers.Dense(256, activation="relu"),
            tf.keras.layers.Dense(64, activation="relu"),
            # Make rating predictions in the final layer.
            tf.keras.layers.Dense(1)
        ])

    def call(self, inputs):
        user_id, movie_title = inputs

        user_embedding = self.user_embeddings(user_id)
        movie_embedding = self.movie_embeddings(movie_title)

        return self.ratings(tf.concat([user_embedding, movie_embedding], axis=1))
This model takes user ids and movie titles, and outputs a predicted rating:

RankingModel()((["42"], ["One Flew Over the Cuckoo's Nest (1975)"]))

Loss and metrics

The next component is the loss used to train our model. TFRS has several loss layers and tasks to make this easy.
In this instance, we'll make use of the Ranking task object: a convenience wrapper that bundles together the loss function and metric computation.
We'll use it together with the MeanSquaredError Keras loss in order to predict the ratings.
task = tfrs.tasks.Ranking(
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=[tf.keras.metrics.RootMeanSquaredError()]
)
The task itself is a Keras layer that takes true and predicted values as arguments, and returns the computed loss. We'll use it to implement the model's training loop.
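Concretely, the loss and metric that the Ranking task bundles together reduce to the following computation, reproduced here with plain NumPy (the real layer additionally handles batching, sample weights, and metric state; the ratings below are made-up numbers):

```python
import numpy as np

# Made-up true ratings and model predictions for three examples.
labels = np.array([4.0, 3.0, 5.0])
predictions = np.array([3.5, 3.0, 4.0])

# MeanSquaredError loss: mean of the squared differences.
mse_loss = np.mean((labels - predictions) ** 2)

# RootMeanSquaredError metric: square root of the MSE.
rmse_metric = np.sqrt(mse_loss)
```

RMSE is reported in the same units as the ratings themselves, which makes it the more interpretable of the two for monitoring training.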

The full model

We can now put it all together into a model. TFRS exposes a base model class (tfrs.models.Model) which streamlines building models: all we need to do is set up the components in the __init__ method and implement the compute_loss method, taking in the raw features and returning a loss value.
The base model will then take care of creating the appropriate training loop to fit our model.
class MovielensModel(tfrs.models.Model):

    def __init__(self):
        super().__init__()
        self.ranking_model: tf.keras.Model = RankingModel()
        self.task: tf.keras.layers.Layer = tfrs.tasks.Ranking(
            loss=tf.keras.losses.MeanSquaredError(),
            metrics=[tf.keras.metrics.RootMeanSquaredError()]
        )

    def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
        rating_predictions = self.ranking_model(
            (features["user_id"], features["movie_title"]))

        # The task computes the loss and the metrics.
        return self.task(labels=features["user_rating"], predictions=rating_predictions)

Fitting and evaluating

After defining the model, we can use standard Keras fitting and evaluation routines to fit and evaluate the model.
Let's first instantiate the model.

model = MovielensModel()
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))
Then shuffle, batch, and cache the training and evaluation data.
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
Then train the model:
model.fit(cached_train, epochs=3)
As the model trains, the loss falls and the RMSE metric improves.
Finally, we can evaluate our model on the test set:
model.evaluate(cached_test, return_dict=True)