Monthly Report 2025/09, 2025/10
Thu 11 Sep 2025
14:15:30
Crisping up just the chicken skin in the air fryer and eating it with salt is great.
So much fat renders out that I've given up on lining the basket with parchment paper and instead collect it in the drip tray as chicken fat. Getting half a cup of tasty fat out of 200 yen is nice. I made fried rice with it. Tasty, but I may have been too conservative with the chicken fat. There's still some left, so next time I'll use it all up.
Collecting and storing chicken fat just to make this would be silly, though, so the smarter move is to crisp the chicken skin in the very pan you're about to make the fried rice in.
18:46:13
I was letting NicoNico autoplay its recommendations when an anime called Turkey! came on. I watched episode one completely straight right up to the very end, and then the sheer silliness floored me. It's been a while since I've seen an anime in this vein. I'm up to episode six and the wackiness keeps accelerating.
Tue 16 Sep 2025
14:55:08 Summarizing papers I've read recently
I keep the papers I'm reading or have read open as tabs in my browser's reading space; as an experiment, I'm jotting down notes before closing them. Some overlap with past diary entries.
- LLM-based User Profile Management for Recommender System
  - Keeps user information as text, and the LLM-based recommender works directly from it
- It’s Enough: Relaxing Diagonal Constraints in Linear Autoencoders for Recommendation
  - Relaxes the constraint on the diagonal entries of EASE (see the sketch below)
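For reference, EASE has a closed-form solution in which the zero-diagonal constraint shows up as a simple normalization, and my understanding is that this paper relaxes that hard constraint to an inequality, allowing small nonzero diagonals. A minimal sketch of plain EASE, assuming a binary user-item matrix (the relaxed solution itself isn't reproduced here):

```python
import numpy as np

def ease(X, lam=500.0):
    """Closed-form EASE: item-item weight matrix B with diag(B) = 0.
    X is a (num_users x num_items) binary interaction matrix."""
    G = X.T @ X + lam * np.eye(X.shape[1])  # regularized Gram matrix
    P = np.linalg.inv(G)
    B = -P / np.diag(P)                     # diag(B) = 0 enters as this normalization
    np.fill_diagonal(B, 0.0)
    return B                                # recommendation scores: X @ B
```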
Continual Recommender Systems
Modern recommender systems operate in uniquely dynamic settings: user interests, item pools, and popularity trends shift continuously, and models must adapt in real time without forgetting past preferences. While existing tutorials on continual or lifelong learning cover broad machine learning domains (e.g., vision and graphs), they do not address recommendation-specific demands, such as balancing stability and plasticity per user, handling cold-start items, and optimizing recommendation metrics under streaming feedback. This tutorial aims to make a timely contribution by filling that gap. We begin by reviewing the background and problem settings, followed by a comprehensive overview of existing approaches. We then highlight recent efforts to apply continual learning to practical deployment environments, such as resource-constrained systems and sequential interaction settings. Finally, we discuss open challenges and future research directions. We expect this tutorial to benefit researchers and practitioners in recommender systems, data mining, AI, and information retrieval across academia and industry.
- Continual Recommender Systems
  - Instead of retraining from scratch every time, carry over the previously trained model and keep updating it (toy example below)
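Not the tutorial's code, just a toy illustration of the warm-start idea using scikit-learn's partial_fit, on random stand-in data:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")

for day in range(5):
    X = rng.normal(size=(100, 8))      # stand-in user/item features for the day
    y = rng.integers(0, 2, size=100)   # stand-in click labels
    # partial_fit continues from yesterday's weights instead of refitting
    # on the full history, which is the basic continual-learning move
    model.partial_fit(X, y, classes=[0, 1])
```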
A Pre-trained Sequential Recommendation Framework: Popularity Dynamics for Zero-shot Transfer
Sequential recommenders are crucial to the success of online applications, e.g., e-commerce, video streaming, and social media. While model architectures continue to improve, for every new application domain, we still have to train a new model from scratch for high quality recommendations. On the other hand, pre-trained language and vision models have shown great success in zero-shot or few-shot adaptation to new application domains. Inspired by the success of pre-trained models in peer AI fields, we propose a novel pre-trained sequential recommendation framework: PrepRec. We learn universal item representations by modeling item popularity dynamics. Through extensive experiments on five real-world datasets, we show that PrepRec, without any auxiliary information, can not only zero-shot transfer to a new domain, but achieve competitive performance compared to state-of-the-art sequential recommender models with only a fraction of the model size. In addition, with a simple post-hoc interpolation, PrepRec can improve the performance of existing sequential recommenders on average by 13.8% in Recall@10 and 29.5% in NDCG@10. We provide an anonymized implementation of PrepRec at https://anonymous.4open.science/r/PrepRec--2F60/
- A Pre-trained Sequential Recommendation Framework: Popularity Dynamics for Zero-shot Transfer
  - Represents items purely by their popularity dynamics, with no dependence on the domain at all (rough sketch below)
  - The training data and the target data can have completely different items and users
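My rough reading of the core trick, as a hypothetical sketch (the function, window sizes, and rank-percentile encoding are my guesses, not the paper's exact recipe): describe each item only by how its recent popularity moves, normalized so the features mean the same thing in any domain.

```python
import numpy as np

def popularity_dynamics_features(timestamps_by_item, now, windows=(1, 7, 30)):
    """Represent items by popularity dynamics only, never by ID.
    timestamps_by_item: {item: [unix timestamps of interactions]}."""
    items = list(timestamps_by_item)
    feats = []
    for win in windows:  # windows in days
        counts = np.array([sum(now - t <= win * 86400
                               for t in timestamps_by_item[i]) for i in items],
                          dtype=float)
        ranks = counts.argsort().argsort()            # popularity rank in this domain
        feats.append(ranks / max(len(items) - 1, 1))  # percentile: a domain-free scale
    return dict(zip(items, np.stack(feats, axis=1)))
```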
PBiLoss: Popularity-Aware Regularization to Improve Fairness in Graph-Based Recommender Systems
Recommender systems, especially those based on graph neural networks (GNNs), have achieved remarkable success in capturing user-item interaction patterns. However, they remain susceptible to popularity bias (the tendency to over-recommend popular items), resulting in reduced content diversity and compromised fairness. In this paper, we propose PBiLoss, a novel regularization-based loss function designed to counteract popularity bias in graph-based recommender models explicitly. PBiLoss augments traditional training objectives by penalizing the model's inclination toward popular items, thereby encouraging the recommendation of less popular but potentially more personalized content. We introduce two sampling strategies: Popular Positive (PopPos) and Popular Negative (PopNeg), which respectively modulate the contribution of the positive and negative popular items during training. We further explore two methods to distinguish popular items: one based on a fixed popularity threshold and another without any threshold, making the approach flexible and adaptive. Our proposed method is model-agnostic and can be seamlessly integrated into state-of-the-art graph-based frameworks such as LightGCN and its variants. Comprehensive experiments across multiple real-world datasets demonstrate that PBiLoss significantly improves fairness, as demonstrated by reductions in the Popularity-Rank Correlation for Users (PRU) and Popularity-Rank Correlation for Items (PRI), while maintaining or even enhancing standard recommendation accuracy and ranking metrics. These results highlight the effectiveness of directly embedding fairness objectives into the optimization process, providing a practical and scalable solution for balancing accuracy and equitable content exposure in modern recommender systems.
- PBiLoss: Popularity-Aware Regularization to Improve Fairness in Graph-Based Recommender Systems
  - Reweights the training loss according to how popular each item is (sketch below)
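A sketch of how I picture the PopPos variant: a BPR loss plus a penalty on the scores of popular positive items. The weighting and names are my guess, not the paper's exact formulation:

```python
import torch

def pbi_style_loss(pos_scores, neg_scores, pos_is_popular, alpha=0.1):
    # standard BPR pairwise term
    bpr = -torch.log(torch.sigmoid(pos_scores - neg_scores)).mean()
    # popularity penalty: discourage pushing popular positives even higher
    pop_penalty = (pos_scores * pos_is_popular.float()).mean()
    return bpr + alpha * pop_penalty
```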
LONGER: Scaling Up Long Sequence Modeling in Industrial Recommenders
Modeling ultra-long user behavior sequences is critical for capturing both long- and short-term preferences in industrial recommender systems. Existing solutions typically rely on two-stage retrieval or indirect modeling paradigms, incurring upstream-downstream inconsistency and computational inefficiency. In this paper, we present LONGER, a Long-sequence Optimized traNsformer for GPU-Efficient Recommenders. LONGER incorporates (i) a global token mechanism for stabilizing attention over long contexts, (ii) a token merge module with lightweight InnerTransformers and hybrid attention strategy to reduce quadratic complexity, and (iii) a series of engineering optimizations, including training with mixed-precision and activation recomputation, KV cache serving, and the fully synchronous model training and serving framework for unified GPU-based dense and sparse parameter updates. LONGER consistently outperforms strong baselines in both offline metrics and online A/B testing in both advertising and e-commerce services at ByteDance, validating its consistent effectiveness and industrial-level scaling laws. Currently, LONGER has been fully deployed at more than 10 influential scenarios at ByteDance, serving billions of users.
- LONGER: Scaling Up Long Sequence Modeling in Industrial Recommenders
  - Built a Transformer that can stably take very long sequences as input (global-token sketch below)
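Of the three ingredients in the abstract, the global-token mechanism is the easiest to picture: prepend a few learnable tokens that every position attends to. A generic sketch of that idea with shapes and names of my own choosing, not LONGER's actual architecture:

```python
import torch
import torch.nn as nn

class GlobalTokenEncoder(nn.Module):
    def __init__(self, d_model=64, n_global=4, n_heads=4):
        super().__init__()
        # a few learnable tokens shared across the whole sequence
        self.global_tokens = nn.Parameter(torch.randn(n_global, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, seq):  # seq: (batch, seq_len, d_model)
        g = self.global_tokens.unsqueeze(0).expand(seq.size(0), -1, -1)
        x = torch.cat([g, seq], dim=1)  # every position can attend to the globals
        return self.encoder(x)
```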
- MLP-Mixer: An all-MLP Architecture for Vision
  - Does image recognition with MLPs alone, no CNNs and no Transformers
  - The trick is to transpose the input between MLPs
- Pyramid Mixer: Multi-dimensional Multi-period Interest Modeling for Sequential Recommendation
  - Sequential recommendation done almost entirely with MLPs
  - Runs the time-series data through MLPs while transposing, MLP-Mixer style (one such block is sketched below)
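The transpose trick the two papers share, as one minimal Mixer block (the standard MLP-Mixer pattern; hidden sizes are arbitrary, and Pyramid Mixer's actual block surely differs):

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, n_tokens, d_channels, hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_channels)
        self.token_mlp = nn.Sequential(   # mixes across tokens (time steps / patches)
            nn.Linear(n_tokens, hidden), nn.GELU(), nn.Linear(hidden, n_tokens))
        self.norm2 = nn.LayerNorm(d_channels)
        self.channel_mlp = nn.Sequential( # mixes across channels (features)
            nn.Linear(d_channels, hidden), nn.GELU(), nn.Linear(hidden, d_channels))

    def forward(self, x):  # x: (batch, n_tokens, d_channels)
        # transpose so the first MLP runs over the token axis, then transpose back
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))
```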
Tue 23 Sep 2025
14:35:33 All things are impermanent
Anything that uses npm packages in particular throws up vulnerability alerts nonstop, so I'm briskly archiving those repos one after another.
Tue 07 Oct 2025
15:37:09 Indulgence
People who don't want to be disliked stop bringing up the small things.
Wed 08 Oct 2025
15:57:43 The claim that video generation can reason
By converting a problem into a problem about images, video generation (i2v) may be able to solve it.
- Gravity (earth/moon)
- Colorization
- Maze
Thu 23 Oct 2025
16:46:11
Started Pokémon Legends: Z-A. It came out on the 16th last week, but I didn't start until Friday evening on the 17th. Seeing it through properly takes a long while, but just getting to the staff roll took me two full days. If I only wanted to finish fast I'd keep swapping strong Pokémon in and out, but I absolutely wanted to journey to the end with my Pidgeot.
Though by now Pidgeot has been benched, supplanted by Zygarde.
16:52:50
A wristwatch always felt like it was in the way, so I never wore one, but I started wanting one for long motorcycle trips. So I bought a Casio watch for under 2,000 yen. It's light and thin, so it might work out.