' which treats the LLM as an agent that interactively explores related entities and relations on KGs and performs reasoning based on the retrieved knowledge. We further implement this paradigm with a new approach called Think-on-Graph (ToG), in which the LLM agent iteratively executes beam search on the KG, discovers the most promising reasoning paths, and returns the most likely reasoning results. A series of well-designed experiments examines and illustrates the following advantages of ToG: 1) compared with plain LLMs, ToG has stronger deep reasoning ability; 2) ToG provides knowledge traceability and knowledge correctability by leveraging LLMs' reasoning and expert feedback; 3) ToG offers a flexible plug-and-play framework for different LLMs, KGs, and prompting strategies without any additional training cost; 4) the performance of ToG with small LLMs can exceed that of large LLMs such as GPT-4 in certain scenarios, which reduces the cost of LLM deployment and application. As a training-free method with lower computational cost and better generality, ToG achieves overall SOTA on 6 of 9 datasets, where most previous SOTAs rely on additional training.

Title: Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph
Authors: Jiashuo Sun (Xiamen University), Chengjin Xu (International Digital Economy Academy), Lumingyuan Tang (University of Southern California), Saizhuo Wang (The Hong Kong University of Science and Technology), Chen Lin (Xiamen University), Yeyun Gong (Microsoft), Lionel Ni (The Hong Kong University of Science and Technology (Guangzhou)), Heung-Yeung Shum (Microsoft), Jian Guo (Hong Kong University of Science and Technology)
Venue: arXiv:2307.07697 (2023); also published at ICLR 2024
Keywords: Knowledge Graph, Chain-of-Thought, Large Language Models
DOI: 10.48550/arxiv.2307.07697
URL: http://arxiv.org/abs/2307.07697
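To make the iterative "explore then prune" loop described in the abstract concrete, the following is a minimal, hypothetical Python sketch, not the authors' implementation: a toy knowledge graph and a lexical-overlap scorer stand in for real KG lookups and LLM prompting, and a depth-limited loop keeps only the top-k candidate paths at each step, i.e. beam search on the KG.

```python
# Hypothetical sketch of a ToG-style loop: an agent expands reasoning paths on a
# KG and keeps the top-k at each depth. TOY_KG, kg_neighbors and llm_score are
# placeholders for real graph lookups and LLM scoring prompts.
from typing import Dict, List, Tuple

Triple = Tuple[str, str, str]   # (head, relation, tail)
Path = List[Triple]

TOY_KG: Dict[str, List[Triple]] = {
    "Canberra": [("Canberra", "capital_of", "Australia")],
    "Australia": [("Australia", "currency", "Australian dollar"),
                  ("Australia", "continent", "Oceania")],
}

def kg_neighbors(entity: str) -> List[Triple]:
    """Stand-in for a SPARQL/graph lookup of outgoing triples."""
    return TOY_KG.get(entity, [])

def llm_score(question: str, path: Path) -> float:
    """Stand-in for prompting an LLM to rate how promising a partial path is."""
    words = set(question.lower().split())
    text = " ".join(" ".join(t) for t in path).lower()
    return float(sum(w in text for w in words))

def think_on_graph(question: str, topic_entity: str,
                   beam_width: int = 3, max_depth: int = 3) -> List[Path]:
    beams: List[Path] = [[]]                      # start from the topic entity
    for _ in range(max_depth):
        candidates: List[Path] = []
        for path in beams:
            tail = path[-1][2] if path else topic_entity
            candidates.extend(path + [t] for t in kg_neighbors(tail))
        if not candidates:                        # frontier exhausted
            break
        candidates.sort(key=lambda p: llm_score(question, p), reverse=True)
        beams = candidates[:beam_width]           # LLM-guided pruning (beam search)
    return beams

print(think_on_graph("What currency is used in the country whose capital is Canberra?",
                     "Canberra"))
```

In the actual method the LLM also judges at each depth whether the retrieved paths already suffice to answer the question and then generates the answer from them; that evaluation and generation step is omitted from this sketch.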
University"},"volume":"abs\u002F2307.07697"},"venue_hhb_id":"5ea18efeedb6e7d53c00a01c","versions":[{"id":"64b60eaa3fda6d7f06eae92c","sid":"2307.07697","src":"arxiv","vsid":"S4306400194","year":2023},{"id":"64c78b983fda6d7f06db49b8","sid":"journals\u002Fcorr\u002Fabs-2307-07697","src":"dblp","vsid":"journals\u002Fcorr","year":2023},{"id":"65ea8ca813fb2c6cf6314f10","sid":"nnVO1PvbTv","src":"conf_iclr","vsid":"ICLR.cc\u002F2024\u002FConference","year":2024},{"id":"667b00169255e7a318a59986","sid":"119601712acbe6ee133a1744f0970190c4195519","src":"semanticscholar","vsid":"1901e811-ee72-4b20-8f7e-de08cd395a10","year":2023},{"id":"66c60bff6c88b2fc28cb717d","sid":"fbcda3e1f19b1ed86a30b7c34f792e8e738f32ae","src":"semanticscholar","year":2023},{"id":"66e0233701d2a3fbfc29e4df","sid":"conf\u002Ficlr\u002FSunXTW0GNSG24","src":"dblp","vsid":"conf\u002Ficlr","year":2024},{"id":"6578d6f2939a5f40826c4e74","sid":"W4384643740","src":"openalex","vsid":"S4306400194","year":2023}],"year":2023},{"abstract":"The lack of fine-grained 3D shape segmentation data is the main obstacle to developing learning-based 3D segmentation techniques. We propose an effective semi-supervised method for learning 3D segmentations from a few labeled 3D shapes and a large amount of unlabeled 3D data. For the unlabeled data, we present a novel multilevel consistency loss to enforce consistency of network predictions between perturbed copies of a 3D shape at multiple levels: point level, part level, and hierarchical level. For the labeled data, we develop a simple yet effective part substitution scheme to augment the labeled 3D shapes with more structural variations to enhance training. Our method has been extensively validated on the task of 3D object semantic segmentation on PartNet and ShapeNetPart, and indoor scene semantic segmentation on ScanNet. 
It exhibits superior performance to existing semi-supervised methods and unsupervised 3D pre-training approaches.

Authors: Sun Chun-Yu (Tsinghua University), Yang Yu-Qi (Tsinghua University), Guo Hao-Xiang (Tsinghua University), Wang Peng-Shuai (Microsoft Research Asia), Tong Xin (Microsoft Research Asia), Liu Yang (Microsoft Research Asia), Shum Heung-Yeung (Tsinghua University)
Venue: Computational Visual Media 9(2): 229-247, 2023; also available as arXiv:2204.08824
Keywords: shape segmentation, semi-supervised learning, multilevel consistency
DOI: 10.1007/s41095-022-0281-9
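The multilevel consistency loss lends itself to a short sketch. Below is a hedged, hypothetical PyTorch version, not the authors' code: the network is run on two perturbed copies of an unlabeled shape with point correspondence preserved by the perturbation, and prediction disagreement is penalised at the point level and, after pooling predictions within parts and hierarchy nodes, at the part and hierarchical levels. MSE between softmax outputs is used purely as a placeholder consistency measure, and the part and hierarchy-node assignments are assumed to be given.

```python
# Hypothetical sketch of a multilevel (point / part / hierarchy) consistency loss.
import torch
import torch.nn.functional as F

def pooled(probs: torch.Tensor, group_ids: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Average per-point class probabilities within each group (part or hierarchy node)."""
    sums = torch.zeros(num_groups, probs.shape[1]).index_add_(0, group_ids, probs)
    counts = torch.bincount(group_ids, minlength=num_groups).clamp(min=1).unsqueeze(1)
    return sums / counts

def multilevel_consistency_loss(logits_a, logits_b, part_ids, node_ids,
                                num_parts, num_nodes):
    """logits_*: (N_points, C) predictions for two perturbed copies of one shape."""
    p_a, p_b = F.softmax(logits_a, dim=1), F.softmax(logits_b, dim=1)
    point_loss = F.mse_loss(p_a, p_b)                                  # point level
    part_loss = F.mse_loss(pooled(p_a, part_ids, num_parts),
                           pooled(p_b, part_ids, num_parts))           # part level
    node_loss = F.mse_loss(pooled(p_a, node_ids, num_nodes),
                           pooled(p_b, node_ids, num_nodes))           # hierarchical level
    return point_loss + part_loss + node_loss

# Toy usage: 1024 points, 4 classes, 6 parts grouped into 3 hierarchy nodes.
N, C = 1024, 4
logits_a, logits_b = torch.randn(N, C), torch.randn(N, C)
part_ids = torch.randint(0, 6, (N,))
node_ids = part_ids // 2
print(multilevel_consistency_loss(logits_a, logits_b, part_ids, node_ids, 6, 3))
```

Pooling before comparing makes the part-level and hierarchical terms insensitive to per-point noise while still requiring the two views to agree on the aggregate labeling of each part and subtree, which is the intuition behind enforcing consistency at multiple levels rather than at points alone.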