Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

Post

Replies

Boosts

Views

Activity

Cannot find type ToolOutput in scope
My sample app has been working with the following code: func call(arguments: Arguments) async throws -> ToolOutput { var temp: Int switch arguments.city { case .singapore: temp = Int.random(in: 30..<40) case .china: temp = Int.random(in: 10..<30) } let content = GeneratedContent(temp) let output = ToolOutput(content) return output } However, in 26 beta 5, ToolOutput is no longer available. Please advise what has changed.
3
0
257
Aug ’25
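For the ToolOutput question above, one plausible migration — assuming the current Tool protocol lets call(arguments:) return any PromptRepresentable value (such as GeneratedContent or a String) instead of the removed ToolOutput wrapper — is a sketch like the following; the tool name and argument shape are hypothetical:

```swift
import FoundationModels

// Hypothetical weather tool; the key change is the return type of call(arguments:).
struct TemperatureTool: Tool {
    let name = "getTemperature"
    let description = "Returns a rough temperature for a city."

    @Generable
    struct Arguments {
        @Guide(description: "Name of the city to look up.")
        var city: String
    }

    // Assumption: newer seeds let a tool return a PromptRepresentable value
    // (GeneratedContent here) directly, instead of wrapping it in ToolOutput.
    func call(arguments: Arguments) async throws -> GeneratedContent {
        let temp = arguments.city.lowercased() == "singapore"
            ? Int.random(in: 30..<40)
            : Int.random(in: 10..<30)
        return GeneratedContent(temp)
    }
}
```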
Plenty of LanguageModelSession.GenerationError.refusal errors after 26.4 update
Hello! After the 26.4 update I get a huge number of LanguageModelSession.GenerationError.refusal errors when using guided generation with Generables, for no apparent reason. The errors also occur if I try to generate a Boolean response using 'generating: Bool.self'. The explanation attached to the error always looks like this: Response(userPrompt: "", duration: 0.230917542, promptTokenCount: Optional(66), responseTokenCount: Optional(11), feedbackAttachment: nil, content: "I apologize, but I cannot fulfill this request.", rawContent: "I apologize, but I cannot fulfill this request.", transcriptEntries: ArraySlice([])) None of the prompts and Generables I use are remotely profane. Before 26.4, these errors never occurred on the same prompts and Generables. The 26.4 update has rendered those features unusable for me. Is this a known bug, or am I doing something wrong?
3
0
464
1w
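A minimal sketch of surfacing that failure explicitly — assuming the refusal and guardrailViolation cases of LanguageModelSession.GenerationError, and using a hypothetical @Generable type in place of Bool — could look like this:

```swift
import FoundationModels

// Hypothetical Generable standing in for the Bool-style guided generation above.
@Generable
struct YesNo {
    var answer: Bool
}

func askYesNo(_ question: String) async -> Bool? {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: question, generating: YesNo.self)
        return response.content.answer
    } catch let error as LanguageModelSession.GenerationError {
        // Assumption: .refusal and .guardrailViolation are among the error cases;
        // logging them separately makes unexpected refusals visible during testing.
        switch error {
        case .refusal:
            print("Model refused: \(error.localizedDescription)")
        case .guardrailViolation:
            print("Guardrail violation: \(error.localizedDescription)")
        default:
            print("Generation failed: \(error.localizedDescription)")
        }
        return nil
    } catch {
        print("Unexpected error: \(error)")
        return nil
    }
}
```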
Shortcut - “Use Model” error handling?
I have a series of shortcuts that I’ve written that use the “Use Model” action to do various things. For example, I have a shortcut “Clipboard Markdown to Notes” that takes the content of the clipboard, creates a new note in Notes, converts the markdown content to rich text, adds it to the note etc. One key step is to analyze the markdown content with “Use Model” and generate a short descriptive title for the note. I use the on-device model for this, but sometimes the content and prompt exceed the context window size and the action fails with an error message to that effect. In that case, I’d like to either repeat the action using the Cloud model, or, if the error was a refusal, to prompt the user to enter a title to use. I’ve tried using an IF based on whether the response had any text in it, but that didn’t work. No matter what I’ve tried, I can’t seem to find a way to catch the error from Use Model, determine what the error was, and take appropriate action. Is there a way to do this? (And by the way, a huge “thank you” to whoever had the idea of making AppIntents visible in Shortcuts and adding the Use Model action — has made a huge difference already, and it lets us see what Siri will be able to use as well.)
3
0
622
1w
Error when opening mlpackage with Xcode
Hello, I'm trying to write a model with PyTorch and convert it to CoreML. I have written other models that convert successfully, and even the problematic one converts, but I can't visualize it with Xcode to see where it is running. The error that appears is: There was a problem decoding this Core ML document validator error: unable to open file for read Does anyone know why this is happening? Thanks a lot, Álvaro Corrochano
3
0
253
Apr ’25
iOS 26.4: Regressions in Foundation Models
After installing iOS 26.4, the Foundation Models instruction-following and tool-calling capabilities have degraded significantly. The model is not usable anymore. Examples: This works: "Is the car plugged in?" This does not work: "Tell me if the car is plugged in" Anything with the word "frunk" (front trunk) triggers a Guardrail Violation. Phrases like "Lock Pride" also trigger a Guardrail Violation (Pride is the name of the car). Tool calling only works half the time for really obvious things.
3
1
446
1w
Accessing Apple Intelligence APIs: Custom Prompt Support and Inference Capabilities
Hello Apple Developer Community, I'm exploring the integration of Apple Intelligence features into my mobile application and have a couple of questions regarding the current and upcoming API capabilities: Custom Prompt Support: Is there a way to pass custom prompts to Apple Intelligence to generate specific inferences? For instance, can we provide a unique prompt to the Writing Tools or Image Playground APIs to obtain tailored outputs? Direct Inference Capabilities: Beyond the predefined functionalities like text rewriting or image generation, does Apple Intelligence offer APIs that allow for more generalized inference tasks based on custom inputs? I understand that Apple has provided APIs such as Writing Tools, Image Playground, and Genmoji. However, I'm interested in understanding the extent of customization and flexibility these APIs offer, especially concerning custom prompts and generalized inference. Additionally, are there any plans or timelines for expanding these capabilities, perhaps with the introduction of new SDKs or frameworks that allow deeper integration and customization? Any insights, documentation links, or experiences shared would be greatly appreciated. Thank you in advance for your assistance!
3
0
367
Jun ’25
Foundation Models flags 'Six Flags Great America' as unsafe
I'm working on a to-do list app that uses SpeechTranscriber and the Foundation Models framework to transcribe a user's voice into text and create to-do items based on it. After about 30 minutes looking at my code, I couldn't figure out why I was failing to generate a to-do for "I need to go to Six Flags Great America tomorrow at 3pm." It turns out I was consistently triggering the Foundation Models safety filter for unsafe content ("May contain unsafe content"). Lesson learned: consider comprehensively logging Foundation Models error states to quickly identify when safety filters are unexpectedly triggered.
3
1
515
Jul ’25
LanguageModelSession always returns very lengthy responses
No matter what, the LanguageModelSession always returns very lengthy, verbose responses. I set the maximumResponseTokens option to various small numbers, but it doesn't appear to have any effect. I've even used instructions to keep responses between 3 and 8 words, but it still returns multiple paragraphs. Is there a way to manage LLM response length? Thanks.
3
0
331
Sep ’25
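Two levers worth combining here are session instructions and a response-token cap; a minimal sketch, assuming GenerationOptions accepts maximumResponseTokens as named in the post and that instructions can be passed as a string:

```swift
import FoundationModels

// A sketch: pair terse instructions with a response-token cap.
func shortAnswer(to prompt: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "Answer in one short sentence of at most eight words."
    )
    // Assumption: maximumResponseTokens caps how many tokens the model may emit.
    let options = GenerationOptions(maximumResponseTokens: 40)
    let response = try await session.respond(to: prompt, options: options)
    return response.content
}
```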
My Vision for AI and Algorithmically Optimised Operating Systems
Bear with me, please. Please make sure a highly skilled technical person reads and understands this. I want to describe my vision for (AI/Algorithmically) Optimised Operating Systems. To explain it properly, I will describe the process to build it (pseudo). Required Knowledge (no particular order): Processor Logic Circuits, LLM models, LLM tool usage, Python OO coding, Procedural vs OO, NLP fuzzy matching, benchmarking, canvas/artefacts/dynamic HTML interfaces, concepts of how AI models are vastly compressed and miniaturised forms of full data, Algorithmic vs AI. First, take all OO Python code (example) on GitHub (example), then separate each function from each object into its own procedure (procedural logic) by making a logical procedural list of actions to perform only that function based on its entire dependency chain (i.e. all other objects it relies on). Relate all compiled functions using (for example) fuzzy matching on the name, or AI-based functional profiling to get multiple instances of each function. Starting with the most used function, test each one against the others that perform the same task for bugs and completeness. Determine the fastest, most optimal version of that function (and every function). Add a single instance of each most optimal function to the centralised tool codebase, which will later be utilised by the language models. This ensures we rely only on the most optimised function for each and every use case — with every program using one shared instance of that function instead of compiling it separately. In this way, we optimise all existing (Python OO) code based on maximum availability of certain resources. Use the optimised code to teach the AI how to map each user task directly to optimised code (i.e. each user-facing function in unoptimised program code to its optimised version). Later, teach the codebase how to optimise these functions directly to the logic circuits on the processor — as an Operating System — using essentially the same process. Eventually, use the most optimised form for tasks via a dynamic canvas/artefacts UI interface generated by the Start Menu using the language model. Allow custom interfaces to emerge dynamically from the available functions based on language understanding. Grow and optimise the algorithmic codebase until the entire operating system — with versioning — evolves based on the AI-generated user interfaces (compressing them into new versions of the tools codebase). Please don’t give up until you understand this concept completely and nothing in the description is beyond your full comprehension. It’s a highly significant step forward.
3
0
342
Oct ’25
I Need some clarifications about FoundationModels
Hello I’m experimenting with Apple’s on‑device language model via the FoundationModels framework in Xcode (using LanguageModelSession in my code). I’d like to confirm a few points: • Is the language model provided by FoundationModels designed and trained by Apple? Or is it based on an open‑source model? • Is this on‑device model available on iOS (and iPadOS), or is it limited to macOS? • When I write code in Xcode, is code completion powered by this same local model? If so, why isn’t the same model available in the left‑hand chat sidebar in Xcode (so that I can use it there instead of relying on ChatGPT)? • Can I grant this local model access to my personal data (photos, contacts, SMS, emails) so it can answer questions based on that information? If yes, what APIs, permission prompts, and privacy constraints apply? Thanks
3
0
665
Oct ’25
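On the availability question above, a small sketch for checking whether the on-device system model can be used on the current device, assuming the SystemLanguageModel availability API shown here:

```swift
import FoundationModels

// A sketch of checking on-device model availability before creating a session.
func modelStatus() -> String {
    switch SystemLanguageModel.default.availability {
    case .available:
        return "On-device model is ready to use."
    case .unavailable(let reason):
        // Reasons can include an ineligible device or Apple Intelligence being
        // turned off; the exact cases may vary between SDK releases.
        return "On-device model unavailable: \(reason)"
    }
}
```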
Apple Intelligence crashed/stopped working
Hi everyone, I’m currently using macOS Version 15.3 Beta (24D5034f), and I’m encountering an issue with Apple Intelligence. The image generation tools seem to work fine, but everything else shows a message saying that it’s “not available at this time.” I’ve tried restarting my Mac and double-checked my settings, but the problem persists. Is anyone else experiencing this issue on the beta version? Are there any fixes or settings I might be overlooking? Any help or insights would be greatly appreciated! Thanks in advance!
3
1
1.7k
Jan ’26
InferenceError with Apple Foundation Model – Context Length Exceeded on macOS 26.0 Beta
Hello Team, I'm currently working on a proof of concept using Apple's Foundation Model for a RAG-based chat system on my MacBook Pro with the M1 Max chip. Environment details: macOS: 26.0 Beta Xcode: 26.0 beta 2 (17A5241o) Target platform: iPad (as the iPhone simulator does not support Foundation models) While testing, even with very small input prompts to the LLM, I intermittently encounter the following error: InferenceError::inference-Failed::Failed to run inference: Context length of 4096 was exceeded during singleExtend. Has anyone else experienced this issue? Are there known limitations or workarounds for context length handling in this setup? Any insights would be appreciated. Thank you!
3
0
294
Jul ’25
FoundationModels coding
I am writing an app that parses text and conducts some actions. I don't want to give too much away ;) However, I am having a huge problem with token sizes. LanguageModelSession of course gives me the on-device model's 4096 available tokens, but when I go over 4096, my code doesn't seem to fall back to PCC, or even to the system-configured ChatGPT. Can anyone assist me with this? Even after reading the docs, it's very unclear to me how the transition between the three takes place.
3
0
845
Jan ’26
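As far as the framework's public surface suggests, LanguageModelSession only ever targets the on-device system model, so there is no automatic hand-off to Private Cloud Compute or ChatGPT; any overflow has to be handled by the app. A minimal sketch, assuming the exceededContextWindowSize case of LanguageModelSession.GenerationError:

```swift
import FoundationModels

// A sketch of recovering from a context-window overflow by retrying with a
// fresh session and a shorter prompt (or routing to your own backend instead).
func summarize(_ text: String) async throws -> String {
    let session = LanguageModelSession()
    do {
        return try await session.respond(to: "Summarize: \(text)").content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // The 4096-token window was exceeded; retry once with truncated input.
        let truncated = String(text.prefix(6_000)) // rough character budget, not tokens
        let fresh = LanguageModelSession()
        return try await fresh.respond(to: "Summarize: \(truncated)").content
    }
}
```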
tensorflow-metal error
I'm using python 3.9.6, tensorflow 2.20.0, tensorflow-metal 1.2.0, and when I try to run import tensorflow as tf It gives Traceback (most recent call last): File "/Users/haoduoyu/Code/demo.py", line 1, in <module> import tensorflow as tf File "/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow/__init__.py", line 438, in <module> _ll.load_library(_plugin_dir) File "/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library py_tf.TF_LoadLibrary(lib) tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/libmetal_plugin.dylib Reason: tried: '/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file) As long as I uninstall tensorflow-metal, nothing goes wrong. How can I fix this problem?
3
3
1.4k
Jan ’26
Is Image Playground On-device + Private Cloud?
Apple's Image Playground primarily performs image generation on-device, but can use secure Private Cloud Compute for more complex requests that require larger models. Private Cloud Compute (PCC) For more complex tasks that require greater computational power than the device can provide, Image Playground leverages Apple's Private Cloud Compute. This system extends the privacy and security of the device to the cloud: Secure Environment: PCC runs on Apple silicon servers and uses a secure enclave to protect data, ensuring requests are processed in a verified, secure environment. No Data Storage: Data is never stored or made accessible to Apple when using PCC; it is used only to fulfill the specific request. Independent Verification: Independent experts are able to inspect the code running on these servers to verify Apple's privacy promises.
3
0
1.1k
Dec ’25
lldb issues with Vision
Hi, I've been modifying the Camera sample app found here: https://aninterestingwebsite.com/tutorials/sample-apps/capturingphotos-camerapreview ... In the preview-image processing, I am calling into the Vision APIs to either detect a person or object, then I'm using the segmentation mask to extract the person and composite them onto a different background with some other filters. I am using Core Image to filter the CIImages, and converting and displaying the result as a SwiftUI Image. When running on my iPhone, it works fine. When running on my iPhone with the debugger, it crashes within a few seconds... Attached is a screenshot. At the top is an EXC_BAD_ACCESS in libRPAC.dylib`std::__1::__hash_table<std::__1::__hash_value_type<long, qos_info_t>, std::__1::__unordered_map_hasher<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::hash, std::__1::equal_to, true>, std::__1::__unordered_map_equal<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::equal_to, std::__1::hash, true>, std::__1::allocator<std::__1::__hash_value_type<long, qos_info_t>>>::__emplace_unique_key_args<long, std::__1::piecewise_construct_t const&, std::__1::tuple<long const&>, std::__1::tuple<>>: This was working fine a couple of days ago. Not sure why it's popping up now. Am I correct in interpreting this as an LLDB issue? How do I fix it?
3
1
173
May ’25
Siri not calling my INExtension
Things I did: created an Intents Extension target added "Supported Intents" to both my main app target and the intent extension, with "INAddTasksIntent" and "INCreateNoteIntent" created the AppIntentVocabulary in my main app target created the handlers in the code in the Intents Extension target class AddTaskIntentHandler: INExtension, INAddTasksIntentHandling { func resolveTaskTitles(for intent: INAddTasksIntent) async -> [INSpeakableStringResolutionResult] { if let taskTitles = intent.taskTitles { return taskTitles.map { INSpeakableStringResolutionResult.success(with: $0) } } else { return [INSpeakableStringResolutionResult.needsValue()] } } func handle(intent: INAddTasksIntent) async -> INAddTasksIntentResponse { // my code to handle this... let response = INAddTasksIntentResponse(code: .success, userActivity: nil) response.addedTasks = tasksCreated.map { INTask( title: INSpeakableString(spokenPhrase: $0.name), status: .notCompleted, taskType: .completable, spatialEventTrigger: nil, temporalEventTrigger: intent.temporalEventTrigger, createdDateComponents: DateHelper.localCalendar().dateComponents([.year, .month, .day, .minute, .hour], from: Date.now), modifiedDateComponents: nil, identifier: $0.id ) } return response } } class AddItemIntentHandler: INExtension, INCreateNoteIntentHandling { func resolveTitle(for intent: INCreateNoteIntent) async -> INSpeakableStringResolutionResult { if let title = intent.title { return INSpeakableStringResolutionResult.success(with: title) } else { return INSpeakableStringResolutionResult.needsValue() } } func resolveGroupName(for intent: INCreateNoteIntent) async -> INSpeakableStringResolutionResult { if let groupName = intent.groupName { return INSpeakableStringResolutionResult.success(with: groupName) } else { return INSpeakableStringResolutionResult.needsValue() } } func handle(intent: INCreateNoteIntent) async -> INCreateNoteIntentResponse { do { // my code for handling this... let response = INCreateNoteIntentResponse(code: .success, userActivity: nil) response.createdNote = INNote( title: INSpeakableString(spokenPhrase: itemName), contents: itemNote.map { [INTextNoteContent(text: $0)] } ?? [], groupName: INSpeakableString(spokenPhrase: list.name), createdDateComponents: DateHelper.localCalendar().dateComponents([.day, .month, .year, .hour, .minute], from: Date.now), modifiedDateComponents: nil, identifier: newItem.id ) return response } catch { return INCreateNoteIntentResponse(code: .failure, userActivity: nil) } } } uninstalled my app restarted my physical device and simulator Yet, when I say "Remind me to buy dog food in Index" (Index is the name of my app), as stated in the examples of INAddTasksIntent, Siri proceeds to say that a list named "Index" doesn't exist in apple Reminders app, instead of processing the request in my app. Am I missing something?
3
0
583
1w
Xcode Version 26.0.1 (17A400) Model assets are unavailable
Hello, I was trying to test out Foundation Models, however it says "Model assets are unavailable". I got my MacBook M1 back in China when I was living there. Is this due to a region lock?
3
1
1.4k
Oct ’25
Provide unique identifier for tool calls and responses
Hey, it would be great to have an equivalent of toolCallId for both toolCall and toolResult in the transcript. Otherwise, it is hard to connect tool calls with their respective responses when there are multiple parallel calls to the same tool. Thanks!
3
0
422
Jul ’25
Download toolkit link failing for Foundation Models adapter training
I attempted to download the Adapter Toolkit linked from https://aninterestingwebsite.com/apple-intelligence/foundation-models-adapter/. It failed on all attempts with a "403 Forbidden" error. I had accepted the agreement on the first attempt. How can we get access, please?
3
1
296
Jun ’25