CPC H04N 21/44218 (2013.01) [A61B 5/0205 (2013.01); A61B 5/165 (2013.01); G06F 3/011 (2013.01); G06F 3/012 (2013.01); G06F 3/14 (2013.01); G06N 5/04 (2013.01); G06Q 40/06 (2013.01); G06V 40/174 (2022.01); G10L 25/63 (2013.01); A61B 5/0077 (2013.01); A61B 5/021 (2013.01); A61B 5/024 (2013.01); A61B 5/0533 (2013.01); A61B 5/0816 (2013.01); A61B 5/369 (2021.01); A61B 7/04 (2013.01); A61B 2503/12 (2013.01); G06F 3/015 (2013.01); G06F 3/016 (2013.01); G06F 2203/011 (2013.01); G06N 3/126 (2013.01)]
20 Claims
1. A method, comprising:
generating a first portion of a video, wherein the first portion depicts an initial object corresponding to an initial goal, wherein the first portion of the video is displayed on an output circuit of a user device;
generating a second portion of the video based on emotional response data captured by an emotion-tracking device of the user device, wherein the second portion depicts an updated object corresponding to an updated goal, the updated goal being updated from the initial goal, and the updated object being different from the initial object, wherein the emotional response data is captured by the emotion-tracking device while a user is viewing the first portion using the output circuit, and wherein the emotional response data indicates that the user reacted negatively to the initial object corresponding to the initial goal;
determining a cutoff point of the first portion; and
stitching the second portion of the video to the first portion of the video at the cutoff point, wherein the output circuit of the user device continuously plays the video from the first portion to the second portion.
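For illustration only, the following Python sketch approximates the flow recited in claim 1 under stated assumptions. Every name in it (Goal, VideoPortion, EmotionSample, generate_portion, choose_cutoff, stitch, the valence threshold, and the example goals) is a hypothetical placeholder introduced for this sketch; none is part of the patent, the claimed system, or any real library.

```python
# Hypothetical sketch of the claimed method; all names and thresholds are
# illustrative assumptions, not the patented implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class Goal:
    name: str            # e.g. an initial or updated financial goal
    target_amount: float


@dataclass
class VideoPortion:
    frames: List[str]    # stand-in for rendered frames
    depicted_goal: Goal


@dataclass
class EmotionSample:
    timestamp_s: float
    valence: float       # negative values stand in for a negative reaction


def generate_portion(goal: Goal, n_frames: int = 90) -> VideoPortion:
    """Render a video portion depicting an object corresponding to the goal."""
    return VideoPortion([f"frame:{goal.name}:{i}" for i in range(n_frames)], goal)


def user_reacted_negatively(samples: List[EmotionSample]) -> bool:
    """Treat a negative mean valence over the viewing window as a negative reaction."""
    return bool(samples) and sum(s.valence for s in samples) / len(samples) < 0


def choose_cutoff(portion: VideoPortion, samples: List[EmotionSample], fps: int = 30) -> int:
    """Pick the cutoff frame nearest the first strongly negative sample."""
    for s in samples:
        if s.valence < -0.5:
            return min(int(s.timestamp_s * fps), len(portion.frames))
    return len(portion.frames)


def stitch(first: VideoPortion, second: VideoPortion, cutoff: int) -> List[str]:
    """Join the second portion to the first at the cutoff for continuous playback."""
    return first.frames[:cutoff] + second.frames


if __name__ == "__main__":
    initial_goal = Goal("beach house", 250_000.0)
    first = generate_portion(initial_goal)

    # Emotion-tracking samples captured while the user views the first portion.
    samples = [EmotionSample(0.5, -0.2), EmotionSample(1.0, -0.7), EmotionSample(1.5, -0.6)]

    if user_reacted_negatively(samples):
        updated_goal = Goal("retirement fund", 150_000.0)   # goal updated from the initial goal
        second = generate_portion(updated_goal)
        video = stitch(first, second, choose_cutoff(first, samples))
    else:
        video = first.frames

    print(f"Playing {len(video)} frames continuously")
```

In this sketch the cutoff is derived from the first strongly negative emotion sample and the second portion is simply concatenated at that frame, which mirrors the claim's "continuously plays the video from the first portion to the second portion" only at the level of a toy frame list.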