Unity3D: how to make a follow camera observe its target from the side

Unity3D camera switching effect
var camera0 : Camera;
var camera1 : Camera;

function Update () {
    if (Input.GetKey ("1")) {
        camera1.enabled = true;
        camera0.enabled = false;
    }
    if (Input.GetKey ("2")) {
        camera1.enabled = false;
        camera0.enabled = true;
    }
}

function OnGUI () {
    GUI.Box (Rect (10,10,100,90), "Camera Switch");

    // Make the first button. If it is pressed, switch to camera 1.
    if (GUI.Button (Rect (20,40,80,20), "Camera 1")) {
        camera1.enabled = true;
        camera0.enabled = false;
    }

    // Make the second button. If it is pressed, switch to camera 2.
    if (GUI.Button (Rect (20,70,80,20), "Camera 2")) {
        camera1.enabled = false;
        camera0.enabled = true;
    }
}
。。。。。。。。。。。。。。。。。。。。。。。。。。。。。
Camera switching script
var _camera1 : Camera;
var _camera2 : Camera;

function Update ()
{
    if (Input.GetKey ("1")) // if the "1" key is pressed
    {
        _camera1.enabled = true;   // activate camera1
        _camera2.enabled = false;  // deactivate camera2
    }
    if (Input.GetKey ("0")) // if the "0" key is pressed
    {
        _camera1.enabled = false;  // deactivate camera1
        _camera2.enabled = true;   // activate camera2
    }
}
However, this script cannot restore the camera to its initial state; I'm still looking into that.
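One way to get back to the starting view (a minimal C# sketch, not the original author's solution — the class name ResetCameraPose and the R key binding are my own choices) is to cache the camera's initial position and rotation and restore them on demand:

using UnityEngine;

// Sketch: remember the camera's starting pose and snap back to it on a key press.
public class ResetCameraPose : MonoBehaviour
{
    private Vector3 startPosition;
    private Quaternion startRotation;

    void Start()
    {
        // Cache the pose the camera has when the scene starts.
        startPosition = transform.position;
        startRotation = transform.rotation;
    }

    void Update()
    {
        // Press R to restore the initial state.
        if (Input.GetKeyDown(KeyCode.R))
        {
            transform.position = startPosition;
            transform.rotation = startRotation;
        }
    }
}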
。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。
Unity3D camera parameters and using multiple cameras in one scene
Posted by uke on Sunday, 11/08/2009 - 18:35
In Unity3D the camera is an indispensable element of any scene. The camera is like the human eye: a 3D scene is ultimately presented through a camera. The camera's property panel in the Inspector contains the parameters discussed below.
As you can see, what distinguishes a camera object from an ordinary object is its Camera component. Let's look at what the parameters under this component do.
Clear Flags: this property sets how the camera's background is handled. There are four options: Skybox, Solid Color, Depth Only and Don't Clear. It would be nice if there were one more option for a fixed image, so a background picture could be set directly here.
With Skybox selected, you see the skybox configured in the scene's render settings. With Solid Color, the Background Color below is the color used. With Depth Only the camera has no background — it is as if the camera rendered a transparent image that still carries depth. I'm not sure in what situations the last option (Don't Clear) would be used.
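For reference, the same setting can be driven from a script. Below is a hedged C# sketch; the overlayCamera field is a hypothetical second camera (for example a gun or HUD camera) meant to illustrate the typical use of Depth Only:

using UnityEngine;

// Sketch: choosing Clear Flags from code.
public class ClearFlagsExample : MonoBehaviour
{
    public Camera overlayCamera;  // hypothetical second camera drawn on top

    void Start()
    {
        Camera main = Camera.main;
        main.clearFlags = CameraClearFlags.SolidColor;  // use the Background color
        main.backgroundColor = Color.black;

        if (overlayCamera != null)
        {
            // Depth Only: draw over the main camera's image, clearing only the depth buffer.
            overlayCamera.clearFlags = CameraClearFlags.Depth;
            overlayCamera.depth = main.depth + 1;  // render after the main camera
        }
    }
}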
normalized view port rect
This group of parameters splits up the screen; it can only produce rectangular viewports — you set a corner coordinate plus a width and a height. You could build a four-panel layout with it; quite a useful set of parameters.
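As an illustration, here is a small C# sketch of a two-view split screen built with this parameter; leftCamera and rightCamera are assumed to be two cameras assigned in the Inspector:

using UnityEngine;

// Sketch: split the screen into two halves with the normalized viewport rect.
public class SplitScreenSetup : MonoBehaviour
{
    public Camera leftCamera;
    public Camera rightCamera;

    void Start()
    {
        // Rect(x, y, width, height), all values normalized to 0..1.
        leftCamera.rect = new Rect(0f, 0f, 0.5f, 1f);    // left half of the screen
        rightCamera.rect = new Rect(0.5f, 0f, 0.5f, 1f); // right half of the screen
    }
}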
near clip plane
far clip plane
field of view
These three parameters directly determine the depth and breadth of the camera's view; anyone who has used a manual (real-world) camera will recognize them.
Orthographic: makes the camera orthographic, i.e. the image has no perspective. If you are going for flat, 2D-style effects, use an orthographic camera.
Orthographic Size: sets the orthographic camera's viewing size; used together with the previous parameter.
Depth: sets the rendering order when there are multiple cameras.
Culling Mask: a rather useful parameter; it sets which layers the camera renders — a kind of layered rendering — so you can decide which objects get rendered and which don't. Used together with Layers.
Target Texture: assigns a render texture to render into.
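These last few parameters can also be set from a script. A small sketch follows; the mini-map camera, the "Minimap" layer and the render texture are assumptions made purely for illustration:

using UnityEngine;

// Sketch: configure Depth, Culling Mask and Target Texture in code.
public class MiniMapCameraSetup : MonoBehaviour
{
    public Camera miniMapCamera;
    public RenderTexture miniMapTexture;

    void Start()
    {
        miniMapCamera.depth = Camera.main.depth + 1;              // rendered after the main camera
        miniMapCamera.cullingMask = LayerMask.GetMask("Minimap"); // only objects on the "Minimap" layer
        miniMapCamera.targetTexture = miniMapTexture;             // render into a texture instead of the screen
    }
}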
。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。
Unity3D custom curved paths
This script lets you set up a curve inside Unity and have an object move along the curve you define. (A source-code download link is given at the end of the tutorial.)
1. Create an empty GameObject (to hold the waypoints), create several Cubes under it as waypoints, and create another GameObject (a Cube or Sphere) as the object that will move.
2. Add the SplineController script to the moving object.
3. Select the moving object, open its Inspector, and drag the GameObject that holds the waypoints onto the slot exposed by the script added in the previous step. The system then computes the path automatically and draws it as a red line in the scene.
You can adjust the path by moving the waypoint Cubes or by adding new waypoints; the path is recomputed automatically each time, and the object you want to move will follow the path you set.
4. The fields exposed by the script are described below (a rough code sketch of the underlying idea follows this list):
Spline Root: the slot for the object from which the path curve is computed automatically.
Duration: the time one traversal takes, which effectively controls the movement speed; the default is 10 seconds.
Orientation Mode: controls the moving object's orientation; two options:
NODE: the orientation stays fixed.
TANGENT: the orientation changes dynamically to stay tangent to the curve.
Wrap Mode: the looping mode; two options:
ONCE: run once only.
LOOP: loop forever.
Auto Start: whether the curve's start point is computed automatically.
Auto Close: whether the curve's end point is computed automatically (closing the curve).
Hide On Execute: whether the waypoints remain visible while the program runs.
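The SplineController itself is not reproduced here, but to make the idea of "waypoint children defining a curve" concrete, here is a rough C# sketch of the same pattern using a Catmull-Rom curve. SimpleSplineFollow, splineRoot and duration are my own names; this is an illustration, not the tutorial's actual script:

using UnityEngine;

// Sketch: move this object along a Catmull-Rom curve defined by the children of splineRoot.
public class SimpleSplineFollow : MonoBehaviour
{
    public Transform splineRoot;  // parent object holding the waypoint cubes
    public float duration = 10f;  // seconds for one full traversal

    private Transform[] points;
    private float t;

    void Start()
    {
        points = new Transform[splineRoot.childCount];
        for (int i = 0; i < points.Length; i++)
            points[i] = splineRoot.GetChild(i);
    }

    void Update()
    {
        if (points.Length < 2)
            return;

        t = Mathf.Repeat(t + Time.deltaTime / duration, 1f);  // loop from 0 to 1

        float f = t * (points.Length - 1);
        int i = Mathf.FloorToInt(f);
        float u = f - i;

        // Four control points around the current segment, clamped at the ends.
        Vector3 p0 = points[Mathf.Max(i - 1, 0)].position;
        Vector3 p1 = points[i].position;
        Vector3 p2 = points[Mathf.Min(i + 1, points.Length - 1)].position;
        Vector3 p3 = points[Mathf.Min(i + 2, points.Length - 1)].position;

        transform.position = CatmullRom(p0, p1, p2, p3, u);
    }

    static Vector3 CatmullRom(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, float u)
    {
        // Standard Catmull-Rom basis.
        return 0.5f * ((2f * p1) +
                       (-p0 + p2) * u +
                       (2f * p0 - 5f * p1 + 4f * p2 - p3) * u * u +
                       (-p0 + 3f * p1 - 3f * p2 + p3) * u * u * u);
    }
}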
This is an example from the official community. Reposted from: .cn/s/blog_00gzvr.html
。。。。。。。。。。。。。。。。。。。。。。。。。。。。。
Camera path animation and motion recording
Here's the Blender camera path project that I mentioned. In the interest of focus (and size), I stripped out everything except the basics. I got the basic idea when I came across a writeup on making control-point-less bezier curve motion in Flash. Or rather, the control points are generated from the points already existing in the path. This way you can get nice curves straight from mesh data.
To use it, in Blender first make a straight line using the curve tool (if that isn't a contradiction...). Add -> Curve -> Bezier Curve, then press V to straighten the points, and then press Tab to leave edit mode. In the Link & Materials pane, call it "line". You only have to make one of these and you can use it with any number of 3D paths.
So, make a 3D path (Add -> Curve -> Path). In the Curve & Surface pane, click in the BevOb: field and type "line". This uses the bezier curve we made first to define the shape. In this case we just want a simple line so we can tilt the path and get surface normals from it later. Move points in the path by clicking on them and pressing G (Grab) then clicking again to place. Extend the path by selecting a point on the end and pressing E and clicking to place. Press T to tilt selected points left or right. Continue as long as you want, but if you're making a looping path, do not try to close the loop...just leave a gap between the first and last points. In the DefResolU: field, probably 1 is fine, or 2 if you want extra precision. Outside of edit mode, press option-C to convert the path to a mesh, and save it (be sure to make a copy of the pre-converted path, in case you want to go back and easily edit it later).
When you bring it into Unity, make sure that "Automatically calculate normals" is set to 180. Otherwise, if you did some funky things with the path (like 360 degree spins), Unity makes some extra points, which obviously messes up the end of the path.
When you place the path somewhere in your scene, you can position, rotate, and scale it however you want, and these transforms are accounted for properly. You can leave the mesh renderer on to visually see where the path goes exactly, and then disable the renderer when you have it positioned where you want it.
To make a camera go along the path, put the PathFollow script on it (I put it in the Camera-Control menu, but you can also use it for other objects). Drag the path from Blender onto the Path slot. Use Move Speed to control the overall speed of the camera movement, though relative speeds within the path are controlled by how far apart the points are. So if you want a completely even speed, make all the points pretty much the same distance apart from each other.
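If you would rather get that even spacing programmatically than by eyeballing distances in Blender, one possible approach (my own sketch, not part of the PathFollow script) is to resample the point list at a fixed spacing before using it:

using UnityEngine;
using System.Collections.Generic;

// Sketch: resample a polyline so consecutive points are (almost) evenly spaced.
public static class PathResampler
{
    public static List<Vector3> Resample(List<Vector3> points, float spacing)
    {
        var result = new List<Vector3> { points[0] };
        float carried = 0f;  // distance already walked toward the next sample

        for (int i = 1; i < points.Count; i++)
        {
            Vector3 a = points[i - 1];
            Vector3 b = points[i];
            float segment = Vector3.Distance(a, b);

            // Drop evenly spaced samples along this segment.
            while (carried + segment >= spacing)
            {
                float d = spacing - carried;  // distance into the remaining segment
                Vector3 sample = a + (b - a).normalized * d;
                result.Add(sample);
                a = sample;
                segment -= d;
                carried = 0f;
            }
            carried += segment;
        }
        return result;
    }
}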
Direction is either forward or backward, though backward just traverses the points in reverse order; it doesn't make the camera face backward or anything. The little arrows you see in Blender, conveniently enough, tell you what direction the path goes in.
Movement Only makes the camera follow the points through space, but leaves the rotation alone so the camera stays facing the direction it's in when it starts. Typically you'd use this with the Motion Record script (see below) to add manual rotation to all the points.
Loop...I'm sure you can figure that one out.
Startpoint starts the camera movement at a given point. So if you have a path made up of 120 points, using 60 for Startpoint makes the camera start halfway along the path.
Endpoint ends the camera movement at a given point, counting backwards from the last point. It only works in non-looping mode. If you're not looping, this should be set to at least 3, or else movement data from the first few points "bleeds" into the last few. This is how loop mode works, but you typically don't want that for non-looping.
Ease in and Ease out are for starting and ending smoothly. Intended for non-looping mode.
"Object to track" is a transform, which if used makes the camera always point at that object as it moves along the path. If this is none, then it has no effect.
Rotation Data is a string generated by Motion Record. If you leave it empty, it does nothing.
Zoom Data is also a string generated by Motion Record. If left empty, it does nothing. Otherwise it behaves like Rotation Data, but changes the zoom (field of view).
The rest of the variables you can ignore, but they need to be public for MotionRecord to access...is there a way to do hidden public variables? (Short of static, which won't work in this case since you might want to use multiple objects on paths at once).
The use of Motion Record is detailed in the next post.
OK, docs for Motion Record:
If you want to add extra rotation to camera path movement, either in combination with Movement Only or just to add some additional rotation to that which you get from the path itself, put the Motion Record script on the same camera that's using the Path Follow script. This allows you to steer the view with the mouse/keyboard as the camera goes along, and edit each point afterward if you want.
(My demo uses Movement Only with manual rotation for the helicopter--this allows the helicopter to move along the path, with the manual rotation making it behave more like a helicopter. It also uses regular mode--not Movement Only--with extra manual rotation for the chase viewpoint, in order to get a sort of "hand-held" movement which you can't get from the path alone. The car demo in that same topic uses regular mode without any extra rotation, with the Object To Track set to the car.)
You also need the three Motion Record GUI objects, which exist as a prefab. Probably easiest to make a package of this and import it into whatever project you want.
Sensitivity is how sensitive the mouse controls are. Key Sensitivity is how sensitive the keyboard controls are for motion. Invert Y Axis does just that, and File Name is the name of a text file that will be generated when you're done. This gets put in the base folder of the project, outside of Assets, because it's not used directly.
When you run the scene, you'll see "Recording" in the lower right. This means you can move the view around as you go along the path, and the rotation and/or zoom at each point is recorded. If it says Rotation/Zoom, then you are recording both. If it says Rotation, then you're using zoom data that you recorded before and put on the camera, so you're only recording rotation, but you can edit the zoom. If it says Zoom, then you've got some rotation data, so you're only recording zoom, but you can edit the rotation. If you have both rotation and zoom data, then you can only edit them. If you remove the rotation and/or zoom data from the camera, then you can record them again.
The % in the upper left is what point you're on at the moment, out of the total number of points in the path.
Press H to get a reminder of keyboard commands. (Or the help key, theoretically, though it doesn't seem to work even though Unity doesn't object to the name.) These are:
WSAD to rotate the view on the X and Y axes instead of/in addition to the mouse. ZX or <> (really . and ,) rotates on the forward axis. [ (left bracket) zooms out, ] (right bracket) zooms in. Hold down shift to make the movement go 10 times slower for fine control. Press space to reset all axes to 0 and the zoom back to the field of view you started out with (usually 60, unless you've changed it).
Press 0 (zero) to set the overall speed to none. That way you can move the view around as much as you want at that point before continuing. Press 1 to go back to 100% speed, or 2 to go 10X faster. Press + and - (really = and -) or left and right arrows to go a little bit faster or slower. It's sometimes useful to go in slow-motion...I did most of the helicopter movement in one "take" that way, with a little editing afterward. On the other hand, the viewpoint movement was done in real-time to get a sort of hand-held feel.
When you get to the end of the path, or if you press Q, then the rotation data is saved to a text file with the specified name. Open this file up and select all, then copy, then switch back to Unity and paste into the Rotation Data field on the Path Follow script. (Yep, the whole thing will fit even if you've got zillions of points.) This may seem slightly clunky, but I can't think of any other way to do this, short of just reading a data file directly, which I don't want to do because I want the data to be self-contained in the project. Any suggestions welcome.
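To make the "one big string" idea concrete, here is a hedged C# sketch of how per-point rotations could be packed into, and read back out of, a single string. The semicolon/comma format is an assumption for illustration, not the actual format Motion Record writes:

using UnityEngine;
using System.Text;
using System.Globalization;

// Sketch: encode one rotation per path point into a single text blob and decode it again.
public static class RotationDataCodec
{
    public static string Encode(Quaternion[] rotations)
    {
        var sb = new StringBuilder();
        foreach (Quaternion q in rotations)
        {
            Vector3 e = q.eulerAngles;
            sb.AppendFormat(CultureInfo.InvariantCulture, "{0:F2},{1:F2},{2:F2};", e.x, e.y, e.z);
        }
        return sb.ToString();
    }

    public static Quaternion[] Decode(string data)
    {
        string[] entries = data.Split(new[] { ';' }, System.StringSplitOptions.RemoveEmptyEntries);
        var rotations = new Quaternion[entries.Length];
        for (int i = 0; i < entries.Length; i++)
        {
            string[] parts = entries[i].Split(',');
            rotations[i] = Quaternion.Euler(
                float.Parse(parts[0], CultureInfo.InvariantCulture),
                float.Parse(parts[1], CultureInfo.InvariantCulture),
                float.Parse(parts[2], CultureInfo.InvariantCulture));
        }
        return rotations;
    }
}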
If you run the scene again with data in the Rotation Data field, you can then edit each point. "Recording" no longer shows, to be replaced by "Edit" if you're in edit mode. Controls as above, except you normally can't rotate the view, but now you can press E to enter edit mode. This allows you to change the rotation of whatever point you're at (see upper left display). E again goes back to movement mode (or you can use 1 or 2 to jump to that speed and exit edit mode at the same time). Press up or down arrows to jump to the next or previous point (and enter edit mode if it's not active already). Again, pressing Q or reaching the end of the path will save the edited data, which you can then paste into the Rotation Data field, overwriting the original data. Probably a good idea to use different names for the files so you can go back to the pre-edited version if necessary.
Keep in mind that you're just creating/editing rotations on each point in the path, so the Path Follow script is generating bezier curves on the fly. This means that playback will probably not exactly match what you did when recording. It's useful after editing some points to go backward before those points and then enter play mode to see exactly what you did. If you want finer control, make the path resolution higher in Blender (and adjust Move Speed to compensate).
And I think that's about it, unless I forgot something.... If anyone uses this, let me know if you find any bugs/problems. Also let me know if you think of some additional functionality. Or better yet, implement it yourself and post it. I can already think of an external view mode for editing the rotation of objects on the path, though it was surprisingly intuitive to do the helicopter rotation from first-person mode...also quite a bit faster than external editing would be, I think.
Original post (by Eric5h5): /viewtopic.php?t=5898&highlight=gui+motio
Unity3D study notes: camera observation modes and switching the working camera
using UnityEngine;
using System.Collections;

public class Test : MonoBehaviour
{
    private Camera mainCamera;
    private Camera camera0;

    void Start()
    {
        mainCamera = GameObject.FindGameObjectWithTag("MainCamera").GetComponent<Camera>();
        camera0 = GameObject.Find("Camera0").GetComponent<Camera>();
        mainCamera.enabled = true;   // start with the main camera active
        camera0.enabled = false;
        if (!mainCamera)
            print("Target object not found!");
        if (!camera0)
            print("Target object not found!");
    }

    void OnGUI()
    {
        if (GUILayout.Button("Perspective view", GUILayout.Height(30)))
            mainCamera.orthographic = false;
        if (GUILayout.Button("Orthographic view", GUILayout.Height(30)))
            mainCamera.orthographic = true;
        if (GUILayout.Button("View A", GUILayout.Height(30)))
        {
            mainCamera.enabled = true;
            camera0.enabled = false;
        }
        if (GUILayout.Button("View B", GUILayout.Height(30)))
        {
            mainCamera.enabled = false;
            camera0.enabled = true;
        }
    }
}
A VR mobile-game development tutorial for beginners, part 4 (supplement): a detailed look at the camera's projection matrix in Unity
This post is a supplement to the previous one and focuses on the projection matrix in Unity. The previous post, an overview of the Camera in Unity, gave a brief description of every property; if you haven't read it, the link is below.

1. Clip planes

Let's start with this technical term. Here is how the manual introduces clip planes:

The Near and Far Clip Plane properties determine where the Camera's view begins and ends. The planes are laid out perpendicular to the Camera's direction and are measured from its position. The Near plane is the closest location that will be rendered, and the Far plane is the furthest.

On its own that description doesn't say much, because we have never actually seen a clip plane, so the first step in understanding them is to look at them directly in Unity.

When you inspect a Camera up close in the Scene view, you will notice a pyramid drawn in white lines. That is easy to understand: it represents the camera's field of view. The odd thing is that the pyramid is missing its tip, so it has not only a base but also a top face. As you can probably guess, the top face is the near clip plane, and the base is the far clip plane.

So what are clip planes actually for? Keep changing the camera's Clipping Planes values and watch what happens. First set near to 2 and far to 6. In the camera preview no object lies between the near and far clip planes, so nothing appears in the rendered image. Increase far and a thin slice of the cube is now included, but in the preview you cannot tell it is a cube — it looks like a flat surface. Keep increasing far until the whole cube is included, and the preview finally shows an obvious cube.

This experiment shows that the camera only renders what lies between the near and far clip planes. The principle is like rendering a 2D picture: the picture has to be cropped first so the program knows what region to render, whether we need all of it or only part of it — the first step is always clipping. 3D rendering needs clipping too, which is why the near and far clip planes exist; the only difference is that the clipped region is not flat but a volume (the pyramid with its tip cut off). That volume is called the view volume.

2. The view frustum

The view volume is properly called the view frustum. It is bounded by six planes: top and bottom, left and right, near and far. First, the manual's description:

The outer edges of the image are defined by the diverging lines that correspond to the corners of the image. If those lines were traced backwards towards the camera, they would all eventually converge at a single point. In Unity, this point is located exactly at the camera's transform position and is known as the centre of perspective. The angle subtended by the lines converging from the top and bottom centres of the screen at the centre of perspective is called the field of view (often abbreviated to FOV).

As stated above, anything that falls outside the diverging lines at the edges of the image will not be visible to the camera, but there are also two other restrictions on what it will render. The near and far clipping planes are parallel to the camera's XY plane and each set at a certain distance along its centre line. Anything closer to the camera than the near clipping plane and anything farther away than the far clipping plane will not be rendered.

We have already covered the near and far clip planes — the front and back faces — so how are the top, bottom, left and right faces defined? The quote above already touches on it: the FOV. Back in Unity, change the Field of View value and watch the effect. At 30 the frustum contracts and the cube is no longer inside it; change the FOV to 60 and the frustum visibly expands, and the cube shows up in the preview again.

From this, the FOV value appears to determine the angle of the frustum's side faces, and it is expressed in degrees — 60 here means 60 degrees. Now all the parameters of a frustum are pinned down: given the camera's position, the near and far clip plane distances and the FOV define a unique frustum.

So how is the frustum actually used? OK, on to the main topic: the projection transform.

3. The projection transform

Unity's Camera supports two kinds of projection: perspective and orthographic. Briefly, the difference is that an orthographic camera's view volume is a box — a regular solid — and it projects 3D objects onto the projection plane with a set of parallel projectors. Anyone who knows Unity reasonably well knows what orthographic projection does: distance from the camera does not affect an object's size, so two objects of the same real size, one 10 m away and one 1000 m away, appear the same size on screen. That is clearly not what we want most of the time: 3D games simulate real life, and in real life distant objects look smaller, while even a phone looks enormous when held right in front of your eyes. So orthographic projection has very limited use in 3D games.

Perspective projection, on the other hand, is the technique used throughout 3D games. It projects 3D objects onto the projection plane along a bundle of rays radiating from the center of projection. Its view volume is exactly the frustum discussed above, and it scales objects according to their distance. What we need now is to turn everything contained in the frustum into an image, and the transform that does this is the projection transform.

Why transform at all? Because the frustum is not a regular solid, which makes clipping awkward. Going from 3D objects to the final image usually takes three steps:
1. Use the perspective transform matrix to map vertices from the frustum into the canonical view volume (CVV) in clip space.
2. Clip against the CVV.
3. Screen mapping: map the coordinates produced by the previous steps into screen space.
(The original article illustrated this process with a diagram borrowed from elsewhere; it is not reproduced here.) In the transformation from frustum to cube, the near clip plane is enlarged to become the cube's front face and the far clip plane is shrunk to become its back face. That explains why perspective projection expresses near and far so intuitively: quite simply, it enlarges nearby objects and shrinks distant ones.

How is the transform performed? Think of it as multiplying a point (x, y, z, 1) inside the frustum by a certain matrix; the resulting point (x1, y1, z1, 1) is the corresponding point in the CVV. Multiply every point of the frustum by that matrix and you obtain the CVV. That matrix is the perspective projection matrix. I will just show its value here; if you want the full derivation, follow the link in the original article. In terms of the near-plane boundaries left (l), right (r), bottom (b), top (t) and the near/far distances n and f, the standard matrix (in the OpenGL convention that Unity's projectionMatrix follows) is:

| 2n/(r-l)     0        (r+l)/(r-l)      0        |
|    0      2n/(t-b)    (t+b)/(t-b)      0        |
|    0         0       -(f+n)/(f-n)  -2fn/(f-n)   |
|    0         0            -1           0        |

For a frustum there are generally two ways to take a cross-section, but the yz plane is normally used when computing the parameters. The values involved are FOV, near, far, bottom, top, right and left, where bottom, top, right and left are the boundary values of the projection plane — and the projection plane is the near clip plane.

4. Modifying the projection matrix to build a non-standard projection

Back to Unity. Unity's documentation on the camera projection matrix is extremely sparse; about the only thing available is the scattered information in the Camera.projectionMatrix API reference. But at least we can print the projection matrix and look at it:

print(Camera.main.projectionMatrix); // prints the main camera's projection matrix

The output shows the projection matrix values. With FOV = 60, near = 0.3 and far = 1000, working through the formula gives a match everywhere except the first element. Why is the first value 1.08878? After some experimenting I found a Unity quirk: no matter how you change the window's aspect ratio, m11 never changes — it stays fixed at 1.73205 — but it does change as soon as you change the FOV. So Unity must define FOV as the angle between the top and bottom edges of the projection plane, i.e. top = near * tan(FOV/2); right cannot be computed as right = near * tan(FOV/2), but instead as right = top * aspect. Adjusting the screen's aspect ratio therefore changes the value of m00 and nothing else.

Let's write a script that changes a value in the projection matrix and see the effect:

using UnityEngine;
using System.Collections;

public class example : MonoBehaviour {
    public Matrix4x4 originalProjection;

    void Update() {
        // Change some values of the original matrix
        Matrix4x4 p = originalProjection;
        p.m01 = 0.5f;
        Camera.main.projectionMatrix = p;
    }

    public void Awake() {
        originalProjection = Camera.main.projectionMatrix;
        print(Camera.main.projectionMatrix);
    }
}

Here m01 is set to 0.5, and the near and far clip planes become parallelograms; the rendered image is skewed accordingly. I won't demonstrate the other elements — changing them produces corresponding effects. The reason is simple and follows from the matrix above: we changed m01 from 0 to 0.5, which affects the transformed x coordinate. Originally x did not depend on y at all; now x grows as y grows. It is as if every pixel of the original square image, whenever its y is not 0, is shifted to the right by 0.5·y, which is exactly what makes the result look like a parallelogram. I won't derive the other elements one by one; VR rendering has to deal with this when it sets up its projections, so hopefully the projection matrix will feel less mysterious the next time you meet it.

Finally, here is a snippet found in the community-translated scripting reference that makes the image wobble like water, again by modifying the projection matrix. Copy it and attach it to the main camera and it just works — it is hard to believe that such a powerful effect takes only a few lines of code.

using UnityEngine;
using System.Collections;

// Make the camera wobble in a wave-like way
public class example : MonoBehaviour {
    public Matrix4x4 originalProjection;

    void Update() {
        // Change some values of the original matrix
        Matrix4x4 p = originalProjection;
        p.m01 += Mathf.Sin(Time.time * 1.2F) * 0.1F;
        p.m10 += Mathf.Sin(Time.time * 1.5F) * 0.1F;
        Camera.main.projectionMatrix = p;
    }

    public void Awake() {
        originalProjection = Camera.main.projectionMatrix;
    }
}

Later posts will return to the VR theme and continue explaining the details inside the scripts.
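As a sanity check on the m00/m11 reasoning above, the matrix can be rebuilt by hand and compared against what Unity reports. A small C# sketch, using Unity's built-in Matrix4x4.Perspective helper:

using UnityEngine;

// Sketch: verify that m11 = 1 / tan(FOV/2) and m00 = m11 / aspect.
public class ProjectionMatrixCheck : MonoBehaviour
{
    void Start()
    {
        Camera cam = Camera.main;

        float m11 = 1f / Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad);
        float m00 = m11 / cam.aspect;
        Debug.Log("computed m00 = " + m00 + ", m11 = " + m11);

        // Unity can also build the same matrix for us.
        Matrix4x4 manual = Matrix4x4.Perspective(cam.fieldOfView, cam.aspect,
                                                 cam.nearClipPlane, cam.farClipPlane);
        Debug.Log("Matrix4x4.Perspective:\n" + manual);
        Debug.Log("camera.projectionMatrix:\n" + cam.projectionMatrix);
    }
}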